10,000 Matching Annotations
  1. Jan 2026
    1. R0:

      Reviewer #1: This manuscript addresses antimicrobial resistance in Ecuador through a One Health lens, focusing on governance, infrastructure, and equity. The topic is highly relevant to PLOS Global Public Health, particularly given the emphasis on health systems, intersectoral governance, and equity in low- and middle-income country contexts. The study makes a valuable contribution to regional and global discussions on AMR governance. Some points need to be addressed:

      1. While the conclusions are generally consistent with the qualitative findings, some claims, particularly those related to macro-level political shifts, austerity policies, and governance deterioration, would benefit from clearer and more explicit linkage to the empirical data presented. In several instances, the discussion takes a normative or interpretive tone that appears to draw as much from secondary literature as from the study's primary data. Stronger signposting between interview findings, document analysis, and specific conclusions would improve analytical clarity.

      2. The manuscript would benefit from more explicit clarification that the study is a qualitative governance and policy analysis rather than an epidemiological assessment of antimicrobial resistance trends. Readers may otherwise expect microbiological or quantitative AMR indicators, which are outside the scope of this work but not always clearly distinguished in the framing.

      3. The Data Availability Statement indicates that all relevant data are included within the manuscript and that additional information is available upon reasonable request. However, this does not fully meet PLOS data policy requirements. The primary qualitative data underlying the findings, such as anonymized interview transcripts, coded data excerpts, or NVivo codebooks, are not publicly available as supplementary files or deposited in a repository. If ethical or confidentiality constraints prevent public sharing of these materials, the restrictions should be clearly specified in the Data Availability Statement. Alternatively, the authors are encouraged to share de-identified qualitative data, coding frameworks, or analytic matrices as Supporting Information to enhance transparency and reproducibility.

      Minor suggestions:

      a) Consider minor language and stylistic revisions throughout the manuscript to improve clarity and flow, particularly in the Introduction and Discussion sections.

      b) Ensure consistent terminology when referring to governance structures, committees, and surveillance systems.

      c) Some tables (e.g., interview results) could benefit from brief interpretive summaries to guide readers unfamiliar with the Ecuadorian institutional context.

      The equity analysis is a strong component of the manuscript; however, explicitly distinguishing between findings derived from interview data versus document analysis would further strengthen this section.

      Reviewer #2: Overview: This study examines national approaches to addressing antimicrobial resistance (AMR) in Ecuador from a One Health (OH) perspective, with emphasis on governance, public policy, health infrastructure, and equity. The authors use a qualitative design combining document review, scientific literature analysis, and semi-structured interviews with key informants representing multiple OH sectors. The manuscript offers a useful overview of the challenges Ecuador faces in implementing an OH approach to AMR prevention. However, many of the broader claims are not sufficiently supported by the evidence currently presented. In particular, findings from the document analysis, the central component of the study, are not reported in a clear or substantive way, making it difficult to assess how the conclusions were derived. Strengthening the presentation of document-analysis results, clarifying how these findings were integrated with interview data, and improving the organization and flow of the manuscript would substantially increase its rigor and impact. With these revisions, the paper has the potential to become a valuable contribution to the literature on AMR and One Health in Ecuador.

      Major revisions

      • The Introduction would benefit from a brief description of Ecuador's National Plan for the Prevention and Control of AMR (2019–2023), including its overarching goals, structure, key components/strategic axes, and intended governance/implementation approach. This context is necessary for readers to understand what was constrained in implementation and to interpret the claims made in the Discussion and Conclusions.

      • The Methods section needs substantial revision to clearly describe how the qualitative research was conducted and analyzed. I recommend aligning the reporting with SRQR (Standards for Reporting Qualitative Research) and citing: O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Academic Medicine. 2014;89(9):1245–1251. Please consider including an SRQR checklist as Supplementary Information to improve the transparency and reproducibility of the qualitative analysis.

      • The manuscript currently provides limited explicit reporting of findings from the document analysis, despite this being a central component of the study. Please present clearer, more detailed results from the document analysis (e.g., what patterns/themes emerged, with concrete examples), and explain how these findings were integrated with (or triangulated against) the semi-structured interview data.

      • As written, the Results and Discussion sections are difficult to follow. Consider restructuring the manuscript around the four analytical themes/framework domains used in the study: 1. Intersectoral governance analysis; 2. Situational analysis; 3. Transitions toward One Health; 4. Equity analysis using a GBA+ lens. Using these as consistent subheadings throughout would strengthen coherence and readability.

      • The Discussion does not yet fully unpack what the findings mean, nor does it adequately situate them in relation to experiences from other countries (Latin America, LMIC settings, and high-income settings where implementation has been more effective or similarly constrained). Additionally, the manuscript states that it proposes a "context-specific action framework," but this framework is not clearly presented or easy to locate. If this is a key contribution ("So what? What now?"), please make it explicit.

      • Several conclusions currently extend beyond what is clearly supported by the Results section. Please ensure the Conclusions are tightly grounded in the reported evidence (from both document analysis and interviews), or revise/soften claims where direct supporting data are not presented.
      Minor revisions

      Introduction

      • Line 46: Please briefly define selective pressure and explain how it contributes to the emergence and spread of antimicrobial-resistant microorganisms.

      • Line 59: Before discussing constraints, it would help to briefly describe the Ecuadorian National Plan for the Prevention and Control of AMR (2019–2023), for example its overarching goals, structure, key components/strategic axes, and intended governance/implementation approach. This context will help readers understand what specific aspects were constrained.

      • Line 72: Grammar: "reduced" instead of "reduces."

      • Lines 75–78: These statements read as interpretive claims; please clarify whether they are based on cited literature or derived from your data. If they are claims about broader context, references are needed.

      • Line 75: Consider starting a new paragraph around here to introduce the National Plan/Committee context more clearly before transitioning into limitations.

      Methods

      • Line 106: Please briefly define semi-structured interviews and include a reference for the approach.

      • Lines 106–108: The study objective is already stated earlier; consider removing the repeated objective language here to streamline the Methods.

      • Recommend adding a clearly labeled Ethics subsection (IRB approval/waiver, consent procedures, confidentiality protections).

      • Lines 150–151, 157–158, 160–161: These appear related and could be consolidated into one coherent paragraph to improve flow.

      • Table 1: Please provide more detail on the "affiliated agencies"/"agencies" included. For example, within the Ministry of Health, does this include INSPI or other specific bodies? Consider organizing the table using headings aligned with your interview sampling frame (e.g., human health, animal health, environment, academia, civil society) to match the manuscript text.

      • Line 173: Please define the acronym GBA+ at first use.

      • The Methods section would benefit from clearer subsections. Suggested structure: 1. Study design and setting; 2. Sampling and participants (sampling strategy, eligibility criteria, recruitment, number approached/interviewed, and how you determined sampling adequacy/saturation); 3. Data sources and data collection (document analysis: document types, inclusion criteria, extraction approach; interviews: interview guide development, interviewer training/positionality if relevant, interview mode, audio recording, transcription/translation, any iterative changes to guides); 4. Data management (storage/security, de-identification/anonymization, coding workflow); 5. Data analysis (analytic approach for documents and interviews; how themes were developed; triangulation across methods; reflexivity/rigor strategies such as audit trail, double coding, or member checking if used).

      • Consider including (as Supplementary Information) an SRQR checklist to improve transparency.

      Results

      • The Results section would be clearer if organized explicitly around your four analytical themes/framework domains: 1. Intersectoral governance analysis; 2. Situational analysis; 3. Transitions toward One Health; 4. Equity analysis using a GBA+ lens. Consider using these as subheadings and presenting findings under each.

      • There is currently little explicit reporting of what was found from the document analysis. Please include concrete results from that component (e.g., what patterns/gaps were identified, with specific examples).

      • Lines 207–212: This reads like interpretation more appropriate for the Discussion (and would likely need supporting references if it is a broader claim). Consider moving it.

      • Lines 223–226: These statements also appear interpretive and would fit better in the Discussion.

      • Line 228: "Barriers and facilitators" are introduced here but not clearly set up earlier. If identifying barriers/facilitators is a central objective, please introduce it in the Introduction/Aims and ensure consistent framing throughout.

      • Lines 234–241 and 243–249: These sections read like discussion/interpretation rather than results. Consider revising to focus on what participants/documents explicitly reported (with evidence) and move broader implications to the Discussion.

      • Consider adding a small number of representative verbatim quotes from the semi-structured interviews to support each major theme. Including 1–2 quotes per theme (with anonymized participant identifiers/roles) would strengthen credibility and transparency and is standard for reporting semi-structured interview findings. If space is limited, quotes can be placed in a table or supplement.

      Discussion

      • Consider organizing the Discussion using the same four analytical themes as the Results to improve coherence and readability.

      • The Discussion would benefit from deeper comparison with related work from Latin America and other LMIC settings, as well as contrasting with experiences in high-income settings where national AMR plans may have been implemented more effectively. This would strengthen interpretation and generalizability.

      • The Introduction indicates that a "context-specific action framework" is proposed; however, this is not easy to locate in the current manuscript. Please clearly identify where the framework is presented (potentially Lines 320–327?) and consider adding a figure/table or a clearly labeled subsection so readers can easily find and understand it.

      Conclusion

      • Overall, the conclusions are plausible, but some claims appear stronger than what is currently supported by the Results section, especially without clearly presented document-analysis findings.

      • For example, the statement about deterioration in governance capacities, information system interoperability, laboratory infrastructure, and budget allocations would be strengthened by explicit evidence from the document analysis and/or interviews. If budget shifts were assessed, please report what sources were used and what changes were observed; if not directly assessed, consider softening the language or clarifying that it reflects stakeholder perceptions rather than documented budgetary evidence.

    1. Reviewer #1 (Public review):

      I read this paper with great interest based on my experience in insect sciences. I have some minor comments (and recommendations) that I believe the authors should address.

      (1) The paper has an original biological question that is overly broad and mechanistically ambitious. The central biological question, namely how CLas infection enhances fecundity of Diaphorina citri via dopamine signaling, is clearly stated and well motivated by previous literature. However, my advice to the authors is that, while the general question is clear, the manuscript attempts to answer multiple mechanistic layers simultaneously. As a result, I feel that the biological narrative becomes diffuse, especially in later sections where DA, miRNA regulation, AKH signaling, and JH signaling are all proposed as parts of a single linear cascade. In summary, my key concern is that the paper often moves from correlation to causal hierarchy without fully disentangling whether these pathways act sequentially, in parallel, or redundantly. A more explicitly framed primary hypothesis (e.g., "DA-DcDop2 is necessary and sufficient for CLas-induced fecundity") may improve conceptual clarity.

      (2) On the novelty of the data, I feel the findings are moderately novel, with substantial confirmatory components. If I am correct, the novel contributions include (i) the identification of DcDop2 as the DA receptor responsive to CLas infection in D. citri, (ii) the discovery that miR-31a directly targets DcDop2, supported by luciferase assays and RIP, and (iii) the integration of dopamine signaling into the already-described CLas-AKH-JH-fecundity framework. My advice to the authors is to emphasize that the manuscript's novelty lies more in pathway integration than in discovering fundamentally new biological phenomena. This is appropriate for a mechanistic paper, but it should be framed as an extension of existing models rather than a paradigm shift.

      (3) On the conclusions, I recommend that the authors temper some of their statements. A few claims are overstated or insufficiently supported. First, the assertion that CLas "hijacks" the DA-DcDop2-miR-31a-AKH-JH cascade implies direct pathogen manipulation, yet no CLas-derived effector or mechanism is identified. Second, the model suggests a linear signaling hierarchy, but the data largely show correlation and partial dependency rather than strict epistasis. Third, the term "mutualistic interaction" may be too strong, as host fitness costs outside fecundity (e.g., longevity, immunity) are not evaluated. In conclusion, I confirm that the data support a functional association, but mechanistic causality and evolutionary interpretation are somewhat overstated.

    2. Reviewer #2 (Public review):

      Summary:

      Nian and colleagues comprehensively apply metabolomics, molecular, and genetic approaches to demonstrate that CLas hijacks the DA/DcDop2-miR-31a-AKH-JH signaling cascade to enhance lipid metabolism and fecundity in D. citri, while concurrently promoting its own replication.

      Strengths:

      These findings provide solid evidence of a mutualistic interaction between CLas proliferation and ovarian development in the insect host. This insight significantly advances our understanding of the molecular interplay between plant pathogens and vector insects, and offers novel targets and strategies for HLB field management.

      Weaknesses:

      While the article investigates the involvement of dopamine signaling and specific microRNAs in enhancing fecundity and pathogen proliferation, it still needs to provide a detailed mechanistic understanding of these interactions. The precise molecular pathways and feedback mechanisms by which CLas manipulates dopamine signaling in Diaphorina citri remain unclear.

    1. mutualidad ("mutuality")

      I really like the notion of mutuality. It expresses that "being with others" better than the word collaboration, in which the question of the task and its end is intrinsic.

    2. for an internet that helps us invent ways of meeting one another across our differences,

      This part, together with the description of compost as a place of multispecies encounter, speaks to me. I believe gardens help us connect through other forms of relating, through the interests and ideas of the other person or persons. This dialogue from divergence lets us escape the automation of conversation, the stock phrases and the formulaic constructions of AI.

    3. new ways of computing the world.

      I highlight this only to point out the possibilities of one of those wordplays we enjoy so much, and to think of digital gardens as a process that moves from the computer (computadora) to the composter (compostadora), understanding computation as something more strictly quantitative, while the composter would contribute qualitative nuances to the digital ecosystem.

    4. compost is the way we would see our garden if we could go beyond anthropocentrism

      This seems to me an essential point for the argument we want to make.

    5. Compost, with its care for the soil, is what teaches us to do alchemy with the digital, with its ontological binarism, and to make new ways of computing the world emerge from there.

      Caring for the digital through neglect: what emerges outside of deliberate, functionalist observation.

    6. Compost is, in this sense, a figure of earth-bound symbiosis that, precisely because it belongs to the divergent, is also linked to the notion of technodiversity.

      What a great reference. It strikes me as a perfect candidate metaphor for positive #cibernetica.

    7. (but that is, for now, another topic)

      It will be lovely, when the post about the importance of compost in Asturias gets written, to see that mention become a link to the new entry. Looking forward to reading it.

    1. Quedé y olvidéme, el rostro recliné sobre el Amado; cesó todo, y dejéme, dejando mi cuidado entre las azucenas olvidado. ("I remained and forgot myself, my face reclined upon the Beloved; everything ceased, and I let myself go, leaving my care forgotten among the lilies.")

      How powerful. For a moment the rules of life do not exist, only this love.

    2. mi casa ("my house")

      What does "her house" mean? The prison, or perhaps the poem is not about her at all and is entirely fictional? It could also be her imagining something she longs for but that cannot really exist in her present life.

    3. oscura ("dark")

      Hypothesis: the things of the night lie outside the norm. It reminds me of one of Lorca's works in which thoughts and desires are illuminated at night. "Noche oscura del alma" reads like a poem that opens one's eyes to the problems of the day through the actions of the night.

    4. prisión o quizás un poco después de salir ("prison, or perhaps a little after release")

      I don't want to be too rigid, but I would say there is a big difference between what was written during his time in prison and afterwards, because the emotion is rawer in the moment. That said, with everything from that era, information travels slowly and we will almost never have specific data. Given this, it is interesting to consider whether there really is a difference in his writing during and after prison.

    1. Moreover, an overvalued image of performance is created, which undermines the reliability of grades as an indicator of competencies (Schorr, 2025). Thus, obtaining good grades regardless of the characteristics of the course discourages class attendance, since it can be considered a dispensable activity for passing courses. In this way, a sense of unfairness in grading arises between students who are committed to their academic duties and those who do the bare minimum, since grades do not distinguish between them. In fact, because the grading standard is high, a student who does not reach a grade in line with that standard experiences stress, since any grade below this threshold is considered a failure (Schorr, 2025).

      I would revise the wording of this paragraph. One or two ideas seem to be repeated several times and could be presented more concisely.

    2. to the point that university students may skip the regular course assessments and only show up for the final exam, or that mandatory attendance loses its binding character.

      In general, I think this paragraph needs citations.

      Regarding this sentence in particular: is there any evaluation from Pregrado on the implementation of the curricular adjustments?

    3. Las causas sobre el fenómeno de las calificaciones universitarias pueden encontrar diversas razones ("The causes of the phenomenon of university grades may find diverse reasons")

      The wording is a bit awkward. I would put something like "Los factores asociados a las calificaciones universitarias..." ("The factors associated with university grades...").

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      The authors use methylphenidate (MPH) administration after learning a Pavlovian to instrumental transfer (PIT) task to parse decision making from instrumental influences. While the main effects were null, individual differences in working memory ability moderated the tendency of MPH to boost cognitive control in order to override PIT-biased instrumental learning. Importantly, this working memory moderator had symmetrical effects in appetitive and aversive conditions, and these patterns replicated within each valence condition across different values of gain/loss (Fig S1c), suggesting a reliable effect that generalizes across instances of Pavlovian influence.

      Strengths:

      The idea of using pharmacological challenge after learning but prior to transfer is a novel technique that highlights the influence of catecholamines on the expression of learning under Pavlovian bias, and importantly it dissociated this decision feature from the learning of stimulus-outcome or action-outcome pairings.

      We thank the reviewer for highlighting the timing of the pharmacological intervention as a strength for this study and for the suggested improvements for clarification.

      Weaknesses:

      While the report is largely straightforward and clearly written, some aspects may be edited to improve the clarity for other readers.

      (1) Theoretical clarity. The authors seem to hedge their bets when it comes to placing these findings within a broader theoretical framework.

      Our findings call for a revision of theories on how catecholamines are involved in the instantiation of Pavlovian biases in decision making. The reviewer rightly notices that we offer three routes to modify current theory to be able to incorporate our findings. Briefly, these routes discuss catecholaminergic modulation of Pavlovian biases (i) through modulation of the putative striatal 'origin' of Pavlovian biases, (ii) through top-down control, primarily relying on prefrontal processes, and (iii) a combination of the two, where catecholamines regulate the balance between these striatal and frontal processes.

      Given the systemic nature of the pharmacological manipulation, we cannot dissociate between these three accounts. We believe that discussing these possible explanations enriches our Discussion and strengthens our recommendation in the final paragraph to use pharmacological neuroimaging studies to arbitrate between these options. In the revision, we have made this line of reasoning clearer, in part by adding guiding titles to the Discussion section and adding a summary paragraph in the Discussion (Discussion, pages 9-12).

      (2) Analytic clarity: what's c^2?

      C^2 is a technical PDF conversion error: all chi-squares (χ2) were converted to C2. This has been corrected in our revision.

      Reviewer #2 (Public review):

      Summary:

      In this study, Geurts et al. investigated the effects of the catecholamine reuptake inhibitor methylphenidate (MPH) on value-based decision making using a combination of aversive and appetitive Pavlovian to Instrumental Transfer (PIT) in a human cohort. Using an elegant behavioural design, they showed valence- and action-specific effects of Pavlovian cues on instrumental responses. Initial analyses show no effect of MPH on these processes. However, the authors performed a more in-depth analysis and demonstrated that MPH actually modulates PIT in an action-specific manner depending on individual working memory capacities. The authors interpret this as an effect on cognitive control of Pavlovian biasing of actions and decision making rather than an invigoration of motivational biases.

      Strengths:

      A major strength of this study is its experimental design. The elegant combination of appetitive and aversive Pavlovian learning with approach/avoidance instrumental actions allows precise investigation of the different modulations of value-based decision making depending on the context and environmental stimuli. Importantly, MPH is administered only after Pavlovian and instrumental learning, restricting its effect to PIT performance. Finally, the use of a placebo-controlled crossover design allows within-subject comparisons between the PIT effect under placebo and MPH and the investigation of the relationships between working memory abilities, PIT, and MPH effects.

      We thank the reviewer for highlighting the experimental design as a strength for this study and the suggested improvements for clarification.

      Weaknesses:

      As authors stated in their discussion, this study is purely correlational and their conclusions could be strengthened by the addition of interesting (but time- and resource-consuming) neuroimaging work.

      We employ a pharmacological intervention within a randomized placebo controlled cross-over design, which allows for causal inferences with respect to the placebo-controlled intervention. Thus, the reported interactions of interest include correlations, but these are causally dependent on our intervention.

      Perhaps the reviewer refers to the implications of our findings for hypotheses regarding the neural implementation of Pavlovian bias generation. Indeed, based on our data we are not able to arbitrate between frontal and striatal accounts, due to the systemic nature of the pharmacological intervention. Thus, we agree with the reviewer that neuroimaging (in combination with, for example, brain stimulation) would be a valuable next step to identify the neural correlates of these pharmacological intervention effects and to dissociate between the frontal and striatal bases of the effects. In the revision, as per our reply to Reviewer 1, we have made this line of reasoning clearer, in part by adding guiding titles to the Discussion section and adding a summary paragraph in the Discussion (Discussion, pages 9-12).

      The originality of this work compared to their previous published work using the same cohort could also be clarified at different stages of the article, as I initially wondered what was really novel. This point is much clearer in the discussion section.

      As recommended, we brought forward parts of the Discussion that clarify the originality of the current experiment to the Introduction (pages 4-5) and Results section (page 8).

      A point which, in my opinion, really requires clarification is when the working memory performance presented in Figure 2B has been determined. Was it under placebo (as I would guess) or under MPH? If it is the former, it would be also interesting to look at how MPH modulates working memory based on initial abilities.

      We have now clarified that working memory span was assessed for all participants on Day 2, prior to the start of instrumental training (as illustrated in Figure 1A). Importantly, this was done prior to ingestion of the drug or placebo (which subjects received after Pavlovian training, which itself followed the instrumental training). This design also precludes an assessment of the effects of MPH on working memory capacity.

      A final point is that it could be interesting to also discuss these results, not only regarding dopamine signalling, but also including potential effect of MPH on noradrenaline in frontal regions, considering the known role of this system in modulating behavioural flexibility.

      We indeed focus our Discussion more on dopamine than on noradrenaline. Our revision now also discusses noradrenaline in light of our frontal control hypothesis, and recommends that future studies use a multi-drug design incorporating, for example, a session with the drug atomoxetine, which modulates cortical catecholamines but not striatal dopamine (Discussion, page 12).

      Reviewer #3 (Public review):

      The manuscript by Geurts and colleagues studies the effects of methylphenidate on Pavlovian to instrumental transfer in humans and demonstrates that the effects of the drug depend on the baseline working memory capacity of the participants. The experiment used a well-established cognitive task that measures the effects of Pavlovian cues predicting monetary wins and losses on instrumental responding in two different contexts, namely approach and withdraw. By administering the drug after participants went through the instrumental and Pavlovian learning phases of the experiment, the authors limited the effects of the drug to the transfer phase in extinction. This allowed the authors to make inferences about the invigorating effects of the cues independently of any learning bias. Moreover, the authors employed a within-subject design to study the effect of the drug in 100 participants, which also allows detection of continuous between-subject relationships with covariates such as working memory capacity.

      The study replicates previous findings using this task, namely that appetitive cues promote active responding, and aversive cues promote passive responding, in an approach instrumental context, whereas the effect of the cues reverses in a withdraw instrumental context. The results of the methylphenidate manipulation show that the drug decreases the effects of the Pavlovian cues on instrumental responding in participants with low working memory capacity but increases the Pavlovian effects in participants with high working memory capacity. Importantly, in the latter group, methylphenidate increases the invigorating effect of appetitive Pavlovian cues on active approach and aversive Pavlovian cues on active withdrawal, as well as the inhibitory effects of aversive Pavlovian cues on active approach and appetitive Pavlovian cues on active withdrawal. These results cannot be explained if catecholamines are involved in Pavlovian biases merely by modulating behavioral invigoration driven by the anticipation of reward and punishment in the striatum, as such an account cannot explain the reversal of the effects of a valenced cue on vigor depending on the instrumental context.

      In general, I find the methods of this study very robust and the results very convincing and important. However, I have some concerns:

      We thank the Reviewer for highlighting the robustness of the methods and the importance of the results. We briefly address the concerns here and have incorporated them in our revision.

      I am not convinced that the inclusion of impulsivity scores in the logistic mixed model to analyze the effects of methylphenidate on PIT is warranted. The authors do not show whether inclusion of this covariate is justified in terms of BIC. Moreover, they include this covariate but do not report the effects. Finally, it is possible that impulsivity is correlated with working memory capacity. In that case, multicollinearity may impact the estimation of the coefficient estimates and may inflate the p-values for the correlated covariates. Are the reported results robust when this factor is not included?

      With regard to the inclusion of impulsivity, we would first like to mention that this inclusion in our analyses was planned a priori and therefore consistently implemented in the other reports resulting from the overarching study (Froböse et al., 2018; Cook et al., 2019; Rostami Kandroodi et al., 2021), especially the study to which the current report is an eLife Research Advance (Swart et al., 2017). Moreover, we preregistered both working memory span and impulsivity as potential factors (under secondary measures) that could mediate the effects of catecholamines (see https://onderzoekmetmensen.nl/nl/trial/26989). The inclusion of working memory span was based on evidence from PET imaging studies demonstrating a link with dopamine synthesis capacity (Cools et al., 2008; Landau et al., 2009), whereas the inclusion of trait impulsivity was based on evidence from other PET imaging studies showing a link with dopamine (auto)receptor availability (Buckholtz et al., 2010; Kim et al., 2014; Lee et al., 2009; Reeves et al., 2012). Although there was no significant improvement for the model with impulsivity compared with the model without impulsivity, we feel that we should follow our a priori established analyses.

      We can confirm that impulsivity and working memory were not correlated in this sample (r(98) = -0.16, p = 0.88), which rules out multicollinearity.

      Most importantly, the results are robust to excluding impulsivity scores, as evidenced by a significant four-way interaction in the omnibus GLMM without impulsivity (Action Context x Valence x Drug x WM span: Χ<sup>2</sup> = 9.5, p = 0.002). These findings are now reported in the Supplemental Results: Control analyses, page 28.
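      As an editorial aside, the reported statistic and p-value are mutually consistent: a chi-square of 9.5 on 1 degree of freedom corresponds to p ≈ 0.002. The 1-df assumption (a single added interaction term) is ours, not stated in the response; a minimal check, assuming scipy is available:

```python
from scipy.stats import chi2

# P(X > 9.5) under a chi-square distribution with 1 degree of freedom.
# 1 df is an assumption here (one interaction parameter tested); the
# authors' response does not state the degrees of freedom explicitly.
p = chi2.sf(9.5, df=1)
print(f"{p:.3f}")  # ~0.002
```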

      The authors state that working memory capacity is an established proxy for dopamine synthesis capacity and cite some studies supporting this view. However, the authors omit a recent reference by van den Bosch et al that provides evidence for the absence of links between striatal dopamine synthesis capacity and working memory capacity. The lack of a robust link between working memory capacity and dopamine synthesis capacity in the striatum strengthens the alternative explanations of the results suggested in the discussion.

      We agree with the Reviewer that the lack of a robust link between working memory capacity and dopamine synthesis capacity in the striatum, as measured with [<sup>18</sup>F]-FDOPA PET imaging, lends support to the proposed hypothesis, which takes a broader perspective on Pavlovian bias generation than the dopaminergic direct/indirect pathway account (although it is possible that the association will hold in a larger sample when synthesis capacity is measured with [<sup>18</sup>F]-FMT PET imaging, which is sensitive to a different component of the metabolic pathway). We have incorporated the findings from our group reported in van den Bosch et al. (2022) in our revision.

      See Supplemental methods 2: Working memory and impulsivity assessment, page 26.

      **Recommendations for the authors:**

      Reviewer #1 (Recommendations for the authors):

      (1) Theoretical clarity. Some aspects of the paper are ideally clear: Figure 1 clearly explains the paradigm. The general take-home message is clearly described in the last line of the abstract, the last line of the introduction, the first line of the discussion, and throughout other places in the discussion. Yet the authors seem to hedge their bets when it comes to placing these findings within a broader theoretical framework.

      The discussion includes many possible theoretical interpretations of the findings, which is laudable, but many readers may get lost in this multitude (particularly anyone who isn't an RL/DA aficionado). The group's prior work (i.e. striatal hypothesis) is first described, followed by a rather complex breakdown of valence-action tendencies, then the seemingly preferred explanation for the current study (i.e. cognitive control hypothesis) is advanced as "an alternative account ...". This is followed by a third, more complex idea (i.e. cortico-striatal balance hypothesis), then the paper ends. A reader may be forgiven for skimming through this discussion and not having a clear idea of how to frame these effects. I think some subheaders would help, as well as clearer labeling of the theoretical interpretations in line with a more authoritative description of the author's preferred interpretation of the empirical effects.

      Our findings call for a revision of theories on how catecholamines are involved in the instantiation of Pavlovian biases in decision making. The Reviewer rightly notes that we offer three routes to modify current theory so that it can incorporate our findings. Briefly, these routes discuss catecholaminergic modulation of Pavlovian biases (i) through modulation of the putative striatal ‘origin’ of Pavlovian biases, (ii) through top-down control, primarily relying on prefrontal processes, and (iii) through a combination of the two, where catecholamines regulate the balance between these striatal and frontal processes.

      Given the systemic nature of the pharmacological manipulation, we cannot dissociate between these three accounts. We believe that discussing these possible explanations enriches our Discussion and strengthens our recommendation in the final paragraph to use pharmacological neuroimaging studies to arbitrate between these options. In the revision, we have made this line of reasoning clearer, in part by adding guiding titles to the Discussion section and adding a summary paragraph (Discussion, pages 9-12).

      (2) All statistical effects are presented as c^2 with no df. The methods only describe LMER and make no mention of what the c^2 measure represents.

      The c^2 notation is a technical PDF conversion artefact: all chi-squares (Χ<sup>2</sup>) were converted to c^2 during PDF generation. This is now corrected in our revision.

      Reviewer #2 (Recommendations for the authors):

      Few minor points:

      Figure 2A is not cited in the text I think

      Checked and changed.

      Figure 2C: "C" is not present in the figure. Also, I could not see the data corresponding to the MPH-Approach context in the Neutral Pavlovian condition, but I think it is probably masked by another curve.

      Checked and changed. Indeed, one curve is masked by the other.

      As I stated in the public review, a clarification or more detailed analysis of working memory performance depending on whether it was measured under MPH or placebo could be a plus.

      Changed this (see public review reply).

      I did not see any statement about the availability of data but I may have missed it.

      Yes, the statement can be found:

      Methods, page 13: Data and code for the study are freely available at https://data.ru.nl/collections/di/dccn/DSC_3017031.02_734.

      Reviewer #3 (Recommendations for the authors):

      The authors should check that inclusion of impulsivity in the logistic mixed model is justified and if it is justified make sure that multicollinearity is not problematic.

      See answer to public review for convenience reiterated below:

      With regard to the inclusion of impulsivity, we would first like to mention that this inclusion in our analyses was planned a priori and therefore consistently implemented in the other reports resulting from the overarching study (Froböse et al., 2018; Cook et al., 2019; Rostami Kandroodi et al., 2021), especially the study to which the current report is an eLife Research Advance (Swart et al., 2017). Moreover, we preregistered both working memory span and impulsivity as potential factors (under secondary measures) that could mediate the effects of catecholamines (see https://onderzoekmetmensen.nl/nl/trial/26989). The inclusion of working memory span was based on evidence from PET imaging studies demonstrating a link with dopamine synthesis capacity (Cools et al., 2008; Landau et al., 2009), whereas the inclusion of trait impulsivity was based on evidence from other PET imaging studies showing a link with dopamine (auto)receptor availability (Buckholtz et al., 2010; Kim et al., 2014; Lee et al., 2009; Reeves et al., 2012). Although there was no significant improvement for the model with impulsivity compared with the model without impulsivity, we feel that we should follow our a priori established analyses.

      We can confirm that impulsivity and working memory were not correlated in this sample (r(98) = -0.16, p = 0.88), which rules out multicollinearity.

      Most importantly, the results are robust to excluding impulsivity scores, as evidenced by a significant four-way interaction in the omnibus GLMM without impulsivity (Action Context x Valence x Drug x WM span: Χ<sup>2</sup> = 9.5, p = 0.002). These findings are now reported in the Supplemental Results: Control analyses, page 28.

      I would recommend that the authors make clear that the effects of methylphenidate are dependent on working memory capacity in the first sentence of the penultimate paragraph of the introduction on page 4.

      Changed this accordingly, see Introduction, page 5.

      I would make sure that the text in the figures is readable without needing to enlarge the figures. I would also highlight the significant effects in the figures.

      We changed the font size accordingly and added significance statements to the caption, because depicting the significance of a four-way interaction including one continuous variable is not straightforward.

      The distributions of p(Go) by conditions such as in figure 1D or 2A are very intuitive. Figure 2B is very informative as it shows the continuous effects of working memory capacity on the PIT effect. I would add (in figure 2 or in the supplement) a plot of the p(Go) with a tertile split based on working memory. Considering that the correspondent analysis is being reported, having the plot would strengthen and simplify the understanding of the results.

      The continuous effects of working memory are based on WM values on the listening span ranging from 2.5-7, in steps of 0.5, resulting in 10 different values. A tertile split would result in binning these into two bins of three values, and one bin of four values. Given that all of the datapoints for this tertile split are already presented in the current figures, we strongly prefer not to include this additional figure.
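      The binning arithmetic behind this reply can be made concrete. The sketch below is illustrative only (it reconstructs the span values described above and uses numpy's generic split; it is not the authors' analysis code):

```python
import numpy as np

# Listening-span scores described in the response: 2.5 to 7 in steps of 0.5,
# giving 10 distinct values.
wm_values = [2.5 + 0.5 * i for i in range(10)]

# A tertile split of 10 ordered values cannot be balanced:
# one bin receives four values and the other two receive three each.
tertiles = np.array_split(wm_values, 3)
print([len(t) for t in tertiles])  # [4, 3, 3]
```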

      I would add some sentences in the results section (and maybe in the discussion if needed) addressing the results that the effect of Valence by drug by WM span is only significant in the withdrawal context but not in the approach context.

      We now added an emphasis on the specifically significant drug effects in withdrawal in the Results section, page 8.

    1. In the MIM standard, however, only one subtyping is possible

      Is that stated anywhere? I would not simply assume that subtypes are mutually exclusive. With a constraint, you could express it, though.

    2. also treating categories as conceptual domain objects

      Question: is that always the case? Is 'weight between 0 and 5 kg' a category? If so, can it also be a domain object?

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      In this manuscript, Aghabi et al. present a comprehensive characterization of ZFT, a metal transporter located at the plasma membrane of the eukaryotic parasite Toxoplasma gondii. The authors provide convincing evidence that ZFT plays a crucial role in parasite fitness, as demonstrated by the generation of a conditional knockdown mutant cell line, which exhibits a marked impact on mitochondrial respiration, a process dependent on several iron-containing proteins. Consistent with previous reports, the authors also show that disruption of mitochondrial metabolism leads to conversion into the persistent bradyzoite stage. The study then employed advanced techniques, such as inductively coupled plasma-mass spectrometry (ICP-MS) and X-ray fluorescence microscopy (XFM), to demonstrate that ZFT depletion results in reduced parasite-associated metals, particularly iron and zinc. Additionally, the authors show that ZFT expression is modulated by the availability of these metals, although defects in the transporter could not be compensated for by exogenous addition of iron or zinc. 

      While the manuscript does not directly investigate the transport function of ZFT through biochemical assays, the authors indirectly support the notion that ZFT can transport zinc by demonstrating its ability to compensate for a lack of zinc transport in a yeast heterologous system. Furthermore, phenotypic analyses suggest defects in iron availability, particularly with regard to Fe-S mitochondrial proteins and mitochondrial function. Overall, the manuscript provides a solid, well-rounded argument for ZFT's role in metal transport, using a combination of complementary approaches. Although direct biochemical evidence for the transporter's substrate specificity and transport activity is lacking, the converging evidence, including changes in metal concentrations upon ZFT depletion, yeast complementation data, and phenotypic changes linked to iron deficiency, presents a convincing case. Some aspects of the results may appear somewhat unbalanced, particularly since iron transport could not be confirmed through heterologous complementation, while zinc-related phenotypes in the parasites have not been thoroughly explored (which is challenging given the limited number of zinc-dependent proteins characterized in Toxoplasma). Nevertheless, given that metal acquisition remains largely uncharacterized in Toxoplasma, this manuscript provides an important first step in identifying a metal transporter in these parasites, and the data presented are generally convincing and insightful. 

      We thank the reviewer for their assessment and would like to highlight that we now add direct biochemical characterisation in the new Figure 8, supporting our hypothesis and confirming iron transport by this protein.

      Reviewer #2 (Public review): 

      Summary: 

      The intracellular pathogen Toxoplasma gondii scavenges metal ions such as iron and zinc to support its replication; however, mechanistic studies of iron and zinc uptake are limited. This study investigates the function of a putative iron and zinc transporter, ZFT. In this paper, the authors provide evidence that ZFT mediates iron and zinc uptake by examining the regulation of ZFT expression by iron and zinc levels, the impact of altered ZFT expression on iron sensitivity, and the effects of ZFT depletion on intracellular iron and zinc levels in the parasite. The effects of ZFT depletion on parasite growth are also investigated, showing the importance of ZFT function for the parasite. 

      Strengths: 

      A key strength of the study is the use of multiple complementary approaches to demonstrate that ZFT is involved in iron and zinc uptake. Additionally, the authors build on their finding that loss of ZFT impairs parasite growth by showing that ZFT depletion induces stage conversion and leads to defects in both the apicoplast and mitochondrion. 

      Weaknesses: 

      (1) Excess zinc was shown not to alter ZFT expression, but a cation chelator (TPEN) did lead to decreased expression. While TPEN is often used to reduce zinc levels, does it have any effect on iron levels? Could the reduction in ZFT after TPEN treatment be due to a reduction in the level of iron or another cation?

      We thank the reviewer for this comment. We agree that TPEN is a fairly unspecific cation chelator, so to determine whether its effects are due to removal of zinc or of other cations, we treated with TPEN and either zinc or iron. Co-incubation of TPEN and zinc prevented ZFT depletion, while TPEN + FAC had no effect compared with TPEN alone (new Figure 6h and i), strongly suggesting that the effects on ZFT abundance are linked to zinc rather than iron.

      (2) ZFT expression was found to be dynamic depending on the size of the vacuole, based on mean fluorescence intensity measurements. Looking at protein levels by Western blot at different times during infection would strengthen this finding. 

      We show here that ZFT expression is highly dynamic, depending on both the iron status of the host cell and the number of parasites per vacuole. However, validating this finding by western blot would be complex due to the highly unsynchronised nature of parasite replication and the large number (5x10<sup>6</sup>-1x10<sup>7</sup> cells) of parasites required to visualise ZFT. Further, we show that ZFT is apparently internalised prior to degradation. For these reasons, we have not attempted to validate this finding by western blotting at this time.

      (3) ZFT localization remained at the parasite periphery under low iron conditions. However, in the images shown in Figure S1c, larger vacuoles (containing 4-8 parasites) are shown for the untreated conditions, and single parasite-containing vacuoles are shown for the low iron condition. As ZFT localization is predominantly at the basal end of the parasite in larger PV and at the parasite periphery for smaller vacuoles, it would be better to compare vacuoles of similar size between the untreated and low-iron conditions.

      The reviewer brings up a good point: the concentration of iron chelator that we used here does not enable parasite replication, making an assessment of changes in localisation challenging. To address this, we have added new data using a much lower concentration of chelator (20 mM), which is still expected to impact the parasites (Hanna et al., 2025) but allows for replication. In this low-iron environment, ZFT localisation remained significantly more peripheral (Fig. S1d, e), supporting our hypothesis that ZFT localisation is iron dependent, independent of vacuolar stage.

      Reviewer #3 (Public review): 

      Summary:

      Aghabi et al set out to characterize a T. gondii transmembrane protein with a ZIP domain, termed ZFT. The authors investigate the consequences of ZFT downregulation and overexpression for parasite fitness. Downregulation of ZFT causes defects in the parasite's endosymbiotic organelles, the apicoplast and the mitochondrion. Specifically, lack of ZFT causes a decrease in mitochondrial respiration, consistent with its role as an iron transporter. This impact on the mitochondria appears to trigger partial differentiation to bradyzoites. The authors furthermore demonstrate that expression of TgZFT can rescue a yeast mutant lacking its zinc transporter and perform an array of direct metal ion measurements, including X-ray fluorescence microscopy and inductively coupled mass spectrometry (ICP-MS). These reveal reduced metal ions in parasites depleted in ZFT. Overall, the data by Aghabi et al. reveal that ZFT is a major metal ion transporter in T. gondii, importing iron and zinc for diverse essential processes. 

      Strengths:

      This study's strength lies in the thorough characterization of the transporter. The authors combine a number of techniques to measure the impact of ZFT depletion, ranging from the direct measurement of metal ions to determining the consequences for the parasite's metabolism (mitochondrial respiration), as well as performing a yeast mutant complementation. This work is very thorough and clearly presented, leaving little doubt about this protein's function. 

      Weaknesses:

      This study offers no major novel insights into the biology of T. gondii. The transporter was already annotated as a zinc transporter (ToxoDB), was deemed essential (PMID: 27594426), and localized to the plasma membrane (PMID: 33053376). This study mostly confirms and validates these previous datasets. The authors identify three other proteins with a ZIP domain. Particularly, the role of TGME49_225530 is intriguing, as it is likely fitness-conferring (score: -2.8, PMID: 27594426) and has no subcellular localization assigned. Characterizing this protein as well, revealing its localization, and identifying if and how these transporters coordinate metal ion transport would have been worthwhile.

      We agree that the work presented here validates the previous datasets, and if that was all we had done, we agree that the biological insights would be limited. However, we have gone significantly beyond the predictions, demonstrating dynamic localisation changes, iron-mediated regulation, the lack of substrate-based complementation and validating transport activity of both zinc and iron. Although in silico predictions and screens can be informative, it remains important to validate biological functions experimentally. While we agree that characterisation of TGME49_225530 (as well as the other two annotated ZIP proteins) would be interesting, and will certainly form part of our future plans, it is significantly beyond the scope of the presented manuscript.

      Another weakness is the data related to the impact of ZFT downregulation on the apicoplast in Figure 4. The authors show that downregulation of ZFT causes an increase in elongated apicoplasts (Figure 4d). The subsequent panels seem to show that the parasites present a dramatic growth defect at that time point. This growth arrest can directly explain the elongated apicoplast, but does not allow any conclusion about an impact on the organelle. In any case, an assessment of 'delayed death' as presented in Figure 4c seems futile, since the many other processes affected by zinc and iron depletion likely cause a rapid death, masking any potential delayed death.

      To address this point: we agree that, given the importance of iron and zinc to the parasite, we cannot differentiate the death of the parasite due to apicoplast defects from death due to other causes, and we have modified the discussion to reflect this, as below.

      “However, given the delayed phenotype typically seen upon apicoplast disruption, we cannot determine if this is a direct effect of ZFT, or a downstream consequence of metal depletion”

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Specific Comments: 

      (1) The background on the typical sequence features that would identify Toxoplasma ZIP homologues should be expanded and clarified. While these proteins are likely quite divergent and may lack many conserved features, the manuscript currently does not provide enough detail to assess how similar (or different) TgZIPs are from well-characterized family members. Additionally, the justification for focusing on TGGT1_261720 (ZFT) over TGGT1_225530, as stated in the first paragraph of the results section, seems unclear. There is no predictive data supporting a potential plasma membrane localization for TGGT1_225530 (yet this cannot be excluded), and TGGT1_225530 appears to have more canonical metal-binding motifs. I believe that the fact that only TGGT1_261720 is iron-regulated should be sufficient justification for its selection, and this point could be emphasized more clearly. Furthermore, the discussion mentions a leucine residue that may be associated with broad substrate specificity, but this is not addressed in the initial comparative sequence analysis. These residues and the HK motif are not actually addressed in the Gyimesi et al. reference currently mentioned; thus this could be clarified and updated with references (such as PMID: 31914589) that provide more recent insights into key residues involved in metal selectivity in ZIP transporters.

      We thank you for this comment, to address these points:

      We agree that the iron-mediated regulation is sufficient for our focus on ZFT and have clarified the text to reflect this, as described above.

      We have also updated the references as suggested, our apologies for this oversight.

      We have further expanded the discussion, especially with reference to our new results using heterologous expression in oocytes (please see above).

      (2) Figure 1D, Figure 2A, C, H, Figure 3D, Figure 6F, H, corresponding text and paragraph 2 of the Discussion: It seems that most of the "non-specific bands" annotated in Figure 1D, which are lower molecular weight products, are not present in the parental cell line, suggesting they may not be non-specific after all. These bands also vary depending on the cell line (e.g., promoter used, see Figures 2H and 3D) or experimental conditions (e.g., iron excess or depletion). Given the dynamic localization of ZFT during intracellular development, it may be worth exploring whether these lower molecular weight bands represent degraded forms of TgZFT, possibly corresponding to the basally-clustered signal observed by immunofluorescence, with only the full-length protein associating with the plasma membrane. This possibility should be investigated or at least discussed further.

      While the lower bands are not present in the parental line, we do see them in other HA-tagged lines, especially when expression of the tagged protein is low, as seen below (Author response image 1). We do not currently have an explanation for these bands, but we can confirm that they do not change in abundance in parallel with the full-length protein, supporting our hypothesis that they are an artefact of the anti-HA antibody in our system. Although ZFT is clearly degraded (e.g. Fig. 1g), we do not currently believe these bands are ZFT C-terminal degradation products.

      Author response image 1.

      Western blot of ZFT-3HA<sub>zft</sub> and an unrelated HA-tagged cytosolic protein, demonstrating that the lower bands are most likely non-specific.

      (3) It is unfortunate that ZFT could not complement a yeast iron transporter mutant cell line, as this would have provided a strong argument for ZFT's role in iron transport. The manuscript does not provide much detail about the Δfet2/3 yeast mutant line. Fet3 is the ferroxidase subunit, while Ftr1 is the permease subunit of the high-affinity iron transport complex in yeast. Fet2, however, appears to be Saccharomyces cerevisiae's VPS41 homolog. Therefore, is Δfet2/3 the most appropriate mutant to use, or would another mutant line (e.g., ΔFtr1) be a better choice? Additionally, while Figure 7 suggests a decrease in metal uptake upon ZFT depletion, it would be useful to test whether overexpression of ZFT leads to enhanced metal incorporation, perhaps using a FerroOrange assay. 

      We thank the reviewer for their comments, which we have answered below:

      The "Δfet2/3" yeast mutant was a typo and has been corrected; our apologies, we did use the Δfet3/4 mutant line, based on previous successful experiments involving plant metal transporters (e.g. DiDonato et al., 2004).

      Unfortunately, we were unable to perform the FerroOrange assay in the overexpression line as this line is endogenously fluorescent in the same channel as FerroOrange.

      However, as detailed above, we have now added significant new data confirming our hypothesis that ZFT is an iron/zinc transporter, through heterologous expression in Xenopus oocytes (new Figure 8). This provides direct evidence of iron transport, and evidence that zinc can inhibit this transport, consistent with our hypothesis.

      (4) The annotation of the blot in Figure 2H suggests that overexpressed ZFT-TY can only be detected in the absence of heat denaturation. However, this is not addressed in the text. Does heat denaturation also affect the detection of ZFT-3HA or the lower molecular weight products? This should be clarified in the manuscript. 

      Interestingly, ZFT is detectable after boiling at 95 °C for 5 minutes when expressed at endogenous (or near-endogenous) levels in the ZFT-3HA<sub>sag1</sub> and ZFT-3HA<sub>zft</sub> tagged parasite lines. However, when ZFT is overexpressed, boiling leads to a loss of detection via western blot, although the protein remains detectable without heat denaturation.

      A possible explanation is that overexpression may cause ZFT to misfold, making the protein more prone to aggregation following boiling and rendering it insoluble and unable to enter the gel. Moreover, heat aggregation can sometimes mask the epitope tags required for antibody recognition, possibly explaining why ZFT is undetectable when overexpressed and exposed to boiling conditions, as has previously been observed for other transmembrane proteins (e.g. Tsuji, 2020).

      We have clarified this in the Results section. Although we do not have a full explanation, we consider it important to share for others who may be examining expression of these proteins.

      (5) Figure 3G: It might be helpful to include an uncropped gel profile to allow readers to visualize that the main product does indeed correspond to a potential dimeric form in the native PAGE. 

      This has now been added in Figure S3e, thank you for this suggestion.

      (6) The investigation of the impact of ZFT depletion on the apicoplast could be improved. The authors suggest that ZFT knockdown inhibits apicoplast replication based on a modest increase in elongated organelles, but the term "delayed death" is not appropriate in that case, as it is typically linked to a loss of the organelle. This is not observed here and is also illustrated by the unchanged CPN60 processing profile. So, clearly, there seems to be no strong morphological effect on the apicoplast early on after ZFT depletion. On the other hand, the authors dismiss any impact on TgPDH-E2 lipoylation (which is iron-dependent) based on the fact that the lipoylated form of the protein is still detected by Western blot. However, closer inspection of the blot in Figure 4B suggests that the intensity of the annotated TgPDH-E2 signal is reduced compared to the -ATc condition (although there might be differences in protein loading, as indicated by the control) or even with the mitochondrial 2-oxoglutarate dehydrogenase-E2, whose lipoylation is presumably iron-independent (see PMID: 16778769). This experiment should be repeated, and the results quantified properly in case something was missed, and the duration of depletion conditions perhaps extended further. Of note, it would also be worthwhile to revisit size estimations, as the displayed profiles seem inconsistent with the typical sizes of lipoylated proteins detected with the anti-lipoyl antibody (e.g., ~100 kDa for PDH-E2, ~60 kDa for branched-chain 2-oxo acid dehydrogenase, and ~40 kDa 2-oxoglutarate dehydrogenase).

      We thank the reviewer for this comment. We agree that there is no strong defect on the apicoplast in the first lytic cycle, and we have modified the language to remove the reference to delayed death since, given the magnitude of the changes associated with loss of iron and zinc, we cannot be certain about the role of the apicoplast.

      Based on this suggestion, we have now quantified the levels of lipoylation of PDH-E2, BDCK-E2 and OGDH-E2 and include this in Figure S4b, c, d. Supporting our other results, we do not see a significant change in PDH-E2 lipoylation upon ZFT knockdown. Although OGDH-E2 lipoylation is unchanged (Figure S4c), interestingly we do see a significant increase in BDCK-E2 lipoylation (Figure S4d). This process is not expected to be directly iron-related, as mitochondrial lipoylation proceeds through scavenging rather than synthesis; however, it speaks to the larger mitochondrial disruption that we see. We now consider this further in the discussion.

      For the sizes, we thank the reviewer for bringing this up; our apologies, this was due to an error in the annotation, which we have now corrected in the figure.

      (7) In the third paragraph of the discussion, the authors mention the inability to complement ZFT loss by adding exogenous metals. One argument is the potential lack of metal access to the parasitophorous vacuole (PV). Although largely unexplored, this point could be expanded further in the discussion, as the issue of metal transport to the parasite involves not only the parasite plasma membrane but also the PV membrane. Additionally, the authors mention the absence of functional redundancy in transporters, but it would be helpful to discuss potential stage-specific or differential expression of other ZIP candidates. Transcriptomic data available on Toxodb.org could provide useful insights into this, and experimental approaches, such as RT-PCR, could be used to assess the expression of these candidates in the absence of ZFT. 

      On the issue of metals crossing the PV membrane, we agree that we do not currently know the mechanisms of metal transport within the infected host cell. However, we have experimental confirmation that the concentration and form of the metals we are using can impact the parasites. We show that metal treatment inhibits parasite growth (e.g. Figure 3k-n, Figure 6a-d), and we can detect the increased metals in our experiments using FerroOrange and FluoZin (Figure 7a, c). In these experiments, parasites were treated intracellularly, so we can confirm that, regardless of the mechanism, iron and zinc can reach the parasite. While entry of metals across the PV is an intriguing question, it is beyond the scope of the present work, which focuses on the role of the selected transporter.

      We agree that a more detailed discussion of the other ZIP transporters is warranted. We have extended this section of the discussion although for now, we cannot determine the role of the other ZIP transporters in Toxoplasma.

      (8) In the discussion, the authors mention that « Inhibition of respiration has previously been linked to bradyzoite conversion ». To strengthen their point, the authors could mention that mitochondrial Fe-S mutants, as well as mutants affecting mitochondrial translation or the mitochondrial electron transport chain, also initiate bradyzoite conversion (PMID: 34793583). This would reinforce the connection between mitochondrial dysfunction and stage conversion. 

      This is an excellent point and we have added this to the discussion as follows:

      “Inhibition of mitochondrial Fe-S biogenesis or mitochondrial respiration have both previously been linked to bradyzoite conversion (Pamukcu et al., 2021; Tomavo and Boothroyd, 1995), however we do not yet know the signalling factors linking iron, zinc or mitochondrial function to bradyzoite differentiation”.

      (9) As a general comment on manuscript formatting, providing page and line numbers would significantly improve the manuscript's readability and allow reviewers to more easily reference specific sections. This would help address the minor issues of typos (e.g., multiple occurrences of "promotor"). I suggest a careful read-through to correct these issues. 

      We thank the reviewer for this comment and in the resubmitted version we have corrected these issues. 

      Reviewer #2 (Recommendations for the authors): 

      (1) In the alignment (Figure 1a), the BPZIP sequence is from which organism (genus, species)? It would be helpful to include this information in the figure legend.

      Apologies for this oversight, this figure and section have been reworked and the species name (Bordetella bronchiseptica) added.

      (2) In reference to Figure 1a, the authors state, "Interestingly, all parasite ZIP-domain proteins examined have a HK motif at the M2 metal binding". I was wondering if by "all" the authors mean Toxoplasma and Plasmodium falciparum (shown in Figure 1a) or did the authors also look at other apicomplexan parasites such as Cryptosporidium or Neospora? Is this a general feature of apicomplexan parasites? 

      We looked at this, and the HK motif in the M2 binding site is conserved in Neospora, Cryptosporidium, and even the gregarine Porospora cf. gigantea. However, in the more distantly related Chromera we find an HH motif at the same position. This suggests that the HK motif is present in the Apicomplexa but not conserved in the free-living Alveolata. Although we cannot yet speculate on the function of this motif, its possible role in metal import in the Apicomplexa deserves future scrutiny. To reflect this finding we have modified Figure 1a and the text.

      (3) In Figure 1e, to better visualize the ZFT-3HA staining at the basal pole, it would be better to omit the DAPI staining from the merged image. It is difficult to see the ZFT staining in the image of the large vacuole.

      We have removed the DAPI from this image to improve clarity.

      (4) Based on the "delayed-death" phenotype of the apicoplast, it is not surprising that no defects were observed in CPN60 processing or protein lipoylation. Have the authors considered measuring these phenotypes after a further round of growth (as was done for visualizing apicoplast morphology)? 

      We agree that changes in apicoplast function are often only seen in the second round of replication. However, here we wanted to check if ZFT depletion led to immediate changes in function of the organelle, which was not the case. It is highly likely that after the second round, we would see significant defects in the apicoplast function, however given the immediate importance of iron and zinc to many processes within the parasite, we believe that these experiments would be complicated to interpret.

      (5) Depleting ZFT led to a reduction in expression levels for the mitochondrial Fe-S protein SDHB but not for a cytosolic Fe-S protein. Is it expected that less intracellular iron (via depleted ZFT) would differentially affect mitochondrial versus cytosolic Fe-S proteins? 

      Previous studies (e.g., Maclean et al., 2024; Renaud et al., 2025) have shown that upon direct inhibition of the cytosolic Fe-S pathway, ABCE1 is fairly stable and its levels can persist for 2-3 days post treatment. However, our recent work has shown that rapid and acute depletion of iron directly (through treatment with a chelator) can lead to ABCE1 levels decreasing within 24 h (Hanna et al., 2025). In the case of ZFT knockdown, due to the more gradual reduction in iron levels seen (e.g. Figure 7j), we believe the parasites are prioritising key Fe-S pathways (e.g. essential proteostasis through ABCE1), probably while remodelling metabolism (as seen in our Seahorse assays). However, many proteins are expected to be directly impacted by the iron and zinc restriction that these parasites experience, and different protein classes are expected to behave differently under these conditions.

      Reviewer #3 (Recommendations for the authors): 

      (1) Is the effect on the plaque size between T7S4-ZFT (-aTc) in regular and 'high iron' conditions significant? The authors show convincingly that the plaque size is smaller due to the swapped promoter and the resulting overexpression of ZFT. But is the effect aggravated in high iron? This would be expected if excess iron were the problem.

      The plaque sizes are significantly smaller in the T7S4-ZFT line under high iron compared to the untreated condition, and compared to the parental untreated line. However, if we normalise plaque size to untreated conditions for both lines, there is not a significant change in plaque size in high iron between the parental and T7S4-ZFT. This is possibly due to the concentration of iron used (200 mM), which may not be optimal to see this effect, or the time taken for plaque assays (6-7 days), which may allow the excess iron to be stored by the host cells, changing the effective concentration of parasite exposure.

      (2) I struggle to understand the intracellular growth assay in Figure 5b. Here, T7S4-ZFT parasites show 25 % of vacuoles with more than 8 parasites (labelled 8+). But such large vacuoles are not observed in the parental strain. It appears as if the inducible strain grows faster even though it was earlier shown to have a fitness defect (see Figure 3j). Can you please clarify?

      This is a result of the rapid growth of the parental line: some vacuoles in this line lysed and initiated a new round of replication at this time point, while we saw no evidence at any timepoint that ZFT-depleted parasites were able to lyse the host cell. However, the initial (24-48 h post ATc addition) replication rate of the ZFT KD remains similar to the parental line. In this panel, we wanted to emphasize that the major phenotype we see upon ZFT depletion is vacuole disorganisation, which we believe is linked to the start of differentiation into bradyzoites.

      (3) Did the authors perform an IFA in addition to the Western blot to localize the 2nd Ty-tagged ZFT copy? It seems important to validate that the protein correctly localizes to the plasma membrane. 

      We have done so and now include these data in Figure S2b. Overexpressed ZFT-Ty localises to internal structures (probably vesicles) with some signal at the periphery; however, this limited expression at the periphery is sufficient to mediate the phenotypes that we see.

      (4) First sentence of the abstract and introduction: The authors speak of metabolism and cellular respiration as though they are two different processes. Is respiration not part of metabolism? 

      This is an excellent point; we wanted to distinguish mitochondrial respiration from general cellular metabolism, but this was not clear. We have now changed this in the introduction to the below:

      “Iron, and other transition metals such as zinc, manganese and copper, are essential nutrients for almost all life, playing vital roles in biological processes such as DNA replication, translation, and metabolic processes including mitochondrial respiration (Teh et al., 2024)”

      (5) 2nd paragraph of the introduction: toxoplasmosis is written capitalized but should be lower case.

      This has been corrected.

      (6) Figure 4j legend: change 'shits parasites to a more quiescent stage' to 'shifts parasites'.

      This has been corrected, our apologies.

      (7) Please correct the following sentence: 'These data demonstrate ZFT depletion leads to the expression of the bradyzoite-specific markers BAG1 and DBL.' DBL is not expressed by the parasite. It is a lectin that binds to the sugars in the cyst wall.

      We have now modified this in the text. The sentence now reads: “These data show that ZFT depletion leads to the expression of the bradyzoite marker BAG1 and the production of the cyst wall, as detected by DBL”.

      (8) In the section on yeast complementation with TgZFT, the authors write: 'Based on this success, we also attempted to complement...'. Please consider changing 'Success' to something more neutral.

      We have modified the text to now read: “Based on these results, we also attempted to complement”…

      (9) In the discussion, the authors write: 'We see a delayed phenotype on the apicoplast, suggesting that metal import is also required in this organelle, although no apicoplast metal transporters have yet been identified.' Please consider the study Plasmodium falciparum ZIP1 Is a Zinc-Selective Transporter with Stage-Dependent Targeting to the Apicoplast and Plasma Membrane in Erythrocytic Parasites (PMID: 38163252).

      We thank the reviewer for the note and have modified the text to include this and the reference. Please see below:

      “Iron is known to be required in the apicoplast (Renaud et al., 2022), zinc also may be required, as the fitness-conferring Plasmodium zinc transporter ZIP1 is transiently localised to the apicoplast (Shrivastava et al., 2024), although the functional relevance of this localisation has not yet been established”.

      (10) The authors write: 'Iron is known to be required in the apicoplast (Renaud et al., 2022), although a potential role for zinc in this organelle has not yet been established.' The role for zinc in the apicoplast may not have been shown formally, but surely among its hundreds of proteins, and those involved in replication and transcription, there are some that depend on zinc...?

      Yes, we agree it would make sense; however, multiple searches using ToxoDB and the datasets from Chen et al. (2025) were unable to find any apicoplast-localised proteins with zinc-binding domains. We cannot exclude that zinc is present in the apicoplast, and the results from Plasmodium (Shrivastava et al., 2024) may suggest that it is; however, we currently have no evidence for its role within this organelle.

      References

      DiDonato, R.J., Roberts, L.A., Sanderson, T., Eisley, R.B., Walker, E.L., 2004. Arabidopsis Yellow Stripe-Like2 (YSL2): a metal-regulated gene encoding a plasma membrane transporter of nicotianamine-metal complexes. Plant J 39, 403–414. https://doi.org/10.1111/j.1365-313X.2004.02128.x

      Hanna, J.C., Shikha, S., Sloan, M.A., Harding, C.R., 2025. Global translational and metabolic remodelling during iron deprivation in Toxoplasma gondii. https://doi.org/10.1101/2025.08.11.669662

      Maclean, A.E., Sloan, M.A., Renaud, E.A., Argyle, B.E., Lewis, W.H., Ovciarikova, J., Demolombe, V., Waller, R.F., Besteiro, S., Sheiner, L., 2024. The Toxoplasma gondii mitochondrial transporter ABCB7L is essential for the biogenesis of cytosolic and nuclear iron-sulfur cluster proteins and cytosolic translation. mBio 15, e00872-24. https://doi.org/10.1128/mbio.00872-24

      Pamukcu, S., Cerutti, A., Bordat, Y., Hem, S., Rofidal, V., Besteiro, S., 2021. Differential contribution of two organelles of endosymbiotic origin to iron-sulfur cluster synthesis and overall fitness in Toxoplasma. PLoS Pathog 17, e1010096. https://doi.org/10.1371/journal.ppat.1010096

      Renaud, E.A., Maupin, A.J.M., Berry, L., Bals, J., Bordat, Y., Demolombe, V., Rofidal, V., Vignols, F., Besteiro, S., 2025. The HCF101 protein is an important component of the cytosolic iron–sulfur synthesis pathway in Toxoplasma gondii. PLoS Biol 23, e3003028. https://doi.org/10.1371/journal.pbio.3003028

      Shrivastava, D., Jha, A., Kabrambam, R., Vishwakarma, J., Mitra, K., Ramachandran, R., Habib, S., 2024. Plasmodium falciparum ZIP1 Is a Zinc-Selective Transporter with Stage-Dependent Targeting to the Apicoplast and Plasma Membrane in Erythrocytic Parasites. ACS Infect. Dis. 10, 155–169. https://doi.org/10.1021/acsinfecdis.3c00426

      Teh, M.R., Armitage, A.E., Drakesmith, H., 2024. Why cells need iron: a compendium of iron utilisation. Trends in Endocrinology & Metabolism 35, 1026–1049. https://doi.org/10.1016/j.tem.2024.04.015

      Tomavo, S., Boothroyd, J.C., 1995. Interconnection between organellar functions, development and drug resistance in the protozoan parasite, Toxoplasma gondii. International Journal for Parasitology 25, 1293–1299. https://doi.org/10.1016/0020-7519(95)00066-B

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      We thank Reviewer #1 for the careful reading of our manuscript and for the constructive comments. We have provided responses to each of the comments below.

      We greatly appreciate Reviewer #1’s accurate public review of our study on the kinesin motor using the DNA origami nanospring (NS). With respect to the strengths, we fully agree with Reviewer #1’s comments. Regarding the weakness, we would like to respond as follows.

      It is true that, unlike optical tweezers, our method does not provide a real-time data display. Optical tweezers enable real-time observation and manipulation of kinesin molecules at arbitrary time points. Achieving real-time observation and manipulation is indeed an important challenge for the future development of the NS technique. On the other hand, Iwaki et al. (our co-corresponding author) have already investigated dynamic properties of motor proteins under load, such as the step size and force–velocity relationship of myosin VI, using the NS. We are now preparing high spatiotemporal resolution microscopy experiments on the KIF1A system to measure its step size and force–velocity relationship, which inherently require such resolution.

      Reviewer #2 (Public review):

      We appreciate the constructive comments of Reviewer #2, which have strengthened both the presentation and interpretation of our results.

      We would like to thank Reviewer #2 for providing a highly accurate assessment of the strengths of our experiments. Regarding the weaknesses, we would like to respond as follows. First, Iwaki et al. (our co-corresponding author) have already succeeded in observing the stepping motion of myosin VI using the nanospring (NS) in their previous work. We are also currently preparing high spatiotemporal resolution microscopy experiments to observe the stepping motion of KIF1A in our system. Second, while it is true that the NS does not follow Hooke’s law, it is possible to design and construct NSs with an appropriate dynamic range by tuning the spring constant to match the forces exerted by protein molecules. Finally, we agree that our first observation of the stall plateau in KIF1A using the NS is a meaningful achievement. However, with respect to the suggestion that “increasing validity requires also studying kinesin-1,” we have a somewhat different perspective. The validity of the NS method has already been thoroughly examined in the previous work on myosin VI by Iwaki et al., where results were compared with those obtained using optical tweezers. Moreover, the focus of this manuscript is on KAND caused by KIF1A mutations. From this perspective, although we appreciate the suggestion, we consider it important to keep the present study focused on KIF1A and its implications for KAND.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) The authors detect the attachments that occur during a processive run by KIF1A by monitoring the suppression of the angular fluctuations of the fluorescent signal and plot this, for example, in Figure 3a as the Length of the NS (which presumably is a readout of force) vs time. This interval includes the time when the KIF1A is actively moving along the MT and when it is stalled. It would be interesting to know the actual stall time of the motor in order to be able to calculate a detachment rate constant. For attachment periods such as the first example highlighted in pink in Figure 3a, the stall time is pretty much equal to the attachment time since the motor is moving so fast and the stall period is so long. However, for short attachment times such as the fifth pink interval shown in this same figure or the traces with the mutant KIF1As in Figure 4 this is not so. Can the authors institute a program to identify the periods where the motor has stretched the NS spring to the point where it stalls, and then calculate this time in order to do an exponential fit to the "dwell time distribution"?

      By introducing another criterion (see Methods, "Rate of relative increase in NS's length"), the attachment duration was separated into the two time regions noted by the reviewer. After reanalysing all the data, we evaluated only the stall duration. As a result, the estimated stall-force values became more reliable and accurate. A dwell-time analysis of the stall durations was performed and included in the supplementary material for WT KIF1A, for which sufficient data were available.
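
      As a sketch of the reviewer's suggested analysis, a maximum-likelihood exponential fit to stall dwell times could look as follows (a hypothetical illustration with synthetic data; the function name and parameter values are ours, not the analysis code used for the supplementary figure):

```python
import numpy as np

def detachment_rate(dwell_times):
    """MLE fit of an exponential dwell-time distribution p(t) = k * exp(-k*t).

    For exponentially distributed dwells, the maximum-likelihood estimate of
    the detachment rate constant k is simply 1 / mean(dwell time).
    """
    dwells = np.asarray(dwell_times, dtype=float)
    mean_dwell = dwells.mean()            # mean stall duration (s)
    return 1.0 / mean_dwell, mean_dwell   # detachment rate (s^-1), mean dwell

# Synthetic stall durations drawn from k = 0.5 s^-1 (mean dwell 2 s)
rng = np.random.default_rng(0)
dwells = rng.exponential(scale=2.0, size=2000)
k_hat, mean_dwell = detachment_rate(dwells)
```

      In practice the measured dwells would first be filtered by the stall criterion described in the Methods before fitting.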

      (2) The histogram of stall events in Figure 3b is quite broad. Please discuss.

      The newly added distributions from individual molecules (Fig. 3b) show that the breadth of the stall force distribution is not due to pooling multiple molecules but is primarily an intrinsic property of single KIF1A molecules, reflecting the complex kinetics of KIF1A under load, including occasional backward steps and reattachments. In addition, because the nanospring is a non-linear spring, a disadvantage is that even small fluctuations in extension can result in a substantial deviation in the measured stall force. These points have been added to the Discussion section.

      (3) Figure 3c, it is clear that for attachment times greater than 5s the attachment duration is independent of the Lstall, but this is not so clear for the short attachment durations. Some of this may relate to the fact that you're measuring attachment durations and not stall or dwell times as described in my first comment. Do you feel this is due to less precision in measuring the "attachment duration" during the short attachments, or just simply that more data is needed here? I assume that you do not want to imply that there is a load-dependence of the attachment durations here? Perhaps an expanded view of the data set from 0-10 seconds would clarify. 

      As described in our response to comment (1), the stall durations were separated from the attachment durations. This improved the measurement accuracy and revealed that the stall duration and Lstall are uncorrelated (Fig. 3c). We appreciate this constructive comment.

      Reviewer #2 (Recommendations for the authors):

      (1) Off-axis forces are described as 'upward', 'perpendicular', and 'horizontal'. Consider referring to off-axis force, and if necessary, defining the direction of the force(s) relative to the axis of the immobilised MT. If necessary, a cartoon of XYZ axes might be added to F1c? 

      An XZ axis was added to the schematic in Fig. 1c.

      (2) If I understand correctly, stall forces are calculated by averaging the entire region in which the angular fluctuation is reduced below a threshold. In cases like the 3rd and 7th events on the trace in F1a, this will reduce the average. Perhaps consider separately averaging the later time points in each stall event? Perhaps also consider correlating the angular fluctuation signals and the spring length signal? Some fluctuations during stall plateaus might indicate slip back and re-engage events? 

      Instead of separately averaging the later time points in each stall event, we separated the stall duration from the overall attachment duration (Fig. 3). This allowed us to obtain more accurate stall force values. The relationship between the NS length and the angular fluctuation during KIF1A slip-back events differed among individual stall events, and no clear trend was observed. Two representative examples are shown in Author response image 1.

      Author response image 1.

      (3) Please describe all relevant methods fully instead of referencing previous work. For example, nanospring preparation refers readers to reference 21 (which in turn references an earlier paper).

      We revised the Methods section to include the procedures described in the previous reference, and we added the sequence information of the DNA origami to the supplementary information.

      (4) Were any experiments tried at reduced ATP concentration?

      (5) Were any data obtained from WT KIF5B? For kinesin-1, stall plateau forces of >7 pN are obtained.

      This study focused on comparing the stall forces of wild-type and KAND-related mutant KIF1A molecules under physiological ATP conditions, as our main goal was to characterize the disease-relevant phenotypes. Experiments at reduced ATP concentrations and with WT KIF5B are indeed important future directions but are beyond the scope of the present study. These follow-up experiments are currently in progress.

      (6) In Figure 1b, consider showing the attachment to the mutant KIF5B, and reversing the orientation so it corresponds to Figure 1c.

      KIF1A and KIF5B share the same binding method, so to indicate that the schematic in Fig. 1b represents both, we replaced ‘KIF1A’ with ‘Kinesin’.

      (7) In Figure 3d, add force axis. In general, please re-check all force axes. In Supplement S3, the stall plateau labels appear well above their corresponding axis ticks. In Figure 4, several mutants appear to be stalling at well over 5 pN, yet Table 1 gives a much lower value. Presumably, this reflects averaging effects?

      We added the force axis to Fig. 3d. In addition, we corrected Fig. S3 and Fig. 4, as there were errors in the conversion from length to force. As the reviewer pointed out, the apparent discrepancy between the force values in Fig. 4 and Table 1 arises mainly from averaging effects.
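
      Since the length-to-force conversion for a non-linear spring was the source of the corrected errors, one way such a conversion can be done is by interpolating a measured force–extension calibration curve. This is only a sketch: the calibration values below are invented for illustration and are not the actual NS calibration.

```python
import numpy as np

# Hypothetical non-linear nanospring calibration: extension (nm) vs force (pN).
# These numbers are illustrative only, not the calibration used in the study.
calib_length_nm = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
calib_force_pN = np.array([0.0, 0.5, 1.5, 3.5, 7.0])

def length_to_force(length_nm):
    """Convert measured NS extension to force by piecewise-linear interpolation
    of the calibration curve (monotone, so the mapping is well defined)."""
    return np.interp(length_nm, calib_length_nm, calib_force_pN)

forces = length_to_force(np.array([75.0, 150.0]))  # -> [1.0, 3.5] pN
```

      A single consistent conversion routine applied to every figure avoids the kind of axis mismatch the reviewer spotted.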

    1. Synthesis on Collective Intelligence

      Executive Summary

      This document summarizes the key points of a discussion with Mehdi Moussaïd, a researcher in cognitive science, on the topic of collective intelligence, as presented in his book "Petit traité d'intelligence collective".

      Collective intelligence is defined as a group's capacity to outperform individual cognitive performance, but its effectiveness depends entirely on the method employed.

      A free-form discussion is often counterproductive, dominated by the most assertive personalities.

      The study of this phenomenon has its roots in the late 18th century with the work of Nicolas de Condorcet, and it also draws on the observation of animal societies, such as termites.

      Modern applications span diverse domains, from team sports (for example at FC Nantes) to corporate governance with models such as sociocracy. However, collective intelligence is subject to notable pitfalls.

      The majority is not always right, as illustrated by the common error about the capital of Côte d'Ivoire.

      Social dynamics, such as revolutions or the #MeToo movement, are governed by sudden, hard-to-predict "tipping points".

      In politics, voting is a complex case in which polls can create amplification effects that bias the outcome, leading the researcher to suggest banning them.

      The key to success lies in selecting, and inventing, methods suited to the specific nature of the problem to be solved.

      --------------------------------------------------------------------------------

      1. Introduction to Collective Intelligence

      Collective intelligence is a field of research that explores how a group can, under certain conditions, make more relevant decisions or find more effective solutions than an individual working alone.

      Mehdi Moussaïd, a cognitive science researcher at the Max Planck Institute and author of the book A-t-on besoin d'un chef ? Petit traité d'intelligence collective, is the central expert of this analysis. This work follows on from his research on crowd science.

      The fundamental problem with group discussion:

      • When a group discusses freely to solve a problem (for example, estimating the Paris-Tokyo distance), the conversation tends to bog down.

      • The most self-assured individuals, or those who speak first, exert a disproportionate influence.

      • The final result is often a mediocre approximation, far from the group's optimal potential.

      2. Historical and Natural Foundations

      The study of collective intelligence is not new; it has its origins in the history of science as well as in the observation of the natural world.

      The origins with Nicolas de Condorcet:

      ◦ The "first seed" of collective intelligence dates back to the late 18th century (around 1785) with the mathematician and philosopher Nicolas de Condorcet.

      ◦ An aristocrat skeptical of the people's capacity to govern, Condorcet set out to demonstrate that people were "collectively stupid".

      ◦ He ran an experiment at an agricultural fair, asking passers-by to estimate the weight of an ox, on the assumption that their inability to do so would prove their incompetence at managing the "affairs of the State".
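
      The statistical effect at stake in the ox-weighing scenario can be illustrated with a short simulation (all numbers are invented for illustration): aggregating many noisy individual estimates, for example with the median, typically yields an error far smaller than that of a typical individual.

```python
import numpy as np

# Illustrative "weight of an ox" simulation: many noisy, unbiased guesses,
# aggregated with the median. All values are invented for illustration.
rng = np.random.default_rng(1)
true_weight = 540.0                                # kg (hypothetical)
guesses = rng.normal(true_weight, 90.0, size=800)  # individual estimates

crowd_error = abs(np.median(guesses) - true_weight)       # error of the group
typical_error = np.median(np.abs(guesses - true_weight))  # error of one person
```

      Under these assumptions the crowd's aggregate error is an order of magnitude smaller than a typical individual's, which is why structured aggregation, rather than free discussion, is what unlocks a group's potential.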

      Inspiration from animal societies:

      ◦ The study of "social animals" was a pivotal step in the discipline.

      ◦ The example of the termite mound, studied by the biologist Pierre-Paul Grassé, is emblematic.

      Without any central architect, termites build a complex structure that maintains optimal living conditions (constant humidity and temperature, no drafts) through a constant concern for climate control.

      3. The Crucial Role of Methodology

      According to Mehdi Moussaïd, collective intelligence is not a spontaneous phenomenon; it must be organized and structured by precise methods.

      Matching the method to the problem: The core of work on collective intelligence consists of finding the "right method for the question asked".

      There is a repertoire of methods, each suited to a different type of problem.

      Avoiding catastrophes: Using the wrong method does not merely yield a suboptimal result; it can produce a "catastrophic result".

      The goal is therefore to optimize decision-making.

      An evolving field: The number of methods is not fixed.

      Research continues to invent new ones to meet ever more complex challenges.

      4. Fields of Application and Collective Dynamics

      Collective intelligence is observed and applied in many contexts, from business to social movements.

      | Domain | Description and Example |
      | --- | --- |
      | Corporate governance | Sociocracy is cited as a governance model based on collective intelligence. It is seen as a mode of operation that can bring maturity and solidity to a team. Mehdi Moussaïd notes that while companies have "good intentions", they often lack the method needed to put them into practice effectively. |
      | Team sports | Team sports are "privileged fields of study". A player's creativity depends directly on the actions and positioning of their teammates. For example, a player on the wing has more creative options if teammates position themselves in varied ways (dropping back, out wide, in behind) rather than all making the same run. Research is under way at the FC Nantes training academy to apply these theories. |
      | Music | A crowd's ability to sing in tune is a direct, recognized example of collective intelligence. |
      | Social movements | Collective dynamics are marked by "tipping points", moments when a minority idea suddenly becomes the norm (e.g. the #MeToo movement, revolutions). These transitions are sudden, very hard to predict, and resemble the spread of a "forest fire". |
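
      The "tipping point" dynamic can be sketched with a simple threshold model (in the spirit of Granovetter's threshold model, our choice of illustration; all parameters are invented): each person adopts an idea once the fraction of adopters exceeds their personal threshold, and a modest change in the initial seed can flip the outcome from a stalled minority to a near-total cascade.

```python
import numpy as np

def cascade_size(thresholds, seed_fraction):
    """Iterate adoption to a fixed point: a person adopts once the current
    adopter fraction reaches their personal threshold."""
    a = seed_fraction
    while True:
        new_a = max(a, float(np.mean(thresholds <= a)))
        if new_a <= a + 1e-12:
            return a
        a = new_a

rng = np.random.default_rng(2)
# Personal adoption thresholds, normally distributed (illustrative values)
thresholds = rng.normal(0.35, 0.10, size=10_000)

small_seed = cascade_size(thresholds, 0.05)  # stalls near the initial seed
large_seed = cascade_size(thresholds, 0.40)  # tips into a near-total cascade
```

      The abruptness of the transition between these two regimes is what makes such tipping points so hard to predict in advance.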

      5. The Pitfalls and Limits of Collective Intelligence

      Despite its potential, collective intelligence faces significant biases and challenges.

      The majority trap: The majority is not systematically right, especially on "trick" questions.

      Example: Most people think Abidjan is the capital of Côte d'Ivoire, when it is actually Yamoussoukro.

      In this case, following the majority leads to error.

      ◦ This phenomenon is particularly pronounced when options are "really appealing" but incorrect.

      The complex case of electoral voting:

      ◦ Voting is the "most difficult" case to analyze, because there is no "objectively correct answer" as in a laboratory experiment.

      ◦ Polls are identified as a harmful influence because they create "amplification effects": the first options to emerge are reinforced, as people tend to be swept along by the perceived majority.

      ◦ Mehdi Moussaïd's opinion is clear-cut: "if I could, I would ban polls".

      Context of application:

      ◦ Collective intelligence is practiced more in collaborative organizations (cooperatives, mutuals, associations) than in conventional capitalist companies.

      ◦ The reason is that these structures are less focused on "maximizing revenue", which makes it easier for them to avoid certain decision-making pitfalls.

      6. Key Quotations

      On the failure of unstructured discussion: "If you bring these people together around a table and let them talk freely, the conversation bogs down.

      Those most sure of themselves speak the most, the first opinions voiced weigh more heavily than the ones that follow, and the group ends up making a rough-and-ready decision."

      On the sceptical origins of the field: "[Nicolas de Condorcet] even writes in his article 'if they are not capable of estimating the weight of an ox, how could they handle the affairs of the State', or something like that."

      On the importance of method: "Sometimes, the wrong method will simply produce a catastrophic result."

      On creativity in team sports: "If I have the ball on the right wing and all the players run deep, what creativity can I show? All I can do is play a long ball, that's it.

      But if [...] one player stays back, another comes to the side, another runs deep, then creativity opens up to me."

      On the danger of polls in politics: "I am going to let myself be carried along by the majority, and that creates amplification effects, where the first candidates, or the first options that emerge, stand out even more.

      So we get these amplification effects; if I could, I would ban polls."

    1. Synthesis of the Education Endowment Foundation (EEF) "Teaching and Learning Toolkit"

      Executive Summary

      This document presents a comprehensive synthesis of the "Teaching and Learning Toolkit", a resource from the Education Endowment Foundation (EEF) designed to help teachers and school leaders make evidence-informed decisions to improve learning outcomes, particularly for disadvantaged pupils.

      The Toolkit summarises international evidence on more than 30 teaching approaches, rating each against three key criteria: average impact on attainment (measured in months of additional progress), cost of implementation, and strength of the evidence.

      The most effective approaches, backed by strong evidence, include Metacognition and self-regulation (+8 months), Feedback (+6 months), and Peer tutoring (+6 months).

      These high-impact interventions are generally inexpensive, making them highly cost-effective options.

      Other promising strategies with moderate impact include Collaborative learning (+5 months), Homework (especially at secondary level, +5 months), and Oral language interventions (+6 months).

      Conversely, some common practices show low, zero, or even negative impact. Reducing class size (+1 month) is very costly for minimal gain.

      Setting and streaming pupils by ability has no average impact on progress (0 months) and can even harm the lowest-attaining pupils.

      Repeating a year is particularly damaging, with an average negative impact of -2 months of progress.

      In addition, popular concepts such as Learning styles lack solid evidence to justify their use.

      The Toolkit's central message is that context and quality of implementation are paramount. The figures are only averages based on past studies and do not guarantee success in any given setting.

      It is therefore crucial that education professionals use their judgement, consider the specific needs of their pupils, and carefully plan the introduction of any new approach.

      The Toolkit should be used as a starting point for strategic reflection, not as a catalogue of ready-made solutions.

      Introduction to the "Teaching and Learning Toolkit"

      The "Teaching and Learning Toolkit" (and its early-years counterpart, the "Early Years Toolkit") is an accessible synthesis of education research intended to support the decisions of school leaders and teachers.

      It does not claim to dictate what will work in a given school, but provides high-quality information about what is likely to be beneficial on the basis of the existing evidence.

      The resource is a "living" one, regularly updated to incorporate new research.

      Recently, the EEF undertook a methodological revision, introducing stricter criteria for including studies (published after 1990, with a minimum sample size of 30 pupils) to improve the rigour, relevance, and reliability of the resource.

      The aim is to turn the Toolkit into a "living systematic review", guaranteeing continuous access to the most recent research.

      Understanding the Key Indicators

      Each approach in the Toolkit is assessed using three main indicators:

      1. Impact on Progress (Additional Months)

      This indicator measures the number of months of additional progress made, on average, by pupils who received an intervention, compared with similar pupils who did not, over one school year.

      For example, an impact of "+6 months" means that, over the course of a year, pupils in the intervention group made on average six months more progress than the control group.

      | Months of Progress | Effect Size (from... to...) | Description |
      | --- | --- | --- |
      | 0 | -0.04 to 0.04 | Very low or no impact |
      | +1 | 0.05 to 0.09 | Low impact |
      | +2 | 0.10 to 0.18 | Low impact |
      | +3 | 0.19 to 0.26 | Moderate impact |
      | +4 | 0.27 to 0.35 | Moderate impact |
      | +5 | 0.36 to 0.44 | Moderate impact |
      | +6 | 0.45 to 0.52 | High impact |
      | +7 | 0.53 to 0.61 | High impact |
      | +8 | 0.62 to 0.69 | High impact |
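      As an illustration only (this is not EEF code, and the function name is ours), the conversion bands in the table can be sketched as a small lookup:

```python
# Illustrative sketch only: map an effect size to the Toolkit's
# "months of additional progress" using the bands listed in the table.
# Effect sizes below -0.04 (negative impact) fall outside this table.
BANDS = [
    (0.04, 0),  # -0.04 to 0.04 -> 0 months (very low or no impact)
    (0.09, 1),
    (0.18, 2),
    (0.26, 3),
    (0.35, 4),
    (0.44, 5),
    (0.52, 6),
    (0.61, 7),
    (0.69, 8),  # 0.62 to 0.69 -> +8 months (high impact)
]

def months_of_progress(effect_size: float) -> int:
    """Return the months-of-progress band for an effect size up to 0.69."""
    for upper_bound, months in BANDS:
        if effect_size <= upper_bound:
            return months
    return 8  # the table tops out at +8 months; larger effects are capped here

print(months_of_progress(0.45))  # prints 6
```

      For instance, Feedback's +6 months corresponds to an effect size somewhere between 0.45 and 0.52.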

      2. Cost of Implementation

      Cost is estimated on a five-point scale indicating the additional expenditure for a school. It includes resources, training, and additional staff, but excludes prerequisite costs such as existing teachers' salaries or infrastructure.

      | Rating | Cost per year for a class of 25 pupils | Cost per year per pupil |
      | --- | --- | --- |
      | Very low | up to £2,000 | under £80 |
      | Low | £2,001 to £5,000 | up to £200 |
      | Moderate | £5,001 to £18,000 | up to £720 |
      | High | £18,001 to £30,000 | up to £1,200 |
      | Very high | over £30,000 | over £1,200 |

      3. Strength of Evidence (the "padlock" icon)

      This indicator assesses the robustness of the available evidence. The initial rating is based on the number of studies meeting the inclusion criteria. "Padlocks" can be lost for various reasons, such as:

      • A low percentage of recent studies.

      • A majority of studies that are not randomised controlled trials (RCTs).

      • Studies conducted by researchers rather than by teachers in real-world conditions.

      • A lack of independent evaluations (e.g., studies run by commercial providers).

      • Large unexplained variation (heterogeneity) in study results.

      For approaches whose evidence is rated "extremely low" (0 padlocks), no impact figure in months is reported.

      Summary of Teaching Approaches

      The following table summarises the ratings for each approach reviewed in the Toolkit.

      | Approach | Impact (Months) | Cost | Strength of Evidence |
      | --- | --- | --- | --- |
      | Very High Impact Approaches | | | |
      | Metacognition and self-regulation | +8 | Very low | High |
      | Feedback | +6 | Very low | High |
      | Peer tutoring | +6 | Very low | High |
      | Oral language interventions | +6 | Very low | High |
      | Moderate, Positive Impact Approaches | | | |
      | Collaborative learning | +5 | Very low | Low |
      | Homework | +5 | Very low | Low |
      | Mastery learning | +5 | Very low | Low |
      | One-to-one tuition | +5 | Moderate | Moderate |
      | Individualised instruction | +4 | Very low | Limited |
      | Parental engagement | +4 | Very low | High |
      | Small-group tuition | +4 | Low | Moderate |
      | Teaching assistant interventions | +4 | Moderate | Moderate |
      | Behaviour interventions | +3 | Low | Moderate |
      | Social and emotional learning | +3 | Very low | Moderate |
      | Summer schools | +3 | Moderate | Low |
      | Low, Zero or Negative Impact Approaches | | | |
      | Performance pay | +1 | Low | Very low |
      | Reducing class size | +1 | Very high | Very limited |
      | Setting and streaming | 0 | Very low | Very limited |
      | Repeating a year | -2 | Very high | Low |
      | Approaches with Insufficient Evidence | | | |
      | Aspiration interventions | - | Very low | Extremely low |
      | Learning styles | - | Very low | Extremely low |
      | Outdoor adventure learning | - | Moderate | Extremely low |
      | School uniform | - | Very low | Extremely low |

      Detailed Analysis of Key Approaches

      1. Very High Impact Approaches

      Metacognition and self-regulation (+8 months): Teaching pupils explicit strategies to plan, monitor, and evaluate their own learning.

      This is the most effective and least expensive approach. Its impact is high across all ages and subjects.

      The key is to embed these strategies in ordinary curriculum content rather than teaching them in isolation.

      Feedback (+6 months): Giving learners information about their performance relative to learning goals.

      The most effective feedback is specific, actionable, and focused on the task, the subject, or self-regulation strategies.

      Verbal feedback shows a slightly higher impact (+7 months).

      It is crucial to give feedback on successes as well as on errors.

      Peer tutoring (+6 months): Pupils work in pairs or small groups to support one another's learning.

      This approach benefits both the tutor and the tutee, particularly struggling pupils. Adequate training and structure are essential to ensure high-quality interactions.

      Oral language interventions (+6 months): Emphasising spoken language and verbal interaction in the classroom.

      This includes explicit vocabulary development, the use of structured questioning, and dialogue centred on the curriculum.

      These approaches are particularly beneficial for disadvantaged pupils.

      2. Moderate, Positive Impact Approaches

      Collaborative learning (+5 months): Pupils work together in small groups (3 to 5 is optimal) on structured tasks with a shared goal.

      The teacher must design the tasks carefully and explicitly teach collaboration skills.

      Homework (+5 months): Effective above all at secondary level (+5 months) compared with primary (+3 months).

      The quality of the tasks and their relevance to classwork matter more than the quantity. High-quality feedback on homework is crucial.

      One-to-one tuition (+5 months) and Small-group tuition (+4 months): Intensive, targeted support is highly effective, particularly for struggling pupils. Small-group tuition is a more cost-effective alternative to one-to-one tuition, with almost as much impact.

      Teaching assistant interventions (+4 months):

      The average impact masks wide variation. General deployment in the classroom has shown no benefits, and can even be harmful if the assistant's support replaces the teacher's.

      By contrast, when assistants are trained to deliver structured, targeted interventions in small groups, the impact is significantly positive.

      3. Low, Zero or Negative Impact Approaches

      Reducing class size (+1 month): Although popular, this approach is extremely expensive and has only a small impact, unless the reduction is very large (classes of fewer than 20 pupils) and allows the teacher to change their pedagogy radically.

      Setting and streaming (0 months): Grouping pupils into homogeneous classes based on current attainment has no overall positive impact.

      The evidence suggests a slight negative effect for the lowest-attaining pupils and a slight positive effect for the highest-attaining.

      This practice risks widening inequalities, not least because disadvantaged pupils are more likely to be misallocated to lower sets.

      Repeating a year (-2 months): This approach has a consistent, significant negative impact on pupils' progress.

      The negative effects are even more pronounced for disadvantaged pupils, pupils from ethnic minorities, and the youngest in their year group.

      It is a very high-risk strategy that increases the likelihood of dropping out of school.

      4. Approaches with Insufficient Evidence

      Learning styles: The evidence is extremely weak.

      There is no solid evidence supporting the idea that teaching pupils according to their preferred "style" improves learning.

      On the contrary, labelling pupils can undermine their motivation and their perception of their own potential.

      Aspiration interventions: The evidence is also very weak.

      Most young people already have high aspirations. The problem is often not a lack of aspiration but a lack of the knowledge and skills needed to achieve them.

      Interventions that focus solely on raising aspirations, without concrete academic support, are ineffective.

      Key Principles for Using the Toolkit Effectively

      The EEF stresses that the Toolkit is a thinking tool, not a recipe book. To use it effectively, school leaders should:

      1. Look beyond the headline figures: Read the details of each approach, in particular the "Behind the average" sections, which qualify the impact by age, subject, or mode of implementation.

      2. Consider impact, cost, and evidence strength together: A high-impact approach may not be the most cost-effective.

      A moderate-impact approach that is inexpensive and backed by strong evidence may be a better choice.

      3. Use professional expertise: The Toolkit reports what has worked elsewhere, but professional judgement is essential for assessing the relevance and feasibility of an approach in one's own school context.

      4. Plan implementation carefully: Adopting a new approach is not a one-off event.

      The "active ingredients" of the intervention must be identified and a rigorous implementation plan put in place.

      5. Assess the risks: Understand the potential adverse effects of an approach (for example, the stigmatisation of pupils in lower sets) and put strategies in place to mitigate them.

      6. Consult other EEF resources: The Toolkit is a starting point. The EEF's Guidance Reports and evaluations of specific projects offer more detailed, practical information.

    1. The Schoolyard: Issues for Pupil Well-being

      Executive Summary

      This briefing document analyses the fundamental issues involved in the design of school playgrounds, drawing on the combined expertise of Annie Sbir, a specialist in physical education and sport, and Charlotte Vanesburg, an architect and urban planner involved in the "Cours Oasis" (Oasis Schoolyards) project.

      Far from being a mere space for letting off steam, the schoolyard is essential to developing autonomy, learning to live together, and managing conflict.

      The finding is that the traditional French model, an empty asphalt surface centred on a sports pitch, generates stress and conflict and reinforces gender stereotypes by marginalising calm, mixed activities.

      The proposed redesign strategies aim to transform this space into a rich, diverse ecosystem.

      This involves multiplying the types of space (dynamic, calm, retreat), greening the yard to reduce noise and heat (the "Cours Oasis" project), and introducing varied materials (wood chips, sand, etc.).

      Such a transformation encourages measured risk-taking, which is essential to building self-confidence, and breaks the monopolisation of the space by a single game such as football.

      The project's success rests on a collective approach, involving pupils, teachers, and all school staff in a process of diagnosis and design, making the schoolyard a powerful lever for improving the overall school climate.

      --------------------------------------------------------------------------------

      1. The Diagnosis: The Traditional Schoolyard and Its Limits

      The typical French playground is often an educationally neglected space, reduced to a "square of asphalt" whose primary function is to ensure safety and supervision. This minimalist design creates several major problems.

      The Monotony of the Facilities

      The most striking difference between nursery-school yards and those of higher levels (primary, lower secondary) is the near-total disappearance of play structures.

      Absence of Play Equipment: From CP (the first year of primary school) onwards, fixed play equipment disappears, replaced mostly by basic sports equipment (football goals, basketball hoops) and benches. Charlotte Vanesburg notes ironically: "from CP onwards, as everyone knows, you no longer play, you no longer need to play, so no play equipment is put in the schoolyard."

      Uniform Surfaces: Whether in large cities or in the countryside, most yards are asphalted. Even in rural areas where more space is available, the yard itself remains a square of asphalt.

      The Reproduction of Social and Gender Stereotypes

      The schoolyard is described as a "social microcosm", the first one children experience. An unstructured yard reproduces and amplifies existing social patterns, particularly gender stereotypes.

      Spatial Domination: The central space is overwhelmingly occupied by ball games, mainly football, played mostly by boys.

      Marginalisation: The other pupils, girls in particular, are pushed to the edges and into the "little corners they were graciously left".

      Their activities are often reduced to conversation or static games. Some even end up taking refuge in the toilets to avoid the balls.

      Crystallisation of Roles: From nursery school onwards, roles of dominant and dominated take hold.

      Claire Simon's documentary film (1994) is cited as a "terrifying" illustration of this dynamic, showing verbal violence and behaviours that crystallise very early.

      A Source of Stress and Conflict

      An environment poor in stimulation and facilities becomes a source of stress and tension for pupils and adults alike.

      Unchannelled Need for Movement: Annie Sbir insists on children's "imperative need for movement", often constrained in the classroom.

      An empty yard offers nothing to channel this energy. The body then becomes the main medium of play, leading to jostling and roughhousing. "The body needs to move and express itself, and if I am not offered ways to invest my energy, I will not necessarily invest it well."

      Noise: Constant noise and shouting are a major source of stress. A yard fitted with plants and sound-absorbing materials can cut noise peaks by half.

      Insecurity: Balls constantly flying in all directions create stress for children not involved in the game, forcing them to look for areas of refuge.

      2. Reinventing the Schoolyard: Principles and Strategies

      The transformation of schoolyards rests on the idea that the design of the space can meet children's multiple needs (physical, mental and social, following the WHO definition of health) and thereby improve the school climate.

      Diversifying Spaces and Uses

      The key is to multiply the range of activities and supports so that every child finds a space that suits them.

      Activity Zoning: It is suggested that spaces dedicated to "dynamic games", "moderate games", and "calm games" be marked out, even temporarily.

      Equipment and Layout: It is crucial to take stock of the available equipment and enrich it.

      Facilities can include varied ground markings (hopscotch grids, spirals, targets) and wall markings that echo the equipment provided.

      The idea is that what is learned in PE lessons can be reinvested during break time.

      Mapping by Pupils: An effective awareness-raising tool is to ask pupils to map the yard, positioning themselves and indicating who plays what, and where.

      This exercise, carried out before and after the redesign, makes spatial inequalities visible and puts them into words.

      The Importance of Movement and Measured Risk-Taking

      The yard must allow children to move, but also to learn to manage risk in a safe setting.

      The Right to Make Mistakes: Drawing on the Belgian concept of the "right to a bruise", it is worth remembering that getting hurt is part of learning.

      Taking risks helps children grow, build self-confidence, and learn to assess their own abilities.

      Measured Risk vs Danger: The goal is not to create danger but to offer "measured, reasoned risk-taking".

      This is achieved through facilities that allow climbing, jumping, crossing obstacles, and so on.

      Objective Safety: This risk-taking must be framed by non-negotiable objective safety conditions: soft surfaces (sand, wood chips), equipment meeting safety standards, and an adult nearby.

      The key phrase is to design yards that are "as safe as necessary, but not as safe as possible".

      Supervision and Privacy: Striking the Right Balance

      Children express a strong need for "hiding places", while teachers need to see everything. The two demands can be reconciled.

      Permeable "Hiding Places": Solutions such as woven-willow huts or openwork wooden structures give the child a sense of interiority and privacy while remaining visible to the supervising adult.

      Mobile Supervision: A richly equipped yard no longer allows 360° supervision from a fixed point.

      This implies mobile supervision, with an adult moving through the space. Ideally, two adults would be present: one keeping a global watch ("taking in the whole scene") and another engaged in activities or more direct interaction.

      The Adult's Active Role: The adult can take on a role that is more active and less intrusive than that of a mere "supervisor".

      The example is given of a teacher raking leaves: she is present and observes, but takes part in the life of the yard without adopting a fixed posture of control.

      3. The Oasis Schoolyards: An Environmental and Educational Approach

      The "Cours Oasis" programme, initiated in Paris and taken up under other names elsewhere in France ("cours buissonnières" in Bordeaux), embodies this new vision of the schoolyard.

      Origins and Objectives: Born of a desire to combat climate change by creating urban "cool islands", the project quickly incorporated the central issue of children's well-being. It aims to de-seal the ground and bring back vegetation and biodiversity.

      Participatory Process: The physical transformation of the space is accompanied by an awareness-raising and co-design process with the whole school community, to ensure that the new uses are adopted by everyone.

      Practical Challenges (Mud and Maintenance): Removing the asphalt raises the question of mud and cleanliness.

      Wood Chips: One effective solution is wood chips, which cover the soil, prevent mud, cushion falls, and enrich the ground. They still allow children to run and play.

      Maintenance as Pedagogy: Dealing with the "mess" (wood chips, sand) becomes an educational routine: a "wood-chip dance" before going back inside, the use of doormats, and children's participation in tidying the yard, just like any other game.

      4. International Perspectives: A Diversity of Approaches

      Looking at schoolyards in other countries reveals a wide diversity of cultures and practices, often more connected to nature.

      | Country | Main Characteristics |
      | --- | --- |
      | Nordic countries | Very nature-oriented practice, spaces turned towards nature, well-equipped children. |
      | Spain | Natural ground surfaces, little vegetation but a lot of sand (up to 90% of the surface). |
      | Switzerland | Schoolyards open at weekends, functioning as public parks for families. |
      | Germany | Very natural "kindergartens" with manipulable elements (stones, mud, sand, pebbles). |
      | Japan | Very natural spaces with a lot of sand and a strong relationship to water. |
      | United States | Highly variable scales, with some schools having yards the size of forests. |

      5. Transformation as a Collective Project

      Redesigning the playground cannot be an individual decision.

      It must be a team project, a lever for energising the whole school.

      A Starting Point: Annie Sbir says that if she were a head teacher, she would start with the playground in order to create team momentum.

      The process of observation, diagnosis (using objective tools), and shared reflection is as important as the end result.

      Involving All Stakeholders: The project should bring together teachers, after-school staff, maintenance staff, parents and, above all, pupils. Involving class delegates and eco-delegates is a relevant avenue.

      A Yard of Democracy: In conclusion, a well-designed yard, rich in options, is a "yard of democracy".

      Doing nothing means "contributing to the perpetuation of what we do not necessarily subscribe to", that is, a society in which the weak withdraw and stereotypes persist.

      6. Resources and Tools Mentioned

      Several resources were cited during the discussion:

      Books:

      La cour d'école, un enjeu pour le bien-être des élèves (a Canopé publication).

      Qui veut jouer au ? by Myriam Gallot (on pupils' use of the schoolyard).

      Faire jeu égal by Edith Maruejouls (a geographer working on school playgrounds).

      Film:

      ◦ A documentary film by Claire Simon (released in 1998) on interactions in nursery-school playgrounds.

      Sociological Tools:

      ◦ Moreno's sociogram, a tool for observing social interactions between pupils.

      Funding and Support Programmes:

      CAUE (Conseil d'Architecture, d'Urbanisme et de l'Environnement): Present in every département; can support such projects.

      CNR (Conseil National de la Refondation): The programme "Notre école, faisons-la ensemble".

      EduRenov: A schoolyard renovation programme run by the Banque des Territoires.

      Atelier Canopé Paris: Works on the design of school spaces.

    1. Briefing: Restoring Authority at School with Jean-Pierre Bellon

      Source: Excerpts from "Instant Canopé : renouer avec l'autorité à l'école avec Jean Pierre Bellon"

      Date: Eve of the national day against school bullying, 2024

      Speakers:

      • Sophie Courau: Director of ESF sciences humaines, publisher of Jean-Pierre Bellon's books.

      • Jean-Pierre Bellon: Philosophy teacher, pioneer of the fight against school bullying in France, author of "Renouer avec l'autorité à l'école".

      • Audience: Education professionals (teachers, school heads, CPEs, primary-school head teachers).

      Synthesis:

      This discussion with Jean-Pierre Bellon highlights the close links between the crisis of authority at school, school bullying, and classroom disruption.

      Drawing on his experience as a teacher and his work on bullying, Bellon proposes ten concrete measures to restore a calm school climate, insisting on the need for a benevolent authority that combines courtesy and firmness.

      He criticises the lack of teacher training in handling incivility and difficult classes, the vagueness surrounding the definition and ranking of school offences, and the perceived ineffectiveness of current sanctions, particularly at primary level.

      Proposals are made to rethink punishments, sanctions (notably temporary exclusion), mobile phone use, school architecture, and the relationship between school and families, arguing for clear protocols, a collective approach, and strengthened institutional hierarchy.

      Main Themes and Key Ideas:

      Link between Lack of Authority, Bullying and Classroom Disruption: Bellon draws a direct link between failing authority and the phenomena of bullying and disruption.

      Students who are victims of bullying testify that the situation is "at its worst" in the classes of struggling teachers.

      Disruption, like bullying, is a group phenomenon, which makes individual sanctions ineffective and potentially counter-productive, since they provoke the group to close ranks.

      The lack of authority shows itself in classroom disruption, whose reality in French classrooms is confirmed by international surveys such as PISA. PISA 2022 reveals that one high-school student in two considers there is too much noise in class to hear the teacher, a situation described as a "gigantic educational injustice".

      Quote: "The link is direct. The link I first saw between bullying and so-called difficult classes was this: all the students who were victims of taunting, all the victims of bullying, made the same observation, namely that it was in the class of the teacher who was himself struggling that the situation was at its worst."

      Quote: "Bullying, like classroom disruption, is after all a symptom of a failure of authority."

      Quote: "PISA 2022 tells us that one high-school student in two considers that there is so much noise in their class that they cannot hear what the teacher is saying. Imagine the injustice that represents."

      The Crisis of Authority at School, a Symptom of a Societal Crisis:

      Bellon acknowledges that school is not immune to a wider crisis of authority affecting society as a whole.

      He nevertheless believes that school is a place where it is possible "to try to do something" to restore authority, given the daily, extended contact (from 8 a.m. to 5 p.m.) with young people during their formative years.

      Proposals for Restoring Authority:

      Combine Courtesy and Firmness: Drawing on Hannah Arendt, authority sits between force/constraint and persuasion/negotiation.

      The point is to give clear, firm instructions while maintaining courtesy.

      The aim is to leave the recalcitrant student only one option, refusal to comply, which can then be dealt with formally.

      Quote: "This alliance of courtesy and firmness leaves the offending student, if I may say so, only one way out: refusal to comply."

      Quote: "I think it is absolutely necessary to re-establish rules of courtesy and civility within schools."

      The Need for Teacher Training: There is a "scandal" in the lack of training teachers receive in managing difficult behaviour (arrogant, insulting, threatening students). Unlike other public-facing professions, teachers have no response protocols.

      Quote: "The lack of teacher training is glaring; it is an outright scandal."

      Quote: "We were never trained in a protocol for how to react when facing an arrogant student, an insulting student, a threatening student, and so on."

      Define and Rank School Infractions: There is no clear, ranked list of non-criminal school incidents.

      Bellon has attempted to draw one up (56 incidents listed) so that teachers know what to expect and can distinguish minor incidents from serious ones.

      The absence of a hierarchy leads to disproportionate sanctions (e.g., a letter of apology for a serious insult).

      Quote: "No one had ever looked into what a school incident actually is. I tried to do it; I tried to draw up the list of school incidents, that is, everything that should not happen in a class, in a school, and so on. I arrived at 56 incidents."

      Quote: "Admit, all the same, that forgetting your book and insulting a teacher are not quite the same thing."

      Zero Tolerance and Systematic Reporting: All infractions, even minor ones, must be systematically reported.

      This would inform public opinion, parents and students, and allow agreement on a "scale" of severity and appropriate sanctions ("a tariff").

      Quote: "Not necessarily [punished], but they must be systematically reported. None must be let through; zero tolerance on this point; every infraction must be systematically reported."

      Reform the Sanctions System (Punishments vs Sanctions):

      The current distinction between punishments (imposed by any staff member) and sanctions (imposed by the head of the school) is considered disorderly and ineffective.

      Teachers are put in difficulty by having to handle punishments alone.

      Bellon suggests removing teachers' discretionary power to punish.

      Quote: "This disorder in the distinction between punishments and sanctions is something that has had its day and no longer makes sense."

      Quote: "I suggest we take away this discretionary power, which they don't know what to do with anyway and which puts them at risk, because the moment of sanction in class is a genuine risk for teachers."

      Entrust Sanction Proposals to a Committee: He proposes creating a committee (made up of various professionals within the school) that would review all reported incidents and propose a sanction to the head of the school.

      This would bring greater consistency and relieve the teacher.

      Adapt Sanctions in Primary School: The situation in primary education is described as "absolute disorder" where sanctions are concerned, with a glaring lack of "vie scolaire" staff and of dedicated spaces for handling disruptive pupils. The isolation of primary school teachers is underlined.

      Rethink Temporary Exclusion (Internal Exclusion): Outright exclusion is paradoxical and ineffective, especially for struggling students.

      Exclusion should be "internal": the student stays in the school but under constraints (staggered timetable, specific location, different work).

      This is a practice inherited from the Benedictines (the Rule of Saint Benedict). A sanction must be "frustrating" (it takes something away) and "meaningful" (it is put into words).

      Quote: "Sending a boy or a girl home cannot work; it is precisely the opposite that must be done."

      Quote: "The idea is that the exclusion should be internal, that is, the student is required to come to school but may not be able to do exactly the same things."

      Quote: "For a sanction to be educational, it must be said, it must be put into words, it must be verbalised... the sanction must also be frustrating."

      Manage Mobile Phone Use: There is no "clear line" in France on this subject.

      Bellon advocates a simple, firm rule: phone switched off, in the bag, at the back of the room, as in an examination.

      Any use leads to a report and a sanction. Phones in class are a risk for teachers.

      The example of an incident in which a teacher was insulted after asking a student to put away his phone illustrates this difficulty.

      Quote: "On the question of mobile phones, we really need a clear line, and in France we have no clear line."

      Quote: "Frankly, mobile phones have no place at school."

      Redesign School Space and Time (School Architecture): The current architecture of schools can be conducive to problems of authority (e.g., an unsuitable staff room, a single playground grouping all students together).

      Bellon suggests rethinking spaces to fit needs (offices for teachers, several playgrounds, dedicated places for sanctions).

      This is regarded as a longer-term project ("the Hannah Arendt lycée").

      Quote: "I note, by the way, that school architects do not always consult teachers. That is quite something: when you build a house, you generally take an interest in the inhabitants; here, not so much."

      Improve School-Family Relations: Relations between schools and parents are often tense, with primary school principals and teachers facing incivility and challenges (44% of primary school principals insulted, according to one study).

      Bellon recommends establishing protocols for receiving families and handling difficult situations.

      He stresses that primary-level professionals are more exposed, as their schools lack a reception buffer.

      Quote: "You point to the poor relations that parents too often have with the school institution, and you cite in particular a study by Georges Fotinos according to which 44% of primary school principals reported having been insulted by parents."

      Quote: "Here again, we really need a protocol for responding to incivility."

      Principles of the Educational Sanction:

      Drawing on Eirick Prairat, Bellon insists on three aspects of an educational sanction:

      • Meaningful: It must be said, verbalised, pronounced with gravity and solemnity, distinguishing the infraction from the person.

      • Frustrating: It must take something away from the student (a right, a privilege, participation in an activity).

      • Restorative: It must include a dimension of repair, linked to the harm done to collective life or to individuals (apologies, material repair).

      Quote: "Eirick Prairat says that for a sanction to be educational, it must be said, it must be put into words, it must be verbalised."

      Quote: "The sanction must also be frustrating; something must be taken away from me."

      Quote: "The sanction must have a restorative dimension."

      The Need for a Collective and Institutional Approach:

      Primary school principals, in particular, lack hierarchical power and institutional protection in the face of challenges from parents.

      The idea of attaching primary schools to collèges (proposed by a former minister) could have provided institutional leadership and "vie scolaire" support at primary level.

      Parental contestation is also seen as a symptom of deep anxiety about their children's future, which manifests itself in systematic challenges to the slightest sanction.

      It is crucial that there be "verticality" and "systematic protection" from the institution (rectorat, DASEN, school heads) to support teachers against challenges and pressure.

      Institutional weaknesses can be exploited by "adversaries of the school".

      Quote: "The hierarchical power of primary school principals should be strengthened."

      Quote: "There must be verticality; there must be things that are not negotiable."

      Certain rules should be non-negotiable: the content of teaching, rules of civility/courtesy, absolute respect for persons, appropriate dress.

      Teachers are role models for students and must themselves show elegance and distinction.

      The work of restoring authority and managing difficulties must be "built collectively" within schools, learning from both mistakes and good practice.

      The SCORE programme (inspired by the shared concern method) for difficult classes is presented as an example of a collective approach centred on having the students themselves find solutions.

      Action Points Suggested by Jean-Pierre Bellon:

      Develop specific training for teachers on managing difficult behaviour and applying protocols.

      Draw up a clear, ranked list of school infractions.

      Set up a system of systematic reporting of all infractions.

      Remove teachers' discretionary power to punish and entrust it, along with sanction proposals, to a dedicated committee.

      Rethink and adapt sanctions at primary level, notably by exploring spaces and organisation that allow frustrating sanctions (e.g., a staggered break time).

      Generalise "internal" temporary exclusion. Establish a clear, firm rule on mobile phone use in class.

      Launch a long-term reflection on school architecture to better manage school space and time.

      Develop protocols for receiving families and managing relations with them, particularly at primary level, and strengthen institutional support for front-line professionals.

      Collectively build, within each school, charters or non-negotiable rules on behaviour, respect and dress.

      Use collective approaches such as SCORE to manage difficult classes.

      Conclusion:

      Jean-Pierre Bellon proposes a comprehensive approach to the crisis of authority at school, identifying the links between this phenomenon, bullying and classroom disruption.

      His proposals aim to professionalise the handling of incivility and conflict, to clarify the rules and the consequences of breaking them, to rethink sanctions so as to make them more educational, and to strengthen the protection and institutional support of education professionals.

      The emphasis is on the need for collective action within schools and a reaffirmation of institutional verticality in order to face down challenges and guarantee a calm school climate.

    1. Faire collectif, une dynamique pour l'École ("Building a collective: a dynamic for schools") - Parlons pratiques ! #45

      25 Dec. 2024, Extraclasse. Teaching is often said to be a rather solitary profession.

      Yet the class, the teaching team of a school, a subject-based working group: these are all collectives, chosen or imposed, in which a teacher operates every day.

      What conditions make "working as a collective" easier?

      Is it seeing the value of doing things together, of sharing a common vision?

      What kind of leadership is best suited to sustaining the momentum of a collective of professionals?

      What added value can be expected from it?

      "Working as a collective" is too often an injunction, or even a professional blind spot.

      Building on the successful and inspiring experience of a MEEF master's programme, this episode offers principles and very concrete keys to apply in each of your contexts, to build together in the service of student success.

      With: Laurent Soligny, professeur agrégé of physical education and teacher trainer, co-head of the PE track at the Inspé Normandie-Rouen-Le-Havre, based at the UFR STAPS of the Université de Rouen-Normandie.

      Charles Nicaud, PE teacher, doctoral student at the Université de Rouen-Normandie, member of the CIRNEF laboratory (Centre interdisciplinaire de recherche normand en éducation et formation) and teacher trainer at the Inspé Normandie-Rouen-Le-Havre.

    1. Reviewer #1 (Public review):

      Summary

      In this study, the authors have performed tissue-specific ribosome pulldown to identify gene expression (translatome) differences in the anterior vs posterior cells of the C. elegans intestine. They have performed this analysis in fed and fasted states of the animal. The data generated will be very useful to the C. elegans community, and the role of pyruvate shown in this study will result in interesting follow-up investigations.

      However, several strong claims made in the study are solely based on in silico predictions and are not supported by experimental evidence.

      Strengths:

      Several studies in the past have predicted different functions of the anterior (INT1) vs posterior (INT2-9) epithelial cells of the C. elegans intestine based on their anatomy and ultrastructure, but detailed characterization of differences in gene expression between these cell types (and whether indeed these are different 'cell types') was lacking prior to this study. The genes and drivers identified to be exclusively expressed in the anterior vs posterior segments of the intestine will be very helpful to selectively modulate different parts of the C. elegans intestine in future studies.

      Another strength of this study is the careful experimental design to test how the anterior vs posterior cell types of the intestine respond differently to food deprivation and recovery after return to food. These comparisons between 'states' of a cell in different physiological conditions are difficult to pick up in single-cell analyses due to low sequencing depth, which can fail to identify subtle modulation of gene expression.

      The TRAP-associated bulk RNA-seq approach used in this study is more suitable for such comparisons and provides additional information on post-transcriptional regulation during metabolic stress.

      A key finding of this study is that pyruvate levels modulate the translation state of anterior intestinal cells during fasting. Characterization of pyruvate metabolism genes, especially of the enzymes involved in its mitochondrial breakdown, provides novel insights into how gut epithelial cells respond to the acute absence of food.

      Weaknesses:

      Unlike previous TRAP-seq studies (PMID: 30580965, 36044259, 36977417) that reported sequencing data for both input and IP samples, this study only reports the sequencing data for IP samples. Since biochemical pulldowns are variable across replicates, it is difficult to know if the observed differences between different conditions are due to biological factors or differences in IP efficiency. More importantly, since two different TRAP lines were utilized in this study and a large proportion of the results focus on the differences between the translational profiles of INT1 vs INT2-9 cells, it is essential to know if the IP worked with similar efficiency for both TRAP strains that likely have different expression levels of the HA-tagged ribosomal protein. One way to estimate this would be to perform qRT-PCR of genes that are known to be enriched in all intestinal cells and determine whether their fold-enrichment over housekeeping genes (normalized to input) is similar in INT1 vs INT2-9 TRAP strains and across the fed vs fasted conditions. The authors, in fact, mention variability across biological replicates, due to which certain replicates were excluded from their WGCNA analysis.

      It appears that GFP expression is also detectable in INT2 (in addition to strong expression in INT1 in Fig.1A). Compared to INT3-9, which looks red, INT2 cells appear yellow, suggesting that the expression patterns of the two TRAP drivers are not mutually exclusive, which changes the interpretation of many of the results described in the study.

      Some parts of the study overemphasize the differences between the INT1 vs INT2-9 cell types, which is a biased representation of the results. For example, the authors specifically point out that 270 genes are differentially expressed in opposite directions in INT1 vs INT2-9 cell types during acute (30 min) fasting without mentioning the 1,268 genes that are differentially expressed in the same direction. They also do not mention here that 96% of the genes are differentially expressed in the same direction in INT1 and INT2-9 cell types after prolonged (180 min) fasting, suggesting that the divergent translational responses of these cell types are only observed in the first 30 minutes of food deprivation. Similar results have also been reported for the effect of fasting on locomotory and feeding behaviors, where 30 min of fasting produces more variable effects, which become more consistent after longer periods of fasting (PMID: 36083280). Hence, the effects of brief food deprivation should be interpreted with caution.

      Many of the interpretations of this study primarily rely on pathway enrichment analyses, which are based on the known function of genes. The function of uncharacterized genes that were found to be differentially expressed in INT1 vs INT2-9 cell types, e.g., the ShKT proteins, was not explored in this study. In addition, overreliance on pathway enrichment tools (instead of functional validation) has resulted in several conflicting findings. For example, one of the main messages of this study is that INT1 cells specialize in immune and stress response in response to fasting, which relies on pathway analysis in Figs 5E and 5F. However, pathway analysis at a different time point (shown in Figure S5A) indicates that INT2-9 cells show a much stronger increase in translation of stress and pathogen-responsive genes compared to INT1 cells. Hence, some of the results should be interpreted as different translational effects in INT1 vs INT2-9 cells after different lengths of food deprivation, without making broad claims about selective pathways being affected only in specific cell types.

      The authors have compared their TRAP-seq results with genes enriched in the anterior and posterior intestine clusters from a previously published whole-animal adult scRNA dataset (PMID: 37352352). They claim that their TRAP-seq results are in agreement with the findings of the scRNA study. However, among the 10 genes from the 'posterior intestine' scRNA cluster in Fig.S1E, six are downregulated in the INT1 vs INT2-9 comparison, while four are upregulated. Hence, there is no clear agreement between the two studies in terms of the top enriched genes in the anterior vs posterior intestine, which should be considered for cross-study comparisons in the future.

      The authors describe in the manuscript that they have performed INT1-specific RNAi for two C-type lectin genes that are upregulated during fasting. Due to a recent expansion of C-type lectin genes in C. elegans, there is a high chance of off-target effects of RNAi that is designed for members of this gene family. More trustworthy results could have been obtained using CRISPR-based loss-of-function alleles for these genes, one of which is publicly available. Also, the authors do not provide any explanation for why knockdown of these stress-response genes, which are activated in INT1 cells in response to food deprivation, results in improved resistance to pathogens. This, in fact, suggests a role of INT1 cells in increasing pathogen susceptibility, and not pathogen resistance, during food deprivation.

      Many of the studies in this field (e.g., references 2-4 in this article) have investigated the effects of food deprivation ranging from 4 hr to 24 hr, which results in activation of starvation responses in C. elegans. In contrast, the authors have used shorter time periods of fasting (30 min and 180 min), and most of their follow-up experiments have used 30 min of food deprivation. Previous work has shown that the effects of food deprivation can either accumulate over time (i.e., the effect gets stronger with longer food deprivation) or can be transient (i.e., only observed briefly after removal of food and not observed during long-term food deprivation). Starvation-induced transcription factors such as DAF-16/FoxO and HLH-30 show strong translocation to the nucleus only after 30 min of fasting. Though gene expression changes in all stages of food deprivation are of biological relevance, the authors have missed the opportunity to explore whether increased INS-7 secretion from the anterior intestine is dependent on these starvation-induced transcription factors (which can be easily tested using loss-of-function alleles) or is due to other fast-acting regulatory mechanisms induced due to the absence of food contents in the gut lumen. A previous study (PMID: 40991693) has shown that DAF-16 activation during prolonged starvation shuts down insulin peptide secretion from the intestinal epithelial cells. Hence, it is not clear if increased INS-7 secretion is only a feature of short-term food deprivation or is also a signature of long-term starvation (e.g., at 8 hr or 16 hr timepoints). Since most of the INS-7 secretion data in this study are for 30 min of fasting, it remains unknown whether the discovered regulators of INS-7 secretion can be generalized for extended food deprivation that triggers major metabolic changes, such as fat loss (e.g., conditions shown in Figure 1D).

      Two previous studies (PMID: 18025456, 40991693) have shown a strong reduction in the expression of ins-7 in the anterior intestine using GFP-based reporters (both promoter fusions and endogenous CRISPR-generated) and in whole-animal RNA-seq data from starved animals. These results are in contrast to the increased INS-7 secretion from INT1 cells during fasting that is reported in this study. The authors here have reported that INS-7 translation is higher in INT1 compared to INT2-9 during fed, acute fasted, and chronic fasted conditions, but they have not shown whether INS-7 translation is upregulated during acute and chronic fasting in INT1 cells in their TRAP-seq analysis. Knowing whether increased INS-7 secretion during acute fasting is due to increased transcription, translation, or secretion of INS-7 is crucial to resolve the discrepancy between these studies.

    1. Synthesis Document: Reflections on Rebuilding the Educational Community after a Trauma

      Executive Summary

      This synthesis document analyses the reflections drawn from Benoît Hommelard's book, Arras, après l'attentat : manifeste pour une cité scolaire nouvelle, and from discussions with Luc Ferry, Inspector General of the Éducation nationale.

      The attack at the Gambetta-Carnot cité scolaire (school complex) in Arras serves as a catalyst for a deep reflection on resilience, crisis management and the redefinition of the educational project.

      The critical points that emerge are as follows:

      1. Crisis Management and Resilience: The aftermath of trauma demands time, prolonged psychological support, and tight-knit collective management to shield the community from media pressure.

      Crisis-management procedures, from the most ordinary to the most serious, are fundamental to establishing a lasting sense of security.

      2. The School Complex as a "Laboratory of Possibilities": Complex institutions, through the diversity of their students and tracks (collège, lycée, preparatory classes, BTS, boarding), are fertile ground for creating coherent, inspiring educational pathways, prefiguring a broader "cité éducative" model.

      3. A Plea for Boldness and Autonomy: The education system suffers from administrative and bureaucratic blockages that stifle innovation and the momentum of teams.

      Greater flexibility, the right to make mistakes, and more local decision-making ("think globally, act locally") are needed to respond effectively to urgent needs on the ground.

      4. The Centrality of the Human: Management founded on recognising the "human riches" of every actor is essential.

      This means spotting talents, making dialogue bodies genuinely participatory, and placing empathy at the heart of professional relationships.

      5. The Vision of a School in Motion: The "new cité scolaire" is not a fixed model but a living organism in constant adaptation.

      It is built on flexibility, a strengthened collective, and a shared culture of republican values, with the success of every member of the educational community as its goal.

      1. Trauma as the Starting Point for New Thinking

      The attack at the Gambetta-Carnot cité scolaire in Arras was a major shock for the educational community and the nation. The book by Benoît Hommelard, a member of the school's leadership team for nine years, is not intended as an investigation into the facts but as a manifesto for thinking about the future.

      The Meaning of Writing: Writing served as a personal "catharsis" for the author, but above all aims to offer support to educational communities. The goal is to trace positive perspectives, "happier educational tomorrows", and to avoid sinking into pessimism.

      A Forward-Looking Intent: Rather than seeking culprits, the book asks how to build "the after". It questions the school's capacity to keep young people within the framework of republican values (liberty, equality, fraternity, laïcité), noting that the attacker, a former student, went astray after leaving the school system.

      Proposing a New Project: The ambition is to imagine a new collective project, not only for the Arras cité scolaire but for all schools, in order to rally energies after a tragedy.

      2. Managing the Aftermath: Resilience and Humanity

      Managing a tragedy of this magnitude reveals major human and organisational challenges. The Arras experience, set against that of the assassination of Samuel Paty, highlights several imperatives.

      The Importance of the Long Term: Resilience is a very slow process. Luc Ferry recalls that at Samuel Paty's collège, the teachers were only able to begin speaking collectively about the events after two years.

      Preserving the Collective: Faced with the tragedy, the priority is immediate collective support, avoiding the search for culprits. The Gambetta-Carnot community protected itself by limiting "on-the-spot" testimony and refusing media "sensationalism". A year later, this protective stance was still in place.

      Psychological Support: Setting up listening units is crucial, and their work must continue over time (more than a year in some cases) to accompany the calming and psychological rebuilding of all actors (staff and students).

      The Perception Gap: An "extremely violent hiatus" can arise between staff and students. The latter may give the impression that "life is taking over again" quickly (laughter in the playground three days after the tragedy), while the trauma remains present but unspoken.

      The Need for Procedures: Crisis prevention and management are built on ordinary acts. Putting in place clear, shared procedures for handling everyday incidents (lateness, insults, fire alarms) is what founds the sense of security. Knowing that a collective, structured response exists means no one feels alone in the face of difficulty.

      3. The School Complex as a "Laboratory of Possibilities"

      Benoît Hommelard borrows the expression "laboratory of possibilities" to describe the unique potential of a complex structure like the Gambetta-Carnot cité scolaire. This diversity becomes an asset for building pathways and strengthening cohesion.

      | Characteristics of the School Complex | Educational Potential |
      | --- | --- |
      | Merged collège and lycée | Facilitates transitions between cycles and continuity of pathways. |
      | Diversity of students and tracks | Collège students, lycée students (general, STMG, STI2D), higher-education students (preparatory classes, BTS). |
      | Distinctive offerings | Rare bilingual sections (Russian, Chinese) from Year 7 (6e) to attract varied profiles. |
      | Mixed boarding school | The boarding facility, housing collège, pre-baccalauréat and post-baccalauréat students, is seen as the "engine" of the whole, fostering social mixing and the discovery of pathways. |

      A Model of Territorial Networking: This structure exemplifies networked working. It prefigures the "cités éducatives" model, which aims to federate all the partners of a territory (primary schools, collèges, lycées, associations, the town) to pool resources and build more coherent pathways for students.

      4. Systemic Challenges: A Plea for Boldness and Local Autonomy

      A chapter of the book, titled "De l'audace, encore de l'audace, toujours de l'audace" ("Boldness, more boldness, always boldness"), highlights the structural obstacles that hamper initiatives within the Éducation nationale.

      Obstacles to Initiative:

      Fear of Risk: A culture in which people are afraid to launch any project not prescribed by a circular or a directive from above, for fear of being "singled out".

      Administrative Burden: Innovative projects are often blocked by administrative "labyrinths", complex paperwork, and very long response times.

      A Concrete Example: A project on school climate, launched in response to an emergency, can remain bogged down for more than a year and a half awaiting budget approvals, thereby losing all its meaning.

      The Right to Make Mistakes: It is essential to establish a culture in which people can "try things and acknowledge when they haven't worked".

      The Need for Local Decision-Making: To be effective, decisions must be taken as close to the ground as possible. The maxim "think global, act local" implies reducing the number of intermediaries (departmental, academy-level, national) who lengthen delays and disconnect the solution from the original problem.

      5. The Human Factor: Pillar of Recovery and Management

      At the heart of the proposed vision is the human being. Educational management cannot be purely administrative; it must rest on the quality of relationships.

      Loving People: The foundation of successful management is the ability to build bonds and to share happy events as well as the most painful ones. This is what makes it possible to find levers for solving problems.

      The "Director of Human Riches": The author rejects the term "HR Director" in its classic managerial sense and adopts a phrase coined by a dissertation jury: "Directeur des Richesses Humaines" ("Director of Human Riches"). The head of school's role is to detect the talents, added value, and richness of each staff member so that the organization works better.

      Making Governance Bodies Come Alive: To "humanize" leadership, the official bodies (Conseil de la Vie Collégienne, Conseil de la Vie Lycéenne, etc.) must become genuine spaces for expression and co-decision, not formal meetings held to "tick boxes". The example of a student-led project to bring an animal into the middle school illustrates how to involve the community in decisions.

      6. Continuing Education: A Strategic Lever for Change

      Training is presented as an essential tool for supporting change and helping practices evolve.

      Supporting Reforms: Faced with reforms such as the introduction of needs-based groups, the head of school's role is to organize training so that their teams "find their footing" and adapt the national mandate to the local context ("think globally, act locally").

      A Continuous Process: Train, Untrain, Retrain: Training must not be a one-off event. It is a necessary "obsession" for all actors, in order to adapt to a society and a youth that are evolving very quickly.

      The Active Role of the Leader: The head of school must not only identify needs but also follow up to see how training translates concretely into the classroom. They must encourage trained staff to "spread" their new skills among their colleagues.

      7. Perspectives on the "New School Complex"

      The conclusion of the discussions does not sketch a fixed, ideal school, but a dynamic, adaptable system.

      An Organism in Motion: The ideal school complex does not exist. According to Luc Ferry, the ideal lies in "movement": an organism that lives, develops, and progresses toward greater coherence and cohesion.

      Four Fundamental Feelings: A successful school strengthens four feelings among its members:

      1. A sense of safety.
      2. A sense of recognition.
      3. A sense of justice.
      4. A sense of belonging.

      Flexibility as the Key: Benoît Hommelard adds the notion of flexibility as an essential condition: flexibility in timetables, in administrative responses, in school architecture (flexible classrooms), and in the hierarchy, to allow more agile local action.

      A Shared Goal: The purpose of this new school complex is to help not only the students succeed, but also all the teams and stakeholders that make up the educational community. The discussion ends on a note of hope, moving from tragedy to a positive vision for the future of the education system.

    1. Briefing Note: Eco-delegates, the Power to Act

      Executive Summary

      This briefing note synthesizes the perspectives and analyses from the podcast "Éco-délégués : donnons-leur le pouvoir d'agir" ("Eco-delegates: let's give them the power to act").

      The document highlights the complex role of eco-delegates, the high expectations placed on them (they are described as "ordinary heroes"), and the many facets of their commitment.

      Students' motivations run deep, ranging from a desire to act for the planet, often fueled by a degree of eco-anxiety, to a sense of responsibility and the influence of those around them.

      The analysis reveals that the scheme's effectiveness depends critically on adult support. A major pitfall, "adultism", in which projects are imposed by adults, must be avoided in favor of an approach that lets students propose, build, and lead their own initiatives.

      The referent's role is to strike a delicate balance between listening, logistical support, and providing impetus, so as to turn ideas into concrete actions.

      The projects undertaken vary considerably, from classic eco-gestures (waste sorting) to ambitious transformations of the school (greening the schoolyard, creating well-being areas), and extend beyond ecology to encompass all of the Sustainable Development Goals (SDGs), such as equality between girls and boys.

      However, many obstacles slow their action: administrative slowness ("adult time"), financial constraints, a lack of recognition from peers, and the absence of institutionalized time for their activities.

      The dynamic depends heavily on the school's governance, described as a "roller coaster".

      Despite these challenges, serving as an eco-delegate is deeply formative. It develops students' self-confidence, sense of citizenship, and "power to act".

      The scheme also transforms the adults involved, changing how they see students and their own teaching practices, and it has the potential to catalyze positive change across the school and its territory.

      --------------------------------------------------------------------------------

      Detailed Analysis

      1. Portrait of the Eco-delegate: Motivations and Identity

      The Motivations Behind the Commitment

      Students' commitment as eco-delegates is driven by a set of deep motivations identified by researcher Eveline Bois:

      Acting for the planet: The primary motivation is the desire to "save the world", to change it at their own scale.

      Students express an acute awareness of environmental degradation and the climate emergency, which can generate eco-anxiety. Action then becomes a way of fighting it.

      A sense of responsibility: Young people feel responsible for the future and see the school as a good scale at which to start acting.

      External influences: Family and friends play a significant role. Some students get involved to follow the example of parents active in associations (e.g., "zero waste") or to share an experience with their classmates.

      Usefulness and participation: As Laur, an eco-delegate for four years, puts it, the desire to "be useful to life at the middle school" and to get involved is an important driver.

      Recruitment Methods and Profiles

      How eco-delegates are designated influences the group's dynamics. Sandrine Aoussour, a referent teacher, opted in her middle school for a volunteer-based system, open throughout the year. This choice aims to:

      • Guarantee "genuinely motivated" students.

      • Avoid the competition inherent in an election.

      • Create a "core of truly motivated students" while allowing flexibility (the option to join or leave the group).

      2. The Crucial Role of Adult Support

      The Pitfall of "Adultism"

      Eveline Bois warns against "adultism", a tendency among adults to design schemes for students while downplaying their capacity to make choices and proposals.

      The testimony of teacher Raphaël Grass is emblematic: he began by bringing in the projects himself, before realizing that they did not match students' expectations and deciding to "give them the floor".

      Consequence: A gap opens up between what students want (to save the world) and the actions they are offered (installing an ashtray outside the high school).

      Solution: Support must evolve toward trusting students and leaving them the initiative.

      The Referent: A Tightrope Walker between Support and Autonomy

      The role of the referent teacher or CPE is central and complex. It is not about directing but about facilitating.

      Starting from students' concerns: Sandrine Aoussour stresses the importance of starting from students' ideas (creating a vegetable garden, a "zen corner") and helping turn them into reality by finding solutions (participatory budgets, partnerships).

      Proposing without imposing: The adult can also put forward ideas (installing a beehive through a foundation), but these proposals are submitted to the students.

      A need for "enterprising" adults: From the perspective of eco-delegate Laur, students expect adults to be even more proactive in helping them carry out their most ambitious projects, such as greening the schoolyard.

      Testimonies from former high school eco-delegates confirm this need for balance: they recommend an "autonomous body with a certain flexibility", where they can work on their own so that everyone speaks freely, while still benefiting from adult support on logistical and financial matters.

      3. From Eco-gestures to Lasting Transformation

      The Spectrum of Actions

      The projects led by eco-delegates span a wide range, from symbolic actions to structural transformation of the school.

      | Type of Action | Concrete Examples from the Podcast |
      | --- | --- |
      | Classic eco-gestures | Collecting bottle caps and paper, fighting food waste, emptying the recycling bins. |
      | Improving the school environment | Installing a "zen corner", plants in classrooms, an observation beehive (apiscope), insect hotels. |
      | Ambitious, structural projects | Creating a vegetable garden, a biodiversity zone, a project to green the schoolyard. |
      | Awareness-raising and citizenship | Awareness campaigns in classes, organizing a "climate demonstration at the middle school". |
      | Social actions (SDGs) | Finding sponsors for a dispenser of menstrual products, redesigning the schoolyard for better equality between girls and boys, a collection for the Secours populaire. |

      Moving beyond the "Green Showcase"

      Eveline Bois highlights the risk that projects remain a mere "green showcase". Moving toward lasting transformation depends on several factors:

      1. The quality of support: Good support makes it possible to go beyond surface-level actions and tackle larger-scale projects.

      2. Broadening the themes: The commitment goes beyond ecology in the strict sense to include the 17 SDGs, such as animal welfare or gender equality.

      Sandrine Aoussour cites the example of a schoolyard redesign project initiated by the girls to counter the boys' occupation of the space.

      3. Friction with reality: Ambitious projects confront students with the realities of the adult world: seeking funding (quotes, sponsors), complex rules (street furniture), and administrative timeframes.

      4. Obstacles and Limits to Action

      The commitment of eco-delegates runs up against systemic and cultural difficulties.

      Time and Money:

      "Adult time": Students discover how slow decision-making and implementation processes are, which can be a source of frustration.

      Funding: Raising money is a major obstacle. Students realize that projects come at a high cost (e.g., "a bench is extremely expensive").

      Timetabling: There is no dedicated institutional time. Meetings take place during the lunch break, after classes or, more rarely, during class time, which raises questions of organization and fairness.

      Institutional and Social Obstacles:

      Governance: The school leadership's support is crucial but fluctuating. Sandrine Aoussour speaks of a "roller coaster" depending on the management teams in place.

      Lack of recognition: Eco-delegates may suffer from a lack of recognition from their classmates ("what are you even for, there are already class delegates").

      Valuing the commitment: How to give their commitment formal recognition (for example, in the school record) remains an open question, in order to avoid disengagement.

      5. The "Power to Act": Impacts and Benefits

      Despite the obstacles, when the scheme works well it has a deeply positive impact on all those involved.

      For students:

      Personal development: Greater self-confidence, the joy of sharing and carrying out collective projects.

      Skills development: Public speaking, project management, argumentation, and so on.

      Civic development: The scheme is a concrete apprenticeship in citizenship. Some students continue their commitment outside the middle school (e.g., with the Secours populaire).

      A sense of empowerment: "The young people interviewed who feel free and trusted maintain a strong sense of their power to act" (Eveline Bois).

      For teachers and the school:

      Professional fulfillment: Referents speak of "joy" and of "privileged contact" with students.

      Transformed practices: Serving as a referent changes adults' view of students' potential and can transform their classroom practices.

      Collective dynamics: A successful project can radiate outward and involve the entire educational community (administrative staff, school leadership, support staff), becoming a genuine school-wide project.

      Key Concepts and Inspirations

      From surface responsibility to integral responsibility: Eveline Bois cites researcher Luce Sauvé to distinguish two approaches to eco-citizenship:

      1. Surface responsibility: Limited to "good gestures" and a normative vision (eco-civility).

      2. Integral responsibility: Involves "critical reflection, a power to act, and participation in democratic life".

      The goal is to move toward this second approach.

      Happiness as a collective project: Sandrine Aoussour draws on a UNESCO report to emphasize that happiness at school is a community project.

      The "joy of discovering oneself capable of serving a collective" is a powerful lever for learning.

      Empowerment: This notion ("empouvoirement" in the original) sums up the scheme's ultimate goal: giving real power to students and teachers so that they become the drivers of change.

    1. Briefing: Summary of the Meeting with Émilie Hanrot

      Executive Summary

      This briefing document analyzes the themes and key ideas from the meeting with Émilie Hanrot, a primary school teacher of 20 years and an educational content creator.

      The discussion highlights her "Kiffer l'école" philosophy ("loving school"), which rests on the teacher and the students both taking pleasure in learning.

      At the heart of her approach is the primacy of the child over the pupil, which entails taking a holistic view of the child's physiological, emotional, and movement needs.

      Hanrot redefines authority as a relationship of mutual trust and autonomy rather than a power struggle.

      She insists on the importance of the teacher's well-being, cultivated through personal work on serenity and joy, as a prerequisite for a positive classroom climate.

      Finally, she clarifies her role as a content creator, positioning herself not as an institutional "trainer" but as a practitioner who shares her field experience, thereby meeting a crucial need for support and practical resources expressed by her community.

      1. The "Kiffer l'école" Philosophy

      The central notion developed by Émilie Hanrot is "kiffer l'école" (roughly, "loving school").

      This choice of words, though sometimes seen as unacademic, faithfully reflects her pedagogical approach.

      Fundamental Principle: Pleasure must be at the heart of the school experience, for the students and the teacher alike.

      She says: "I can't see myself doing this job without taking pleasure in it myself. So the enjoyment goes both ways: I try to give some to my class, and I receive a lot in return."

      Genesis of the Project: The idea grew out of classroom anecdotes accumulated on her smartphone, which first gave rise to a self-published book, "C'est quand l'avait cré".

      This desire to recount everyday classroom life then extended to a blog, and later to video platforms.

      Growth on Social Media:

      YouTube: Launched during lockdown to stay in touch with the families of her petite section (preschool) class in a priority education area.

      The videos, initially private, were made public in response to a request, marking the start of her community.

      Instagram: Used later for shorter formats (Reels), which considerably accelerated the growth of her audience.

      2. The Child at the Heart of the System: Beyond the Pupil

      A major theme of the talk is the crucial distinction between the notion of the child and that of the pupil, which tends to dominate the French school system.

      The Essential Reminder: Hanrot quotes a remark from her partner that marked her early career: "Don't forget that they are children".

      She stresses that they are not "just children" but truly "children", with everything that implies.

      School must not address only "brains to be fed", but whole individuals.

      Taking Needs into Account:

      Physiological needs: It is impossible to teach effectively a child who is hungry, thirsty, sleepy, or needs the toilet.

      Emotional needs: A child who has just been through a conflict cannot concentrate calmly on learning.

      The need for movement (embodiment): As someone with a great need for movement herself, she systematically arranges her classrooms so that children can move around and lie down, and she provides noise-cancelling headphones and objects to handle.

      The Sports Coach Analogy: She compares a good teacher to a good sports coach, who does not focus solely on technical performance but considers the individual as a whole, making sure participants enjoy themselves even during repetitive, difficult exercises.

      3. Authority through Trust and Autonomy

      Émilie Hanrot offers a vision of authority that moves away from control and rests on a relationship of trust.

      Defining Authority: Authority does not come from fear or a loud voice. "Having authority is really this: having mutual trust."

      It is built by giving students trust, autonomy, and responsibilities.

      Flexibility and Framework: The teacher embodies the framework, but must know how to be flexible.

      Authority shows in the ability to obtain attention and calm when necessary, precisely because trust has been established.

      A Concrete Example: She recounts letting two students work under a table because they felt better there ("less noise, it's easier").

      This act of trust, this "letting go", strengthens mutual respect and the teacher's authority for the moments when a strict framework is required.

      The "Teacher's Cape": This concept describes the multifaceted role the teacher takes on.

      The Guide: The one who "steers the ship", keeps the course, posts the timetable, and makes sure everyone knows why they are there.

      The Unifier: The one who creates a positive, united group atmosphere.

      The Guardian of the Rules: The one who systematically intervenes when words or behavior are unacceptable.

      The Transmitter: The one who teaches the Éducation Nationale curriculum.

      The Magician: The one who sparks curiosity, inspires interest, and knows how to make students laugh to lighten the mood.

      The example of the experiment on the states of water (solid, liquid, and gaseous, with a kettle) illustrates this ability to turn a lesson into a "magical" moment.

      4. The Relationship with Parents

      Building an alliance with families is a cornerstone of her practice, though she acknowledges it takes constant work.

      Building Trust: She insists on the need to create a bond of trust from the start of the year, especially with the parents of students with difficult behavior.

      A Key Anecdote: Faced with a mother who described her preschool-age son as "difficult", Hanrot chose not to go along with this but instead reframed the child's behavior positively: "I think your child is very happy to be at school [...] your son is very curious".

      This choice of words made it possible to establish a positive, trusting relationship.

      Making School Transparent: She points out that many parents are far removed from the school system and do not understand its codes. It is therefore crucial to:

      ◦ Welcome parents every morning with a personal word.

      ◦ Explicitly invite them into the classroom to look at the displays or stay a while.

      ◦ Take the time to explain how the school works.

      5. The Teacher's Well-being: Serenity and Joy

      Hanrot asserts that the ability to create a calm classroom climate depends largely on the teacher's personal well-being.

      Resources for Serenity:

      Personal work: Psychotherapy helped her "free up space on her bandwidth" and find calm.

      A positive outlook: She cultivates a natural tendency to see the positive, "a virtuous circle".

      Cultivating contemplation: Knowing how to pause and appreciate small things (old stones, a ray of sunshine).

      Social relationships: As an extrovert, she draws her energy from contact with others.

      Knowing how to say no: Learning to refuse uncomfortable situations in order to protect herself.

      Managing Emotions in Class: She admits she is not "exemplary" and that patience comes more easily with children than with adults.

      When she raises her voice, she does not hesitate to apologize to the children: "I ask your forgiveness for raising my voice".

      6. Professional Development and Her Role on Social Media

      Émilie Hanrot details her continuing-education path and how she sees her role as a content creator.

      Training Path: Her professional development was largely self-directed, fueled by her curiosity.

      | Resource | Description |
      | :--- | :--- |
      | Books | Introduced by her sister to Nonviolent Communication (NVC). |
      | Paid workshops | Attended NVC workshops on her own time. |
      | Institutional training | Took a 3-day course on Positive Discipline as part of her position. |
      | Conferences | On topics such as ordinary educational violence. |
      | Self-training | Intensive listening to podcasts and TED talks on education and neuroscience. |
      | Alvarez pedagogy | Took a 3-day training with Céline Alvarez, from whom she drew much inspiration (while allowing herself to adapt, and to drop some practices such as the ellipse on the floor). |

      "Sharer" vs. "Trainer":

      ◦ She does not feel legitimate as a "trainer", since she has not been officially "certified" as one and her process is different: she shares field experience alone, in front of her phone.

      ◦ She sees herself instead as a peer supporter who shares what has worked in her classroom. Her legitimacy comes from "lived experience", from the fact that she is a full-time teacher.

      ◦ She finds moving to in-person (synchronous) training "so much harder" than (asynchronous) content creation, because of the direct interaction and the need to manage the dynamics of a group of adults.

      Her Community's Needs: Feedback from her followers points mainly to difficulties in managing students with challenging behavior and to a need for training in psychosocial skills.

      7. Teaching Strategies and Practical Advice

      Over the course of the discussion, several concrete classroom-management strategies were shared.

      Managing Disruptive Behavior:

      Act, don't react: Pause to choose your response rather than replying impulsively.

      Co-constructing solutions: Involve students in solving a problem (e.g., noise in the corridor).

      By being honest about your own feelings ("I don't feel very well"), you make them responsible, and by accepting all their ideas, even far-fetched ones, you find a solution together.

      Positive reinforcement: Explicitly congratulate those who adopt the expected behavior.

      Focusing on the skill to be acquired: Rather than saying "don't run", accompany the child by taking their hand and verbalizing the positive action: "well done, you walked calmly and quietly".

      Classroom Organization:

      Student autonomy: The key is to organize the classroom so that students who are not directly supervised stay occupied in an autonomous, meaningful way (workshops, games, etc.), allowing the teacher to work with very small groups (3 to 4 students at most).

      Arranging the space: You have to dare to experiment (removing benches, working on the floor, using chairs to form a circle) and adapt to the number of students.

      For overcrowded classes, solutions such as standing work stations (high "mange-debout" tables) can free up space.

    1. Children's Rights: Transforming School from Within

      Executive Summary

      This briefing document analyzes the strategies and impacts of placing children's rights at the heart of how schools operate.

      Based on the testimonies of experts and practitioners, the analysis shows that although France ratified the International Convention on the Rights of the Child (CIDE) in 1990, its application remains uneven, particularly regarding students' rights to expression and participation.

      The recommended approach goes beyond simply teaching rights in theory: it embodies them in the posture of adults, in interpersonal relationships, and in the very organization of the school.

      UNICEF's "École amie des droits de l'enfant" (Child Rights Friendly School) program serves as the central model, illustrating an approach that aims for deep, lasting cultural change.

      This method relies on a participatory diagnosis, the involvement of the entire educational community (teachers, students, staff, parents), and the use of concrete tools such as the "exploratory walk" to assess the school environment from the child's point of view.

      The benefits identified are significant: a marked improvement in school climate, greater respect for oneself and others, and the early development of citizenship skills.

      Data from international experience show an increase in students' sense of safety and of being listened to, as well as in their ability to influence the decisions that concern them.

      However, implementation faces major challenges, such as the prevalence of "adultism" (the tendency of adults to decide in children's place) and the perception of an increased workload for teachers.

      The key to success lies in long-term commitment, treating these programs not as one-off initiatives but as a fundamental investment in forming active, responsible citizens.

      The State of Children's Rights in the French Education System

      The International Convention on the Rights of the Child (CIDE): An Under-applied Legal Framework

      The CIDE, adopted by the United Nations in 1989 and ratified by France in 1990, is the legal foundation of children's rights.

      This 54-article text protects individuals from birth to age 18 and covers all of their fundamental rights.

      However, according to Valérie Becket, a professor of education sciences, the convention's application in France is "uneven across domains".

      France is not considered a "good student", particularly on issues of expression and participation.

      Comparative surveys at the European level show that, despite mechanisms such as school councils and children's councils, a gap persists between the rights formally granted and what children actually experience, sometimes placing France at the bottom of the ranking.

      Julie Zarlot, of UNICEF France, notes that while France is exemplary in some areas, such as the overall right to health or to education, gaps remain for certain children who lack sufficient access to school, healthcare, or protection.

      How Rights Are Perceived at School

      The school environment carries tensions inherent in applying children's rights. Richard Côtier, a school principal, points out that "organization takes precedence over respect for each person".

      The focus on learning objectives can sometimes obscure the need to guarantee students' fundamental rights.

      The rights/duties balance: A frequent reaction from adults (teachers, parents) when children's rights are raised is to bring up duties.

      The answer given is that one person's right implies another person's duty to respect it. "The duty is the duty to respect everyone's rights, including one's own and those of others."

      A perception gap: Diagnoses carried out before projects begin often reveal a gap between how students, who experience the school from the inside, perceive it and how their parents, who remain outside it, do.

      This difference justifies the need to gather the views of all stakeholders.

      The "École amie des droits de l'enfant" programme: a transformative approach

      Philosophy and pedagogical approach

      The UNICEF programme is presented as a positive-prevention approach.

      Rather than fighting problems (such as bullying) "from the negative", it aims to "motivate everyone to ensure that everyone's rights are respected".

      UNICEF's pedagogical method, described as a "rights-based approach", rests on three pillars:

      1. Learning about rights: acquiring knowledge of the CIDE.

      2. Learning through rights: experiencing rights in daily practice, through the teacher's stance and the way the school operates.

      3. Learning for rights: becoming able to defend one's own rights and those of others.

      The goal is a "change in behaviour" and a "strengthening of capacities" for adults and children alike. This is not merely a transfer of knowledge but a deep transformation of how the school functions.

      Concrete implementation at the L. Martine school

      The school led by Richard Côtier, involved in the programme for a year and a half, illustrates this implementation.

      Steering committee (Copil): a committee was created to steer the project, renamed the "Conseil de vie citoyenne" to prepare pupils for middle school.

      Its distinctive feature is its breadth of membership: pupils, teachers, AVS (classroom assistants), maintenance staff and after-school activity leaders.

      This forum makes it possible to "think through, together and with our different perspectives, how the school operates from the standpoint of rights".

      The "exploratory walk": this hands-on tool consists of walking through the school while asking specific questions through the lens of a given right (e.g. safety).

      Pupils and adults observe and analyse specific places to determine whether they feel safe there, whether adults are perceived as a potential source of help, and so on.

      The approach grounds the initial diagnosis in objective terms, based on the child's perception and lived experience.

      A modest, gradual approach: the first phase consisted of providing training on children's rights in every class and setting up the participatory structures.

      The emphasis is on keeping annual objectives modest so that they are actually achieved and confidence in the process is maintained.

      The long term as a condition of success

      Richard Côtier insists that transforming a school's culture is a long process.

      He regards the three-year duration of the UNICEF programme as "just the injection, just the vaccine". In his view, it may take "another 5 or 10 years beyond that" before a school can claim to have durably embedded this culture.

      The aim is to create a lasting dynamic in which the school community sees a deep, irreversible change in how it operates.

      Impacts, Challenges and Scaling Up

      Measurable impacts on school climate and pupils

      The United Kingdom, where the programme has run for more than ten years in 4,500 schools, provides quantitative data on its impact.

      | Impact indicator (UK evaluation) | Key figures |
      | --- | --- |
      | Improved respect for self and others | 93% of children |
      | Increased feeling of safety at school | +5% |
      | Increased feeling of being listened to at school | +5% |
      | Increased ability to influence decisions | +14% of children |
      | Increased feeling of being respected by adults/peers | +11% |
      | Greater knowledge of their rights | +37% of children |

      Beyond the figures, the qualitative impact is the formation of future citizens who are not "relatively passive", but who have experienced that their voice can affect the world around them and that change requires collective commitment.

      Surmonter les freins et les obstacles

      L' "Adultisme" et la peur de la contestation : Valérie Becket identifie un frein majeur dans la tendance des adultes à voir les "risques" (désordre, contestation) de la participation des élèves plutôt que les "bénéfices" à long terme.

      Cette posture, qui consiste à décider "à la place de l'enfant", peut priver ce dernier d'expériences nécessaires à son développement.

      La charge de travail des enseignants : La crainte que ces programmes représentent une "couche" supplémentaire de travail est une objection fréquente. Julie Zarlot répond que les outils pédagogiques de l'UNICEF sont conçus pour être directement liés aux programmes scolaires, permettant aux enseignants de "piocher" dans différentes disciplines pour illustrer les droits "sans en avoir l'air".

      Au-delà du primaire : Application au collège et au lycée

      Valérie Becket note que l'enseignement secondaire dispose déjà de nombreuses structures de participation (Conseil de la Vie Collégienne, Conseil de la Vie Lycéenne, délégués). Cependant, leur existence ne garantit pas une meilleure application des droits ni une meilleure écoute des élèves.

      Elle suggère que des outils comme la "marche exploratoire" seraient très pertinents pour les adolescents afin d'analyser leur vécu de l'établissement.

      Surtout, elle insiste sur la nécessité de créer des "passerelles" entre le primaire et le secondaire pour assurer une continuité. Sans cela, un élève habitué à participer et à être écouté risque de subir un choc ("patatra") en arrivant dans un environnement où il "ne peut plus rien dire".

      Key Quotations

      Valérie Becket, on the most important right to work on at school: "The right to have a point of view."

      Richard Côtier, on the need for long-term commitment: "The UNICEF programme, for example, is planned over 3 years, and I think 3 years is just the injection, just the vaccine.

      It is not the time it will take to build a system in which this phenomenon has truly been taken into account."

      Julie Zarlot, on embedding rights in everyday life: "We can talk about children's rights and make them everyday and effective, almost without seeming to."

      Richard Côtier, on the risk of not encouraging participation: "The risk of raising pupils who do not participate [...] means we produce children who are relatively passive, who let others take the initiative because, in the end, no one asks their opinion."

    1. Mental Health at School: Current Situation, Challenges and Strategies

      Executive Summary

      This briefing analyses the critical state of pupils' mental health in the French education system, drawing on the testimony of field experts.

      It highlights a growing crisis, marked by rising rates of depressive disorders, addictions and suicide attempts among young people.

      Although the school system is aware of the stakes, it struggles to mount a response on the required scale, facing a shortage of human resources (school doctors and nurses) and a widespread training deficit among staff.

      In this context, teachers find themselves on the front line, acting as essential but often ill-equipped "sentinels".

      Two complementary strategies for action emerge:

      first, structuring intervention through certified mental health first aid training, such as the "AÉRÉ" protocol from PSSM France, which aims to give adults a secure framework for action;

      second, developing holistic educational projects, such as the "On se bouge" initiative, which place well-being and "living together" at the heart of learning, improving quality of life for pupils and teaching teams alike.

      The conclusion is clear: an approach combining adult training, a caring school environment and the involvement of young people themselves is essential to turn schools into places that promote mental health.

      --------------------------------------------------------------------------------

      I. The Diagnosis: A Growing Mental Health Crisis Among Young People

      A. The Scale of the Phenomenon

      Young people's mental health is no longer a taboo subject and is asserting itself with unprecedented urgency, to the point of being designated a "grande cause nationale" for 2025.

      Education professionals paint an alarming picture, corroborated by numerous studies.

      Unprecedented distress: Damien Duran, IAPR (Établissement et Vie Scolaire), testifies: "I have worked in national education since 1979 [...] and I have never seen so many young people in distress in schools: young people who are depressed, with behavioural disorders, various addictions and many suicide attempts."

      Greater visibility: Anaïs Mangin, a PE teacher, observes that pupils "are doing worse and worse and show it through their bodies or even on social media".

      An increasingly early problem: the crisis is not limited to adolescents.

      Bullying and behavioural disorders are increasingly present and "widespread" from primary level onward (nursery and elementary school).

      B. Identified Aggravating Factors

      Several societal and relational factors are identified as contributing to the deterioration of pupils' mental health.

      The impact of social media: the boundary between school life and private life has blurred, depriving young people of moments of respite.

      According to Anaïs Mangin, "young people basically get no break".

      Distrust of adults: a point Damien Duran considers fundamental is the "growing distrust of adults".

      This loss of trust is "the breeding ground for aggression" and a sign of anxiety about the future.

      II. The Institutional Response: Between Awareness and Lack of Resources

      The Ministry of Education acknowledges the scale of the challenge, but its capacity to act remains limited by structural constraints and a historical lack of preparation.

      A. An Institution in Difficulty

      According to Damien Duran, the institutional response is currently "weak relative to the scale of the phenomenon". Several friction points are highlighted:

      A shortage of qualified staff: the institution faces a glaring lack of school doctors, who are difficult to recruit, and an insufficient number of nurses, who are not present in every school.

      A training deficit: most staff have never been trained to deal with these issues.

      Training plans are being rolled out gradually, on the model of the Phare anti-bullying programme, but this "takes time".

      B. The Central but Complex Role of Teachers

      Teachers are at the heart of detection and first-line support, a role they take on with commitment despite the difficulties.

      "Sentinels" on the front line: PE teachers such as Anaïs Mangin see themselves as "sentinels" able to detect distress through pupils' body language, often before their colleagues do.

      The challenge of integration: the main difficulty for teachers is finding the time to fit mental health prevention and support around the demands of their curricula.

      Massive commitment: despite these obstacles, Damien Duran stresses teachers' remarkable involvement; they are "massively present" at training sessions on bullying and mental health, including outside school hours.

      III. Strategies for Action: Training and Field Projects

      Two complementary approaches are emerging in response to this crisis: structured staff training and innovative educational projects.

      A. Mental Health First Aid: Structuring Intervention

      The goal is to enable every adult to intervene "appropriately", avoiding the missteps that can make a situation worse.

      The PSSM France protocol ("AÉRÉ"): derived from an Australian method, this two-day training programme proposes a four-step intervention protocol for supporting a person in difficulty:

      • Approach the person, assess, and assist in a crisis.

      • Listen actively and without judgement.

      • Reassure and provide information about available help.

      • Refer to professionals (doctor, psychologist, etc.).

      The Ministry's protocol (DGESCO): this protocol is described as more "organisational". It aims to identify and map the resources and partners available inside and outside the school so that teams know where to turn.

      The two approaches are seen as perfectly complementary.

      B. The "On se bouge" Project: A Holistic Approach

      The project led by Anaïs Mangin at the Croix de Metz middle school is a concrete example of a school that cares for its pupils by weaving well-being into learning.

      | "On se bouge" project | Description |
      | --- | --- |
      | Core concept | "Learning differently" by combining PE with other subjects (history-geography, maths, physics) on weekly outings. |
      | Objectives achieved | Strengthening "living together" and creating a strong sense of belonging (to the class, to a small team). |
      | Mental health actions | Creation of six sports workshops on self-esteem, confidence and emotion management, in partnership with the local Centre Médico-Psychologique (CMP). |
      | Impact | Marked improvement in the well-being of pupils, and also of teachers, who work as a team and share the workload. |

      C. The Importance of Everyday Gestures and Training Young People

      Beyond structured programmes, improving mental health depends on simple actions and on involving pupils directly.

      The power of micro-interactions: Anaïs Mangin stresses the impact of simple gestures such as saying "hello", "have a good day" or "did you sleep well?", which can spark a relationship of trust and change the course of a pupil's day.

      Training young people: Damien Duran argues for training young people themselves in mental health first aid.

      His anecdote of a 6e (first-year middle school) pupil in tears, unreachable for him as an adult but supported by a classmate, illustrates that peers are often best placed to provide initial support.

      IV. Outlook and Recommendations

      A. Changing the Paradigm: From "School Climate" to "Quality of Life at Work"

      Damien Duran proposes a semantic and conceptual shift:

      "I think we should take an interest in pupils' quality of life at work and [...] draw the parallel with staff's quality of life at work."

      This approach frames well-being as a structural condition rather than a mere "atmosphere", recognising that staff suffering and pupil suffering are interconnected.

      B. The Benefits of Training for Staff

      Mental health first aid training offers concrete, far-reaching benefits, both professionally and personally.

      Reduced anxiety and greater competence: it helps people feel "less worried" when facing a crisis and know how to react.

      A change of stance: Damien Duran recounts how his training enabled him to "switch" from panic to action during a suicide attempt by his neighbour, by applying structured reasoning.

      Help for oneself and for others: the training changes how one sees others, strengthens empathy and provides tools to better support colleagues and loved ones.

      C. Key Resources

      The speakers shared several resources for exploring the subject further and taking action:

      1. Le Cartable des compétences psychosociales: a platform offering practical tools and activities (games, 10-to-30-minute exercises) for teachers, usable in class to work on conflict management and other skills.

      2. The PSSM France website (pssmfrance.fr): for information on mental health first aid training modules, available to staff or in a private capacity.

      3. The pupils' mental health protocol (DGESCO): an official document ("Du repérage à la prise en charge") that must be available and filled in at every school, listing local partners and the steps to follow in a crisis.

    1. Briefing Dossier: Free Play as an Essential Pedagogical Tool at School

      Executive Summary

      This document synthesises expert perspectives on the fundamental role of free play in child development and its implementation in schools.

      The analysis shows that free play is an essential activity, often misunderstood and undervalued, that contributes directly to the construction of the self, the development of thought and the acquisition of transversal skills.

      The critical points to remember are as follows:

      1. The nature of free play: play is, by definition, free. It is characterised by the player's decisions, the establishment of a "second-degree" frame (distinct from reality), the absence of real consequences ("frivolity"), internal organisation and uncertainty about its outcome.

      It differs radically from structured "educational" games, which are in fact disguised work with external objectives and expectations.

      2. The teacher's role: the required professional stance is that of an available observer and an architect of the play framework, not a directive participant.

      The teacher should act on the environment (layout of the space, choice of objects, ground rules) to let play happen, rather than directing the children's activity.

      3. Pedagogical benefits: although 80% of the processes at work in play are invisible, its benefits are profound.

      Free play fosters concentration, socialisation, creativity, the freeing up of speech, and experimentation without fear of failure.

      It provides a crucial space for emotional expression and a privileged vantage point for observing pupils' needs, particularly those with special educational needs.

      4. Putting it into practice: introducing free play in class rests on thoughtful design: well-defined spaces, good-quality, realistic objects in reasonable quantities, and careful preparation that gives the activity its due value.

      Tidying up becomes a pedagogical act in its own right.

      5. Universal relevance: the need for and benefits of free play are not limited to nursery school.

      It is just as relevant and necessary at elementary school, offering a space for exploration and for deepening learning suited to each age.

      In conclusion, restoring free play to its rightful place at school is neither a luxury nor a waste of time, but a fundamental pedagogical strategy which, by trusting the child, allows deep informal learning to emerge and a positive relationship with school to be built.

      --------------------------------------------------------------------------------

      1. Defining Free Play: A Serious Matter

      The concept of "free play" suffers from a fundamental misunderstanding: it is often wrongly perceived as mere recreation.

      The experts stress that it is an activity essential to human development.

      1.1. The Intrinsic Nature of Play

      According to Nadège Aberbuche, a play-based learning specialist, the term "free play" is a pleonasm.

      Drawing on the work of sociologist Gilles Brougère, she argues that play, by definition, is free.

      It is not a mere activity but a specific frame that the player decides to create and inhabit.

      Key quotation: "We miss the essential point [...] namely that play contributes to the construction of the self and of thought; it is not just for distraction and fun, it is absolutely essential to human development." - Nadège Aberbuche

      1.2. The Five Characteristics of Play (after Gilles Brougère)

      To clarify what play is, five main characteristics are identified:

      1. The second degree: play is not reality. The player subscribes to a fictional frame, an alternative reality, for the duration of the game.

      2. Decision-making: play exists only through the players' decisions.

      They decide everything: entering the game, defining its contours, and even leaving it at any time.

      3. Frivolity: what happens in play has no direct consequences for the player's reality.

      This characteristic is crucial because it allows exploration, trial and error, mistakes and invention without pressure or real stakes.

      4. Organising mechanisms: every game, even the simplest, is structured. Players define rules, roles, scenarios and limits.

      5. Uncertainty: the outcome of a game is never known in advance, which is its "salt" and what motivates players to play again.

      2. The Fundamental Confusion: Play versus Disguised Work

      A major obstacle to establishing free play at school is the confusion between genuine play and gamified learning activities.

      2.1. The Instrumentalisation of Play

      Cécile Beautier Richard, a teacher of the youngest nursery class (toute petite section), observes that many teachers use the word "play" for activities with precise learning objectives (working on colours, on mathematics) and sometimes even an assessment.

      The teacher's point of view: it is "work" aimed at acquiring skills defined in the curriculum.

      The child's point of view: the child, who intuitively knows what playing is, can feel "cheated" when an activity announced as play turns out to be a school exercise.

      This feeling can lead the child to disengage from play itself.

      2.2. The Need for Clarity

      The speakers agree on the importance of being clear with children.

      There is no shame in offering "workshops" or "work", because children have a natural desire to learn.

      The semantic and conceptual distinction is essential to preserving the integrity and power of free play.

      3. The Vital Importance of Free Play for Child Development

      Free play is a space and time in which informal learning takes place, invisible yet crucial.

      3.1. A Cognitive and Emotional Laboratory

      Cécile Beautier Richard illustrates this with the example of a 3-year-old pupil manipulating magnets for 15 minutes in total concentration.

      Key quotation: "80% of a child's play [...] is not actually visible to the naked eye. [...] I don't know what was going on in his head [...] but apparently a thousand connections a second seem to be firing in his brain, and that's great." - Cécile Beautier Richard

      This time, which may seem unproductive, is in fact a moment of intense construction of thought, spatial representation and other skills that cannot be identified in the moment.

      3.2. A Space for Expression and Transformation

      Freeing up speech: in grande section (the final year of nursery school), children who speak little in formal settings start talking abundantly when playing freely, assigning roles and creating complex scenarios.

      Expressing emotions: play allows children to "pretend" and to express emotions and impulses (anger, aggression) symbolically and without consequence.

      Nadège Aberbuche insists that play-fighting or playing at war is a necessary outlet which, by being permitted in the "pretend", can prevent acting out in the "real".

      It is crucial not to confuse these symbolic games with dangerous activities (such as the "choking game"), which are not games.

      Inclusion: free play is particularly beneficial for pupils with special educational needs.

      As Cédric Guerro, director of the Centre national de formation au métier du jeu et du jouet, points out, play "accepts the other as they are", without the sometimes crushing demands of formal learning situations.

      4. The Teacher's Professional Stance: From Intervention to Observation

      The success of free play depends entirely on the adult's stance.

      4.1. The "Roly-Poly Toy" Metaphor

      Cédric Guerro offers the metaphor of the "culbuto" (a roly-poly toy that always returns to its base) to describe the teacher's stance.

      The default position should be that of an available observer.

      Any intervention should be a response to an observation and to the interpretation of a need, not a default action.

      4.2. Acting on the Framework, Not on the Child

      The teacher should focus on creating and maintaining a framework conducive to play. This framework includes:

      • The layout of the space.

      • The choice and arrangement of objects.

      • Clear ground rules (distinguishing "pretending" from "doing for real").

      By acting on this framework, the teacher indirectly and positively influences children's behaviour, allowing them to develop their play autonomously and safely.

      4.3. A New Way of Seeing the Pupil

      Observing free play lets teachers discover pupils in a different light, revealing skills (concentration, socialisation, leadership) unsuspected in a traditional classroom setting.

      The teacher then sees "the child more than the pupil".

      5. Classroom Practice: Designing a Conducive Environment

      Setting up free play is not improvised; it is the result of rigorous pedagogical groundwork.

      5.1. Laying Out the Space

      Cécile Beautier Richard offers several concrete tips:

      Get down to child height when designing the spaces.

      Clearly define the play zones (for instance with pieces of lino in different colours).

      Do not overcrowd the spaces.

      5.2. Choosing the Objects

      Favour quality over quantity. Realistic, functional objects in good condition are essential.

      A toy frying pan should match the size of the play food; a doll should not be broken.

      Organise things logically and accessibly. Avoid stacking puzzles; objects should be easy to grasp.

      5.3. Tidying Up as a Pedagogical Act

      The time spent tidying and preparing the play area after the children leave (30-40 minutes a day for Cécile Beautier Richard) is fundamental.

      • It gives the activity value in the child's eyes.

      • It makes children want to play the next day.

      • It constitutes the preparation of the session, just like preparing a teacher-led workshop.

      Key quotation: "Free play is not what you do when the children have nothing left to do [...] No, it must be treated as a workshop in its own right." - Cécile Beautier Richard

      6. Beyond Nursery School: Free Play for All Ages

      The importance of free play does not stop at the doors of elementary school. Nadège Aberbuche calls it the "same fight" at every level.

      • At the "Les enfants du jeu" toy library, classes up to CM2 (the final year of primary school) are welcomed.

      • Older pupils reclaim play spaces meant for "little ones" (sandpits, etc.), but use them for more complex experiments suited to their cognitive development. This is not a regression.

      • It is a rare opportunity for primary teachers to see their pupils play, an activity that has largely disappeared from playgrounds, often giving way to tension and violence.

      --------------------------------------------------------------------------------

      7. Recommendations and Resources

      The experts suggest resources for teachers who want to get started with free play or deepen their practice.

      | Type | Title | Author / Source | Description | Recommended by |
      | --- | --- | --- | --- | --- |
      | Book (theory) | Jouer/Apprendre | Gilles Brougère | A key reference for understanding the distinction and the bridges between formal and informal education. | Cécile Beautier Richard |
      | Teaching documents | Jouer et apprendre | Eduscol | Very well-designed documents (general framework, sections by type of play, videos) for getting started. | Cécile Beautier Richard |
      | Book (psychology) | Libre pour apprendre (Free to Learn) | Peter Gray | A book by an American psychologist that re-examines the notion of learning, with an important chapter on play. | Nadège Aberbuche |
      | Film | Permis de jouer | \- | A film shot at the "Les enfants du jeu" toy library, focused on the symbolic play of elementary-age children, with testimonies from teachers. | Nadège Aberbuche |

    1. Briefing: Understanding and Supporting Dys Disorders and ADHD at School

      Executive Summary

      This briefing analyses neurodevelopmental disorders, specifically Developmental Coordination Disorder (DCD, or dyspraxia), the dyscalculias, and Attention Deficit Hyperactivity Disorder (ADHD), and presents support strategies for the school setting.

      The critical points to remember are as follows:

      1. Nature of the disorders: these disorders are the result of neither laziness nor a lack of intelligence, but neurodevelopmental conditions that affect how the brain processes information, automates skills and regulates behaviour.

      2. Global impact: the impact of these disorders extends well beyond academics.

      They affect the child's daily, social and family life, generating fatigue, anxiety and fragile self-esteem from a very young age.

      3. Dyspraxia (DCD), the cost of the dual task: dyspraxia is a disorder of gesture automation.

      Every action, writing in particular, demands intense, costly attentional control, placing the child in a permanent dual-task situation.

      Dysgraphia is a direct and disabling consequence.

      4. The dyscalculias, a plural disorder: there is not one dyscalculia but several (spatial, linguistic, dysexecutive, etc.), each linked to distinct cognitive mechanisms.

      The fundamental link between number representation and space is a major key to understanding them. A precise diagnosis is essential for targeted remediation.

      5. ADHD, a regulation disorder: ADHD is not an attention deficit but a disorder of the regulation of attention, behaviour and emotions.

      It is underpinned by difficulties with executive functions (inhibition, flexibility, working memory).

      6. Strategies and pedagogical stance: effective support rests on classroom accommodations that work around the difficulty (favouring oral work, providing adapted materials, using digital tools) and on a caring stance.

      The teacher's role is that of an expert observer of how the disability manifests, whose goal is to value effort, reinforce positive behaviour and, at all costs, protect the pupil's self-esteem.

      --------------------------------------------------------------------------------

      1. Developmental Coordination Disorder (DCD) or Dyspraxia

      Presented by Emmanuel Ploie-Maës, a clinical psychologist specializing in neuropsychology, DCD, or dyspraxia, is a motor disorder that profoundly affects a child's trajectory.

      1.1. Definition and Cognitive Mechanisms

      A gesture is defined as an "intentional set of movements coordinated in time and space in order to carry out a purposeful action."

      In a neurotypical individual, the planning and motor programming of a gesture are non-conscious, automated processes that require few cognitive resources.

      In a child with dyspraxia, this automation does not occur. DCD is a "specific disorder of the programming and execution of complex gestures."

      As a consequence:

      The gesture remains under attentional control: every action, even a simple one, is laborious and tiring.

      The child is in a state of permanent dual tasking: they must devote a considerable share of their cognitive resources to performing the gesture, leaving very few resources available for higher-level tasks.

      Example: a CE2 (third-grade) student with automated handwriting uses few resources to form letters and can concentrate on spelling. The student with dyspraxia uses most of their resources to form the letters, which leads to spelling errors caused not by ignorance but by a lack of available attentional resources.

      A study conducted at the Robert Debré hospital identified two main types of dyspraxia:

      Dyspraxia with purely gestural disorders.

      Mixed dyspraxia (gestural and visuospatial), which combines gesture disorders with difficulties in visuospatial processing.

      1.2. Manifestations and Impacts

      DCD has severe consequences across the child's entire development, because "a child who struggles with the development of their gestures struggles in their life all the time, from the moment they set foot on the floor until they fall asleep at night."

      | Impact Domain | Concrete Manifestations |
      | --- | --- |
      | School Trajectory | Severe dysgraphia ("messy, untidy" notebooks, slowness, fatigability), difficulties in geometry and visual arts, handling tools (ruler, compass). Written work is often unusable for learning. |
      | Daily Life | Difficulty getting dressed (buttons, laces), eating neatly, using cutlery. Slowness getting ready, which can lead to teasing. |
      | Social Life & Leisure | Difficulties with construction games and team sports. The child may be left out or be the "last one picked" for teams. |
      | Overall Development | Very early damage to self-esteem (from preschool onward), anxiety, sleep and eating problems. The child is often aware of their difficulties, which increases their suffering. |

      1.3. Diagnostic Process and Tools

      The diagnosis must be made by a specialist physician following a full synthesis including:

      • The anamnesis (the parents' account).

      • Observations from school (notebooks, report cards, teachers' written comments).

      • A neuropsychological assessment (often based on the WISC-5, which reveals a characteristic profile of good verbal abilities contrasting with graphic and visuospatial difficulties).

      • Complementary assessments (occupational therapy, psychomotor therapy).

      A simple tool, the DCDQ-F questionnaire, is available online and can be offered to families by teaching teams to open a dialogue and direct them toward a specialist consultation when DCD is highly probable.

      1.4. Pedagogical Accommodation Strategies

      The goal is to work around the difficulty in order to reach the same objective by another route.

      General Principles:

      Favor the auditory-verbal channel: use oral work for learning and for demonstrating knowledge.

      Relieve the graphic burden: drastically limit copying. The graphomotor gesture is not a learning tool for these children.

      Adapt materials: use readable fonts (Arial, Verdana), enlarge line spacing, leave white space, isolate exercises on the page.

      Account for slowness: lighten the workload (remove exercises) or grant extra time.

      Value effort: be lenient about presentation and neatness.

      Adaptations by Level and Subject:

      Cycle 2 (CP-CE1):

      Reading/Writing: spell words aloud to learn their spelling rather than copying them. Use adapted ruled paper (Gurvan-type lines).

      Mathematics:

      Avoid finger counting. Favor manipulation in which each counted object is moved or crossed out.

      Use templates for setting out written calculations.

      Verbalize symbols explicitly (< becomes "less than").

      Cycles 3 and 4 (Upper Primary and Middle School):

      All subjects: provide high-quality course materials (photocopies, digital files on the school's online workspace (ENT)). Allow audio recording of lessons. Use highlighters rather than underlining with a ruler.

      Digital Tools: the computer or tablet (more practical for photographing the board) becomes an indispensable compensation tool, with spell-checking software and tools such as the Cartable Fantastique ribbon.

      Mathematics/Geometry: allow a calculator. Use software such as GeoGebra. Have figures drawn by the AESH (classroom aide) or a peer. Assess knowledge of the properties of figures orally.

      PE: offer alternative roles (team captain, referee, organizer) and assess personal progress rather than raw performance.

      Role of the AESH: the AESH is essential support whose role is to prepare materials, read instructions, encourage oral participation, and handle tools, but not to "do it for" the student.

      --------------------------------------------------------------------------------

      2. The Dyscalculias: A Plural Disorder

      Presented by Michel Mazaux, the dyscalculias are a heterogeneous set of specific disorders of calculation and number processing.

      2.1. The Fundamental Link between Number and Space

      The brain processes numbers by relying heavily on spatial representations.

      The brain regions dedicated to number and to space are tightly interwoven.

      The Mental Number Line: we unconsciously organize numbers along a mental line, with small numbers on the left and large numbers on the right.

      Mental calculation amounts to moving along this line. This representation develops with schooling, shifting from a "compressed" (logarithmic) scale to a regular (linear) scale for mastered numbers.

      Visuospatial Procedures: counting objects and writing numbers (the positional system) are intrinsically visuospatial activities.

      2.2. The Different Types of Dyscalculia

      It is crucial to distinguish several types of dyscalculia, because they do not share the same origin and are not treated the same way.

      1. Number Sense Disorder: impairment of the innate "small network of neurons" dedicated to processing numerosity. The child struggles to estimate quantities and to grasp orders of magnitude.

      2. Spatial Dyscalculia: often associated with DCD with visuospatial disorders. The child has difficulty with counting, with aligning digits in written calculations, and with understanding the positional system.

      3. Linguistic Dyscalculia: associated with a Developmental Language Disorder (DLD/dysphasia). The difficulty lies in mastering the verbal sequence of number words and in transcoding (moving from oral to written form, e.g. "soixante-dix-sept", seventy-seven).

      4. Dysexecutive or Attentional Dyscalculia: associated with ADHD. The child makes errors due to a lack of inhibition (an addition routine intrudes into a multiplication), poor planning of steps, or omissions (forgotten carries).

      It is essential to distinguish these "dys" disorders from logico-mathematical disorders, which stem from weaker logical intelligence and resemble a mild intellectual disability rather than a specific neurodevelopmental disorder.

      2.3. From Difficulty to Disorder: The Response to Intervention (RTI) Model

      To distinguish a struggling student from a student with a disorder, a three-tier approach is recommended:

      Tier 1: explicit, validated teaching (e.g. the Singapore method) for the whole class.

      Tier 2: for the 15-20% of students who do not progress enough, small-group pedagogical reinforcement (more time, more manipulation, more exercises) for 3-4 months.

      Tier 3: if 5-8% of students are still in serious difficulty despite this reinforcement, a full assessment (psychological, neuropsychological, speech-language) is needed to establish a diagnosis of dyscalculia.

      2.4. The Importance of Differential Diagnosis

      Knowing which type of dyscalculia a child has is fundamental, because the remediation paths will differ.

      For example, a child with spatial dyscalculia will benefit from visual aids and templates, whereas a child with linguistic dyscalculia will need intensive work on oral mathematical language.

      --------------------------------------------------------------------------------

      3. Executive Functions and Attention-Deficit/Hyperactivity Disorder (ADHD)

      Presented by Jessica Sav-Pebos, a neuropsychologist, ADHD is a self-regulation disorder rooted in the functioning of the executive functions.

      3.1. Executive Functions: The Brain's "Conductor"

      Executive functions are the high-level processes that allow us to regulate our thoughts, emotions, and behaviors in order to reach a goal. They are essential for organization, planning, and adaptation. The main ones are:

      Initiation: the ability to start a task.

      Planning: organizing the steps needed to reach a goal.

      Inhibition: the ability to restrain impulses and resist distractions.

      Mental Flexibility: the ability to change strategies, adapt to the unexpected, and see things from another angle.

      Working Memory: the ability to hold and manipulate several pieces of information simultaneously.

      Emotional Regulation: managing the intensity and expression of emotions.

      3.2. ADHD: A Regulation Disorder

      ADHD is not a "deficit" of attention but an inability to regulate attention effectively.

      The child struggles to direct and sustain attention on a non-stimulating target. Three clinical presentations are distinguished (DSM-5): inattentive, hyperactive-impulsive, and combined.

      The diagnosis, made by a physician, is liberating because it replaces negative labels ("lazy," "daydreamer") with a neurobiological explanation.

      3.3. The Three Axes of Dysregulation in ADHD

      1. Attentional Dysregulation:

      Procrastination: extreme difficulty initiating a task, due not to a lack of motivation but to atypical brain functioning. One must "activate the body so that the brain follows."

      Distractibility: a lack of inhibition in the face of internal distractions (thoughts) and external ones (noises).

      "Sieve-like" working memory: difficulty retaining multiple instructions, hence the importance of breaking tasks down and using visual aids (sticky notes, diagrams).

      2. Behavioral Dysregulation:

      Impulsivity: the child acts without thinking about the consequences because the "brake" (inhibition) is faulty. They know the rule but cannot apply it at the right moment.

      Rigidity: the lack of mental flexibility can trigger explosive reactions to the unexpected or to changes, because the child cannot adjust their "plan A."

      3. Emotional Dysregulation:

      Hypersensitivity: emotions are experienced with great intensity and can "hijack" all available attention.

      Narrow window of availability: the child moves very quickly from boredom (if the task is not stimulating enough) to overload (if the task is too complex), pushing them out of their optimal learning zone.

      3.4. Intervention Approaches and the Teacher's Stance

      Reinforce rather than punish: the most effective stance is to "pay attention to what you want to see more of." Systematically notice and verbalize efforts and positive behaviors, however small.

      Structure the environment: help the child organize their time, materials, and tasks by providing external supports (timers, visual schedules, broken-down instructions).

      Respect neurodiversity: understand that a hyperactive child's nervous system needs to discharge before it can calm down.

      Offering movement breaks (pushing against a wall, stretching) is more effective than imposing relaxation.

      Be a "detective": the teacher's role is not to diagnose but to observe precisely the functional impact of the disorder ("the disability") in class.

      These concrete observations are extremely valuable to the whole support team.

    1. The Participatory Class Council: Analysis of a Pedagogical Initiative

      Executive Summary

      This summary document analyzes the "participatory class council" project implemented by Émilie Roger, an SVT (life and earth sciences) teacher at the Collège de la Largue.

      Born from the observation that traditional class councils were ineffective and generated little engagement, the initiative aims to turn this body into a student-centered pedagogical tool.

      Drawing on cognitive science and metacognition, the project prepares 6ème (sixth-grade) students to self-assess their skills, formulate a personal review, and set progress goals.

      The main results show increased student engagement, with students developing an accurate awareness of their strengths and difficulties.

      The format, though organizationally demanding, generates remarkably rich moments of exchange, valuing the student and strengthening the pedagogical dialogue.

      The major challenges lie in the complex logistics, which limit the rollout to a single grade level, and in the need for regular follow-up to anchor the goals that are set.

      Parental involvement, tried out occasionally, is identified as a major lever for multiplying the initiative's impact.

      1. Context and Genesis of the Project

      The participatory class council initiative was developed in response to a twofold dissatisfaction with the traditional format of this body.

      The Finding of Ineffectiveness

      As an SVT teacher attending many class councils, Émilie Roger identified several limits of the classic format:

      Teachers' passive role: apart from the homeroom teacher, the other teachers mainly sit through a "reading of overall comments," with very little substantive pedagogical exchange.

      No focus on skills: discussions rarely center on the student's skills and on ways to improve them.

      Lack of impact on the student: the project's trigger was the revelation that a student had not even read the advice written on his report card.

      The Goal of Transformation

      Faced with this finding, the goal was clear: "how to transform a classic class council into something that could be useful to the student, where the student could engage in evaluating their own trajectory and build on it to progress."

      The project aims to make the student an actor in their own evaluation and progress.

      2. The Participatory Class Council Format

      The project comprises a rigorous preparation phase and a specific format, redesigned to maximize individual interaction.

      The Preparation Phase

      Before each council, three to four sessions are organized, generally during "Devoirs faits" (homework support) or homeroom periods, to prepare the students.

      This preparation includes:

      1. Introduction to Skills: explaining what a skill is, how it is assessed, and how to reach the highest mastery levels.

      2. Self-positioning: the student is invited to rate themselves on the various skills being assessed.

      3. Building the Review: students learn to build their own review, identifying their strengths and the points to improve for the next period.

      The Concrete Format

      The participatory class council session lasts 1h30 to 1h45 in total.

      Plenary Session (15 minutes): an overall review is presented by the class representatives, then by the homeroom teacher.

      Workshops by Cluster: the class is then split into two balanced teams (e.g. a "science cluster" and a "French cluster").

      Individual Interviews (7 minutes per student): each student presents their personal review to the cluster's teachers.

      For a class of 30, each cluster handles about 15 students.

      3. Pedagogical Foundations and Cognitive Approach

      The project is explicitly grounded in cognitive science, aiming to give students a better understanding of their own learning mechanisms.

      Training in Neuroeducation: the project's initiator earned a diploma in neuroeducation and trained with the association "Apprendre et former avec les sciences cognitives."

      Teaching how the brain works: the goal is to educate students about their own brain: how it learns, memorizes, and sustains attention.

      Developing Metacognition: the approach consists of getting students to reflect on their own learning processes.

      They are encouraged to self-assess when facing a task ("Is this easy or difficult?") and, if it is difficult, to identify the strategies to put in place ("What help could you ask for in order to reach your goals?").

      4. Results, Impacts, and Testimonies

      The format has had significant effects on students' engagement, lucidity, and confidence.

      Engagement and Awareness

      The main observed benefit is students becoming aware of their own difficulties and of their capacity to progress.

      Emotional impact on the teacher: Émilie Roger reports being consistently moved, to the point of "wanting to cry," when seeing "the shyest ones daring to speak, daring to name their fragility."

      Transformation of "difficult" students: even students often seen as disruptive manage to verbalize their difficulties (e.g. chattering), which is regarded as a major pedagogical victory.

      The fact that they "express themselves" about their challenges is seen as "magnificent."

      The Accuracy of Self-Assessment

      Students prove remarkably lucid: there is rarely a gap between their self-assessment and the teachers' comments on the report card.

      Student Testimonies

      The dialogue excerpts illustrate students' ability to analyze their trajectory and to project themselves forward.

      | Theme | Student Quote | Context / Analysis |
      | --- | --- | --- |
      | Effort and Motivation | "What you have to know is that he doesn't like school, actually. \[...\] He makes enormous efforts to succeed without necessarily having the motivation behind it." | A student expresses his lack of interest in certain subjects while still putting in substantial work. |
      | Identifying Difficulties | "I have more trouble memorizing history-geography \[...\] I have trouble saying back what I've learned." | The student distinguishes a memorization problem from a comprehension problem. |
      | Pride and Resilience | "I'm proud of having managed to organize myself to revise for tests, of not having given up even though it's hard for me." | A student highlights her organizational skills and her perseverance in the face of difficulty. |
      | Setting Goals | "My goal for next year would be to stay more focused. \[...\] Not necessarily sit next to people I like." | A student identifies chattering as his difficulty and proposes a concrete strategy to remedy it. |
      | Peer-Support Strategies | "When they \[your friends\] help you, do you understand better? - Yes, a little." | A student acknowledges that working with peers helps her better understand math exercises. |

      5. Challenges, Limits, and Prospects

      Despite its pedagogical success, the format faces significant obstacles that slow its expansion.

      Organizational Constraints

      The "biggest difficulty" is logistical.

      Time management: the format takes place during class periods (generally 3-5 pm), which requires "freeing up classes" and reorganizing teachers' and students' timetables.

      Limited to 6ème: because of this complexity, the project is currently confined to 6ème classes. The teaching team would like to extend it to 3ème, where it would be relevant for orientation decisions, but this is not feasible for now.

      The Question of Post-Council Follow-Up

      "The after is harder" and remains a work in progress.

      Since forgetting is "biological," students' goals must be recalled regularly, and students must be asked what means they are putting in place to reach them, in order to anchor progress durably.

      The Potential of Parental Involvement

      An experiment was run two years ago: parents were invited to listen to their child's review and talk with the team.

      This formula is described as "the best of the best," because it combines the child's engagement and the parent's listening in an approach that "values the student."

      Adoption by the Teaching Team

      The project is actively supported and run by three homeroom teachers, who will be four next year.

      Other colleagues are more reluctant, not for pedagogical reasons but mainly because of the "great deal of time" the organization requires.

    1. Summary on Digital Citizenship Education: Building on Young People's Practices

      Executive Summary

      This summary document analyzes perspectives and strategies for digital citizenship education, based on contributions from experts in sociology and digital education and from a school-based practitioner.

      The central idea is a paradigm shift: moving from a "risk-centered" approach focused on protection and prohibition to a supportive stance that builds on young people's actual practices and interests.

      The speakers stress that young people use digital technology for deep reasons tied to identity construction, stress regulation, and the search for answers that adults do not always provide.

      To be effective, educators must adopt a posture of empathy, of legitimizing young people's digital cultures, and of co-constructing knowledge.

      The ultimate goal is to develop their reflexivity, critical thinking, and capacity to act, helping them understand how platforms work, their rights and duties, and the emancipatory potential of digital technology, rather than settling for a stance of distrust.

      --------------------------------------------------------------------------------

      1. Redefining Digital Citizenship beyond Risks

      The discussion's starting point is the observation that adults often view digital citizenship through the lens of worry and protection.

      The speakers agree on the need to broaden this view.

      A Broader Definition: the Council of Europe's definition is cited as a model, as it includes positive dimensions such as inclusion, creativity, empathy, and active participation.

      Doing "with them" rather than "for them": there is growing awareness of the importance of involving young people in building their own digital citizenship.

      Unsuitable Vocabulary: according to Nicolas Bourgeon, a school librarian-teacher (professeur documentaliste), the term "digital citizenship" is institutional jargon that does not resonate with students.

      The effective approach is to "use their own words."

      Priorities for Digital Education

      Each speaker sets out one priority for digital citizenship education:

      | Speaker | Organization | Priority | Key Quote |
      | --- | --- | --- | --- |
      | Axel Dein | Director, Internet sans crainte | Understand | "Understand the digital space we move in, understand the services we use, understand the algorithms, in order to be an informed user." |
      | Jocelyn Lachance | Sociologist, Crédat | Value | "What we often forget is that most young people actually behave well in the digital age, and the question is, as adults, how capable we are of valuing good practices." |
      | Nicolas Bourgeon | School Librarian-Teacher | Adapt | "These are words that belong to a rather institutional vocabulary, and the approach I try to take is to use their own words." |

      2. Changing Adults' View of Young People's Digital Practices

      A fundamental criticism of the current approach concerns the way adults look at young people's digital practices, a view often tinged with ignorance and fantasy.

      The "Risk-Centered" View and Its Limits

      Jocelyn Lachance notes that adults' interest in young people's practices is often "risk-centered," concentrating on the harmful aspects.

      This focus has several negative consequences:

      It obscures the benefits: young people use digital technology for reasons essential to their development: identity construction, handling existential questions, socializing.

      It creates a disconnect: young people feel that adults "miss what is essential for them," namely the meaning and benefits they find online.

      Young People's Loneliness and Adults' Unavailability

      A recurring theme is young people's sense of loneliness in the digital world.

      Lack of support: according to Axel Dein, young people "are extremely alone" and "do not identify the adults around them as people likely to support them."

      Digital technology as a stopgap: Jocelyn Lachance confirms that young people look online for what they do not find in adults.

      Research on young people's use of AI shows that they turn to it for "a structured and reassuring answer" when they perceive adults as unavailable or the topic as delicate (sexuality, death).

      The Question of Prohibition

      Prohibition is a structuring educational practice, but applying it to the digital world raises complex questions.

      Jocelyn Lachance warns against a simplistic approach:

      1. Meaning: adults must question their real motivations behind a prohibition.

      2. Effectiveness and Displacement: banning access to one space can push young people toward another, potentially less safe one.

      3. Loss of Benefits: prohibition can eliminate practices that benefit young people, such as stress regulation.

      The example of a Quebec high school that banned smartphones is telling: students revealed that they used their phones to listen to music and withdraw in order to manage their stress before exams.

      3. From Prevention to Support: Building on Actual Practices

      The second part of the discussion focuses on methods for moving from a stance of mere prevention to genuine support, starting from young people's concrete practices.

      Co-construction and Immersion

      Internet sans crainte, directed by Axel Dein, develops resources (serious games, interactive scenarios) by involving young people directly.

      The role of youth panels: they are essential for ensuring the accuracy and authenticity of the resources.

      Young people often push the scenarios to be more intense so that they reflect reality ("But that's too tame; what we go through is more intense, harder than that.").

      Fostering critical thinking: the goal is not to lecture top-down, but to "get them to step back and question themselves."

      These group sessions allow a benevolent "self-regulation" among peers.

      Starting from Students' Interests

      Nicolas Bourgeon runs a project with 6ème students on influencers, a topic they are passionate about. The approach is as follows:

      1. Starting point: students choose an influencer they like.

      2. Guided analysis: they decode the business model (attention economy, monetization), the commercial partnerships (regulated by the 2023 law), and the techniques used to capture an audience.

      3. Awareness: this work makes them realize that when they consume content, "they create value." Students readily identify the financial circuits (merchandise, shops, micro-donations).

      4. Digital Citizenship in Action: Toward Emancipation

      The final part explores ways to give young people real power to act (empowerment) and to develop their reflexivity.

      The "Digital Practice Awareness" (DPA) Experiment

      Research conducted by Mélina Solari Landa with high school students offers key lessons:

      The primacy of desire: "Desire is the best driver of adolescents' usage."

      The need to socialize, and emotions, outweigh any rational assessment of risks, even when young people are informed about how their data is used.

      Difficulty with temporality and distance: Young people struggle to perceive how their current online actions can have long-term consequences or affect people on a global scale.

      The ineffectiveness of prescriptive approaches: Restrictive logics do not foster reflexivity.

      Developing Reflexivity and Trust

      The goal of reflexivity: For Jocelyn Lachance, the aim is to get young people to reflect on what they experience and feel before a problematic situation arises.

      A supported "disconnection journal" is more effective than a simple challenge.

      The risk of breaking trust: An approach too focused on risks can be counterproductive.

      One girl, after receiving prevention education, did not dare tell her parents about an experience on a dating app for fear of being scolded.

      Legitimizing their culture: For Nicolas Bourgeon, building trust requires recognizing the legitimacy of students' "geek culture."

      Educating About Rights, Duties, and the Power to Act

      Axel Dein stresses the need to train young people to understand their online rights and duties, since their first digital activity is often social.

      Internet sans crainte has developed a teaching kit that covers three areas:

      1. Understanding one's rights and duties.

      2. Understanding relationships with others online (public/private boundaries, freedom of expression).

      3. How digital technology confers the power to act.

      Drawing Inspiration from Digital Codes

      Jocelyn Lachance suggests drawing on the reasons YouTubers and Twitch streamers succeed with young people. These creators give the feeling of creating a safe space where:

      • Young people feel that discussions start from them ("you can ask the real questions").

      • Their words are understood and valued.

      • Their culture is not "delegitimized."

      The challenge for adults is to question their own stance:

      "Am I personally in a stance that delegitimizes digital practices and makes a kind of movement of repulsion toward young people?"

    1. Improving Student Engagement in Middle and High School: Summary of Hassan Nassiri's Strategies

      Executive Summary

      This document summarizes the strategies and reflections shared by Hassan Nassiri, a teacher and trainer, for improving student engagement at the secondary level.

      The recommended approach rests on four fundamental levers:

      Ritualizing lessons to create a reassuring framework,

      Varying materials and formats to sustain attention,

      Giving responsibilities to involve students, and

      Valuing progress rather than performance alone.

      For the most reluctant students, the method consists of identifying the causes of their disengagement and offering "progressive entry points" via micro-tasks to create initial successes.

      Building a collective class dynamic, through interdisciplinary projects and a co-constructed class charter, is also essential.

      The teacher's stance is decisive: it must embody an alchemy of high expectations and benevolence, establishing a clear framework while offering constant encouragement.

      Errors must be handled without drama and presented as a necessary step in learning.

      Finally, it is crucial not to remain isolated and to rely on the teaching team (colleagues, the CPE, school leadership) to handle complex situations and ensure educational consistency.

      1. Introduction and Context

      Hassan Nassiri, a part-time classroom teacher and a trainer for Réseau Canopé and the regional school inspectorate, addresses the central question of student engagement in middle and high school.

      Drawing on his experience, particularly in vocational high schools, he shares concrete professional practices and field feedback intended to help teachers, especially beginners, "leave no one by the side of the road."

      The core problem is how to get all students working, including those who seem least cooperative, in order to create and sustain a positive class dynamic throughout the year.

      His advice applies to half-size classes (12-15 students) as well as larger ones (24-30 students and more).

      2. Theoretical and Pedagogical Foundations

      To inform his thinking, Hassan Nassiri draws on several key references that underline the importance of pedagogy and organization in classroom management:

      François Dubet ("Les lycéens"): This book offers a fine-grained analysis of students' relationship to schoolwork, showing the wide variety of profiles and the influence of personal history on engagement.

      Philippe Meirieu ("La pédagogie différenciée"): From Meirieu, Nassiri retains the fundamental idea of adapting teaching arrangements so that every student finds their place and no one feels excluded.

      Eduscol and Réseau Canopé resources: These resources recall an essential principle: "classroom management is not just about discipline; it is above all about organization and pedagogy."

      3. The Four Fundamental Levers of Engagement

      Hassan Nassiri identifies four concrete levers for turning these ideas into classroom action.

      a. Ritualize

      Establishing rituals at the start and end of each lesson creates a reassuring framework for students, especially the most withdrawn.

      Start of the session: Begin with a "flash question" or a "word from the news."

      End of the session: Finish with a quick round-the-table recap of what should be remembered.

      b. Vary

      To avoid routine and boredom, it is crucial to vary materials and working formats.

      Alternating materials: Combine traditional written materials ("the good old paper method") with digital tools (quizzes, etc.). Hassan Nassiri stresses the importance of making students write, believing they "don't write enough."

      Alternating activities: The goal is to break up the routine during the lesson to capture attention.

      Group work: This format is considered "very interesting" for giving students responsibility and involving those who are more withdrawn or shy.

      c. Give (responsibilities)

      Assigning specific roles to students, particularly in group work, radically changes their involvement.

      Example roles: Timekeeper, reporter, materials manager.

      Impact: "When your students feel useful, their involvement changes." This works particularly well for shy students.

      Project-based learning: Putting students on a project makes them "truly actors in their own training."

      Concrete examples include creating a mini-company or organizing international mobility programs (Erasmus).

      d. Value

      This lever is considered "very, very important." It means valuing students' progress, not just their final performance.

      A strong signal: Praising a struggling student not only for a correct answer, but also for a clear approach or for progress since the previous session.

      The message conveyed: "This shows that effort counts as much as results."

      Impact on the student: Regular encouragement and recognition of effort boost the student and strengthen their confidence.

      4. Strategies for Reluctant and Withdrawn Students

      For students who resist engagement, Hassan Nassiri proposes a targeted approach.

      Identify the cause: Ask what drives the refusal (fear of failure, rejection of school, personal history). "There is always an explanation."

      Offer "progressive entry points": Start with a simple "micro-task," for example in pairs, then gradually increase the difficulty.

      The goal is to "create a first success, even a small one," to encourage the student.

      Personalized support: Use half-group sessions to physically approach the shyest students, sitting next to them to reassure them and provide individualized support.

      Value participation: Students must feel they have the right to make mistakes. They should be encouraged for trying, even when they get it wrong.

      Entrusting them with a simple mission, such as explaining an answer at the board, makes them visible and valued.

      5. The Power of the Collective: Creating a Class Dynamic

      Beyond individual actions, it is essential to build a collective class culture.

      Class charter: Draw up a charter with the students on the values and attitudes to adopt.

      Although time-consuming, this process makes them "fully actors in their own learning."

      Shared projects: Launching interdisciplinary projects makes it possible to work with other colleagues, avoid isolation, and show students the links between subjects.

      This creates connection for students and teachers alike.

      6. Specific Themes Addressed (Q&A Session)

      | Theme | Strategies and Advice |
      | --- | --- |
      | Managing pairs | There is no "miracle formula" (grouping weaker students together vs. mixing levels). Teachers know their students and should adapt the composition. Hassan Nassiri favors affinity-based groups and insists that "pairs should never be imposed," except in exceptional cases. Teacher supervision is key. |
      | Handling errors | Errors should be de-dramatized and valued as a driver of learning. Students should be told: "you have the right to be wrong. Error is not negative; on the contrary, error is what lets you move forward." Errors may also stem from a lack of clarity in the teacher's instructions. |
      | Cross-subject engagement | To counter students' tendency to neglect low-coefficient subjects, explain, drawing on the baccalaureate syllabus, that "every subject counts." This message must be carried by the whole teaching team to be effective. |
      | Use of digital tools | Alternating paper and digital is essential, as "all-digital" can wear students out. For tools like Kahoot in middle school (where smartphones are not allowed), the teacher must set a very clear framework and rules before launching the activity, and apply sanctions if they are not respected, to preserve credibility. |
      | Managing restless classes | Faced with a very unruly class (26 students or more): do not remain alone. Rely on the team (colleagues, the CPE, school leadership), identify the ringleaders, talk to them individually to understand their behavior, and use tools such as the seating plan. |
      | Artificial Intelligence (AI) | A proactive approach is recommended: rather than banning, support the students. This involves a dedicated session teaching them how to "write a prompt" and to use AI in a "reasoned" way, understanding the answers it generates. A framework set by the teacher is imperative. |

      7. The Teacher's Stance: Keystone of Engagement

      The success of these strategies rests fundamentally on the teacher's stance.

      The alchemy of high expectations and benevolence: This is the central principle. Provide a clear framework and be demanding, but always paired with constant encouragement and a benevolent ear.

      Accessibility and credibility: Be accessible and keep your word. "When you say something, well, do it," because failing to do so destroys all credibility.

      Kindness must not be perceived as weakness, but as part of a respected framework.

      Rely on the team: It is essential to collaborate with experienced colleagues and above all with the CPE, who has detailed knowledge of the students and can provide valuable help on the psychological side.

      Pedagogical freedom: Hassan Nassiri concludes by recalling that the "famous pedagogical freedom" is a precious asset that allows teachers to implement these strategies and give full meaning to their profession.

    1. Micro-violence and Micro-attention in Educational Settings: Analysis and Perspectives

      Executive Summary

      This summary document analyzes the concepts of micro-violence and micro-attention in educational settings, drawing on the expertise of Laurent Muller, senior lecturer in education sciences, and Lucie Perrin, a national education inspector (serving as IEN).

      Micro-violences are defined as everyday gestures, words, attitudes, or omissions, often trivialized and flying under the radar, that wear a person down little by little.

      They are not only interpersonal but also institutional, stemming from a logic that privileges the institution's interests over those of its users.

      The impact of these "almost-nothings" is considerable because they strike at fundamental and universal psychological needs (autonomy, belonging, competence), particularly in students in the midst of identity formation.

      Teachers' growing awareness of the problem is a complex process, often hindered by a sense of being judged or guilty, which can lead to denial.

      Systemic factors, such as a culture of conformity to authority (Milgram's agentic state), the management of collective time at the expense of individual time, and social reproduction by teachers who are "survivors" of the school system, sustain these practices.

      In contrast, micro-attentions (a smile, a kind word, active listening) are presented as powerful tools for preventing harm and restoring the educational bond.

      Concrete strategies are proposed, such as Nonviolent Communication, creating spaces for students to speak, and the need for teachers to tend to their own needs with the institution's support.

      Transforming practices requires a posture of humility, reflective analysis, and a willingness to "lose time" in order to gain it in terms of learning and well-being.

      --------------------------------------------------------------------------------

      1. Definition and Impact of Educational Micro-violence

      1.1. Nature and Characteristics of Micro-violence

      Micro-violences are described as "almost-nothings that are not nothing." They are trivialized, normalized, and often invisible forms of violence, taking the form of:

      Words: Hurtful remarks, humiliating humor, stock phrases. Examples cited: "Hélène, don't kid yourself, you'll never do science," "it's no big deal, it was just a joke."

      Attitudes: Looks that shut someone down, exasperated sighs, postures of superiority.

      Gestures: Handing back papers in order of grades.

      Omissions and silences: Not saying hello, ignoring a student, creating silences that exclude.

      According to Laurent Muller, these acts wear a person down "little by little" and should not be confused with the notion of "micro-aggression," which is more subjective.

      The objectivity of micro-violence lies in its capacity to strike at universal psychological needs.

      1.2. The Dual Dimension: Interpersonal and Institutional

      Micro-violence is not limited to interactions between teachers and students.

      It has a deep institutional dimension.

      Institutional violence: Laurent Muller, citing Eliane Corbet, defines it as "privileging the interests of the institution over the interests of its users."

      Biopolitical logic: In Michel Foucault's sense, it is a "management of population flows that serves to normalize bodies and minds."

      Teachers and school leaders can themselves be victims of this systemic logic.

      This dual dimension explains why teachers can be both perpetrators and victims of micro-violence, caught in logics beyond their control.

      1.3. The Impact on Students: Striking at Psychological Needs

      The powerful impact of micro-violence, even when subtle, is explained by two main factors:

      1. Students' age: They are in the midst of identity formation, which makes them particularly vulnerable.

      2. The striking of psychological needs: These are regarded as "psychological nutrients"; leaving them unsatisfied degrades a person's psychological state.

      Laurent Muller draws on the work of Deci and Ryan to identify three fundamental and universal needs:

      | Psychological Need | Description | Consequence When Violated |
      | --- | --- | --- |
      | Autonomy | The need to feel at the origin of one's own actions. | Sense of alienation, loss of intrinsic motivation. |
      | Belonging | The need to feel respected, recognized, welcomed, connected. | Isolation, which is a major factor in morbidity. |
      | Competence | The need to feel effective and capable of acting on one's environment. | Sense of failure, devaluation, dropping out. |

      Lucie Perrin confirms that starting from the student's needs is essential to creating conditions conducive to learning.

      2. Becoming Aware: A Delicate Process

      2.1. Teachers' Reactions and Obstacles

      During training sessions, Lucie Perrin observes that teachers are often "astonished" and "speechless" when faced with the list of ordinary pedagogical violences (catalogued by Christophe Marcellier), because "they recognize themselves."

      This recognition can trigger two problematic reactions:

      Feeling judged: Teachers may feel accused, which hinders reflection.

      Guilt: Laurent Muller warns that guilt "risks leading to denial" and reinforcing defense mechanisms.

      The aim is not to make teachers feel guilty but to make them responsible, that is, to "reclaim margins of freedom" so as not to perpetuate the cycle of violence.

      2.2. The Role of Language and Humor

      Automatic turns of phrase, analyzed by Hannah Arendt in the context of the Eichmann case, function as "defense mechanisms" that render the other's suffering invisible and authorize "hurting someone in order to make them comply."

      | Type of Expression | Examples | Function |
      | --- | --- | --- |
      | Positive anticipation | "It's for your own good," "You'll thank me later" | Justifying a painful action by a future benefit. |
      | Accusatory reversal | "It hurts me more than it hurts you" | Inverting the guilt. |
      | Fatalism | "That's life," "We have no choice" | Shirking responsibility by invoking a higher force. |
      | Minimization | "Nobody ever died from it," "I went through it too" | Denying the impact of the other's feelings. |
      | Exaggeration/irony | "Come on, you're exaggerating," "You poor thing, playing the princess" | Ridiculing the other's emotion. |
      | Verdict of ease | "Go on, it's easy" (added by Lucie Perrin) | Creating pressure and a sense of incompetence in a struggling student. |

      Humor is a particularly powerful vector, because it makes it possible to "destroy the other while accusing them of lacking humor if they don't laugh at the humiliation they are undergoing."

      2.3. Strategies for Raising Awareness

      To become aware of these gestures without filming oneself, several avenues are suggested:

      Recognizing the gap between intention and action: Accepting that good intentions do not guarantee benevolent practices.

      Reflective analysis: Recalling the micro-violences one has suffered and those one may have committed.

      Inviting colleagues into the classroom: Getting an outside perspective on one's practices.

      Giving students a voice: Allowing them to express how they feel, as Laurent Muller has tried in his own teaching.

      3. Systemic Factors That Sustain Micro-violence

      3.1. Conformism and Submission to Authority

      Laurent Muller draws on Stanley Milgram's work on the "shift to the agentic state" to explain a tendency toward conformism within the national education system.

      In this state, individuals no longer feel at the origin of their actions and become "executing agents" of an external will deemed legitimate.

      This leads to a "culture of reproducing attitudes."

      The phenomenon is reinforced by the fact that teachers are "survivors of the school system" and thus carry a "particular bias" inclining them to reproduce the norms that ensured their own success.

      3.2. The Influence of the School Form

      The very structure of school (the "school form") is fertile ground for micro-violence.

      Time management: Prioritizing collective time (covering the curriculum) over each student's own time is a major source of micro-violence.

      As Rousseau, quoted by L. Muller, puts it, the paradox of education is "knowing how to lose [time]."

      Class size: A class of 30 or 35 students makes attending to individual needs extremely difficult, encouraging a normalizing approach.

      Space: Lucie Perrin mentions the teacher's habit of "systematically standing facing the students" as a posture that reassures the teacher but can create distance.

      The context of special education (SEGPA), with its smaller class sizes, shows conversely that when conditions allow, building relationships and attending to individual needs become priorities.

      4. Transformation Strategies: Micro-attentions

      4.1. The Power of Micro-attentions

      In the face of micro-violence, micro-attentions are the "true little engines of connection."

      They prevent harm and can restore the relationship.

      Examples: "I'm listening," "You're right to say that," a hello and a smile at the door, a hand on the shoulder, a kind word.

      The importance of the welcome: For Lucie Perrin, everything plays out in the first minutes.

      A "hello" and a "smile" can "establish a climate of trust and put students in good conditions."

      4.2. Tools and Postures

      Several approaches are proposed for cultivating a pedagogy of micro-attention:

      Nonviolent Communication (NVC): Developed by Marshall Rosenberg, it offers a process for clarifying violent language practices.

      Laurent Muller specifies that it is not a "mechanical" or "miraculous" solution and that it must be "nourished by an ethical culture of attention."

      Giving students time and a voice: Spending 10 minutes at the start of class asking students how they are is not wasted time, but an investment that facilitates learning by creating a climate of well-being.

      A posture of humility: Lucie Perrin stresses the need to be cautious and humble, to recognize that one may have made mistakes oneself, and to contextualize teachers' reactions, since they deal with adolescents whose life experiences are sometimes complex.

      4.3. Restoring the Relationship and Supporting Teachers

      When a micro-violence has been committed, action is still possible.

      Restore, don't repair: Laurent Muller prefers the terms "restore" or "mend" to "repair," because this concerns living beings, not machinery.

      Acknowledgment and apology: The restoration process begins with "explicitly acknowledging what was done" and "simply offering one's apologies."

      It is by putting words on things that one can heal the wounds (a play on the French homophones mots, "words," and maux, "ills").

      Institutional support: For teachers to be able to offer micro-attentions, it is crucial that "the institution also support teachers."

      Benevolence must begin with oneself: teachers must be able to tend to their own needs in order to care for those of their students.

      5. Inspirations and Key References

      To deepen reflection and action, the speakers suggest the following avenues:

      Laurent Muller:

      Humanistic psychology: The work of Carl Rogers and Marshall Rosenberg (founder of NVC).

      Listening to students: "They have everything to teach us on this question."

      Lucie Perrin:

      The work of Rebecca Shankland: A specialist in well-being at school.

      The quality of time spent at school: Recognizing that students sometimes see their teachers more than their own families, and that this time must be of high quality and marked by benevolence.

    1. Developing Psychosocial Skills at School: Summary of the Round Table "Osons la Communication NonViolente"

      Executive Summary

      This summary document analyzes the key points of the round table organized by Réseau Canopé around the book "Développer les compétences psychosociales à l'école - Osons la Communication NonViolente."

      Faced with an anxiety-inducing and violent societal context, schools must carry out an educational revolution by fully integrating psychosocial (CPS) and socio-emotional skills.

      Far from being a mere opinion, this approach rests on solid scientific foundations from neuroscience, which demonstrate the direct link between emotional security, empathy, and optimal brain development for learning.

      Nonviolent Communication (NVC) is presented as a major lever of this transformation. It offers concrete tools for regulating emotions, managing conflicts, and fundamentally changing adults' posture.

      It is not a matter of renouncing authority, but of redefining it as an inspiring auctoritas, based on trust and respect rather than coercive power.

      The starting point for this change lies in training the adults of the educational community (teachers, school leaders, administrative staff), who, through their own exemplary behavior and capacity for self-empathy, become models for students.

      Systematizing voluntary, cross-category training and integrating it into initial teacher training are identified as essential conditions for responding to the current emergency while building a sustainable educational vision.

      --------------------------------------------------------------------------------

      1. Context and Urgency: Schools Facing Societal Challenges

      The discussion opened with the observation of growing pressure on the school system, which must navigate multiple crises while maintaining a peaceful climate.

      A Reflection of Global Tensions: Christophe Kéréro, Rector of Paris, emphasizes that school, as a "reflection of society," bears the full brunt of the aggression and violence of an "extremely complex" international and societal context.

      Students, who absorb these tensions (geostrategic, climate-related), live in an anxiety-inducing environment that affects their development as individuals and citizens.

      The Imperative of a Peaceful School Climate: Given this observation, the institution is called upon to guarantee serenity in schools.

      This fits into a broader framework of combating phenomena such as bullying, but it cannot ignore the "fractures" running through society.

      The Dual Timescale: A major challenge lies in managing a dual timescale. On the one hand, a "very impatient society" demands rapid results in the face of urgency.

      On the other, developing psychosocial skills is "long-term work," spanning "one or even two generations." The challenge for the national education system is therefore to "manage both the emergency and the long-term work."

      2. The Inseparable Link Between Emotions and Learning: Scientific Foundations

      Integrating emotions at school, often perceived as a disruption, is in fact a fundamental prerequisite for learning, supported by decades of scientific research.

      Affective and Social Neuroscience as a Foundation: Catherine Gueguen, a pediatrician, insists that the importance of emotional and social skills "is not a matter of opinions or beliefs; it is founded on scientific research."

      This research shows that empathy fosters the overall development of the brain, both intellectual and emotional.

      The Crucial Role of Empathy in Brain Development: Specific studies are cited in support:

      ◦ A Dutch study showed that in 7-year-old children whose parents are empathetic, "all the gray matter of the brain develops," with a "thickening of the prefrontal cortex."

      ◦ Benevolence and empathy develop the orbitofrontal cortex, the seat of essential human functions: empathy, emotion regulation, the capacity to make choices, and ethical and moral sense.

      ◦ Conversely, punishment and humiliation (physical or verbal) hinder the development of this cortex.

      The Adult as a Model Through Immersion: Training must primarily target the adults of the educational community, because "teachers are very powerful models for children." Once teachers are trained, children "imitate through immersion."

      3. Nonviolent Communication (NVC) as a Lever of Transformation

      NVC is presented as a practical and profound approach for putting the development of socio-emotional skills into practice.

      A Shift in Perspective, from Behaviors to Needs: Catherine Schmid-Gherardi explains that the fundamental principle of NVC is understanding that "every word and every behavior serves to meet a need."

      This awareness makes it possible to "take words and behaviors much less personally" and to see, behind a clumsy act, an attempt at self-care.

      Emotional Regulation to Free Up Space for Learning: When a child is overwhelmed by an emotion, "it takes up all the space and [...] there is no room for learning to happen."

      Welcoming and naming the emotion (one's own or the student's) releases it and makes "the space open again for learning." NVC links the emotion to a need, which allows the child to become "proactive" and autonomous.

      Conflict Management and De-escalation: NVC transforms conflict management by shifting the focus from finding the "culprit" ("who started it?") to empathetic listening to two individuals in distress.

      By welcoming each person's emotions and needs, one "defuses tensions" and leads students to "find solutions on their own."

      4. Redefining Authority: From Power over Others to "Auctoritas"

      The panel unanimously rejected the idea that empathy is incompatible with authority, proposing instead a more mature and effective vision of it.

      Compatibility between Empathy and Boundaries: Catherine Gueguen specifies that empathy "has absolutely nothing to do with permissiveness."

      The adult must pass on values, know how to say no, and restate what is "allowed or forbidden," but can do so "while understanding the child's emotions and needs and without humiliating them."

      Balancing Verticality and Horizontality: Patrice Noy, a teacher, describes seeking a balance between "verticality" (upholding the framework and transmitting knowledge) and a "horizontality" that creates "a relationship with the student that allows their autonomy to flourish."

      The Teacher as an Inspiring Figure: Véronique Gaspard distinguishes "power over others" from the authority that students "grant" to an inspiring adult.

      She quotes an author: "wherever there is violence, there is a loss of authority." The challenge is to make adults inspiring, so that young people want to grow in their company.

      5. The Adult's Stance: The Starting Point for Change

      The success of integrating psychosocial skills and NVC depends above all on the work adults do on themselves.

      Self-Empathy and Emotional Responsibility: Patrice Noy explains that NVC made him realize that "my emotions belonged to me" and that he should not "make others bear responsibility for them."

      Training provides "that pause between the anger that may rise [...] and seeing where it comes from," which helps avoid undermining the act of teaching. He also stresses the importance of knowing how to apologize ("to be sorry") after having "lost one's temper."

      The Impact of Training on Teacher Well-Being: Catherine Gueguen reports that studies show that when teachers are trained, "they feel much better, they feel more competent [...] and it subsequently prevents burnout."

      Leading by Example across the Whole School Community:

      Catherine Piel, a former school administrator, insists: "if we want teachers to bring this to life for their students, it seems essential to me that school leadership bring it to life for their teams as well."

      François Moutapa adds that there is an "urgent need for all the adults of the school community, particularly administrative staff, to be exposed and made aware."

      The impact is systemic: improving relationships among adults has a direct effect on school climate and student behavior.

      6. Deployment Strategies: Training as the Keystone

      Rolling out these skills requires a thoughtful, systemic, and ambitious training strategy.

      A Voluntary, Non-Prescriptive Approach: Several speakers, including Véronique Gaspard, insist that training must start from a "chosen space" rather than from a mandate. Because these approaches are "deeply unsettling," it is crucial that people be ready to commit to the process.

      The Importance of Cross-Role Training: The preferred format brings together different professional groups from the same school (teachers, leadership, administrative staff, etc.).

      Patrice Noy and Catherine Schmid-Gherardi emphasize that this creates "a unit that shares experiences locally" and a "quality of relationship" that transforms team dynamics.

      Toward Integration into Initial Teacher Training (INSPÉ): A strong consensus emerges on the need to integrate these skills "from initial training onward."

      Teachers trained later in their careers unanimously regret not having had these tools earlier, which would have let them "save time" and adopt a more constructive stance from the outset.

      The Training Ecosystem: The panel highlights existing initiatives and partnerships for rolling out these trainings across the country.

      | Organization/Program | Role and Initiatives |
      | --- | --- |
      | Réseau Canopé | Publication of the reference book; conferences and workshops on the Canotech platform and in the Canopé workshops (Paris, Alençon, Arras). |
      | Académie de Paris (Labs'Orbonne, EFC) | Strong partnership with Canopé to build large-scale trainings, professionalize practitioners, and create a pool of expert trainers. |
      | Santé publique France | Mandated to deploy psychosocial skills nationally, with a survey and training effort by region and by académie. |
      | Déclic CNV | Association aiming to make each académie self-sufficient with a pool of qualified NVC trainers to meet staff demand. |

    1. Juego de roles (role-play): Imagina que tú trabajas en uno de estos puestos de comida, y tu compañero de clase viene a comprar algo de comer. Necesitan hablar de qué quiere, cómo quiere su comida, si quiere algo más y cuánto cuesta. Intenta no usar nada de inglés al hacer la transacción. Entonces tú vas a comprar algo de tu compañero de clase de otro puesto de comida.

      Yo: ¡Hola! Bienvenido. ¿Qué quieres comer hoy?
      Compañero: Quiero una arepa con queso, por favor.
      Yo: ¿La quieres con un poco de salsa picante o sin salsa?
      Compañero: Sin salsa, gracias.
      Yo: Muy bien. ¿Quieres algo de beber también?
      Compañero: Sí, un refresco, por favor.
      Yo: Perfecto. Son cinco dólares en total.
      Compañero: Aquí tiene. Gracias.
      Yo: ¡Gracias a ti! Que disfrutes tu comida.

      Ahora yo voy a comprar algo del puesto de salchipapas de mi compañero:

      Yo: Hola, quiero una porción de salchipapas, por favor.
      Compañero: ¿Con salsa de ajo o ketchup?
      Yo: Con salsa de ajo, gracias.
      Compañero: ¿Quieres algo de beber?
      Yo: Sí, un jugo de naranja, por favor.
      Compañero: Muy bien. Son seis dólares en total.
      Yo: Aquí tiene. Gracias.
      Compañero: ¡Gracias! Que disfrutes.

    2. Compara estos puestos de comida callejera rápida con la comida rápida en tu pueblo. ¿Cuál es más similar? ¿Cuándo comes este tipo de comida? ¿Hay días feriados o eventos cuando comes más comida callejera?

      Los puestos de comida callejera rápida que vemos en las imágenes son diferentes a la comida rápida de mi pueblo. En mi pueblo, la comida rápida suele ser hamburguesas, pizzas o papas fritas, mientras que en los puestos hispanos venden arepas, empanadas o salchipapas.

      Este tipo de comida callejera lo como cuando quiero algo rápido o diferente, como en la calle o en festivales. También como más comida callejera en días feriados o en fiestas locales, porque hay más puestos y es parte de la celebración.

    3. ¿De dónde son las fresas? ¿Y el maíz? ¿Y los tomates? ¿Y las manzanas? ¿Y las naranjas? ¿Y las cebollas? ¿Y el arroz? ¿Y la lechuga? ¿Y el azúcar? ¿Y las patatas?

      Las fresas son de Europa. El maíz es de América, especialmente de México. Los tomates son de América, de la región de México y Perú. Las manzanas son de Asia, especialmente de Asia Central. Las naranjas son de Asia, principalmente de China. Las cebollas son de Asia. El arroz es de Asia. La lechuga es de la región del Mediterráneo. El azúcar es de India, donde se cultivaba la caña de azúcar. Las patatas son de América, especialmente de Perú.

    4. Imagina que vas a tener una fiesta en tu clase de español. ¿Quieres servir este plato en la fiesta?

      Sí, quiero servir este plato en la fiesta porque el gazpacho es un plato tradicional de España y es refrescante. Creo que a mis compañeros les va a gustar y es una buena forma de compartir la cultura española.

    5. El plato que investigas, ¿es picante? ¿Prefieres la comida picante o suave?

      El plato que investigo no es picante, porque el gazpacho no lleva chile ni especias fuertes. Yo prefiero la comida suave, así que este plato es perfecto para mí.

    6. ¿Tienes un plato similar en la comida tradicional de tu pueblo?

      En la comida tradicional de mi pueblo no hay un plato exactamente igual al gazpacho, pero sí tenemos sopas frías o platos con tomate que se comen en verano. Son un poco parecidos, pero el gazpacho es único por su sabor y su forma de preparación.

    7. ¿Te gustan todos los ingredientes de la receta? ¿Te va a gustar ese plato?

      Sí, me gustan casi todos los ingredientes de la receta, como los tomates, el pepino, el pimiento verde y el aceite de oliva. El ajo crudo es un poco fuerte para mí, pero creo que en general sí me va a gustar ese plato, especialmente en un día caluroso.

    8. ¿Cuál es el ingrediente principal de tu receta? ¿Puedes encontrar el ingrediente en el supermercado de tu barrio?

      El ingrediente principal de mi receta de gazpacho es el tomate. Sí, puedo encontrar tomates fácilmente en el supermercado de mi barrio, especialmente en la sección de frutas y verduras.

    1. Creating a Culture of Encouragement: Summary and Key Points

      Executive Summary

      This document summarizes the concepts, strategies, and theoretical foundations for establishing a school-wide "culture of encouragement."

      Faced with alarming findings of deteriorating mental health and a lack of encouragement felt by students and staff alike, a systemic approach is proposed.

      It aims to replace a paradigm of discouragement, based on negative reactions to mistakes, with a virtuous spiral in which encouragement generates positive emotions and constructive actions.

      This culture rests on three theoretical pillars: Wong's Anatomy of Encouragement, Ryan and Deci's Self-Determination Theory, and Carol Dweck's Growth Mindset.

      It translates into encouragement that is specific, objective, and process-focused, reinforcing autonomy, competence, and social connection.

      Psychosocial skills (CPS) are identified as the lever of choice for deploying this culture, owing to their solid institutional framework and the convincing effects demonstrated by numerous scientific meta-analyses.

      The "J'y arrive" ("I can do it") project, centered on mental arithmetic, illustrates a successful application of these principles, managing to take the drama out of assessment and strengthen students' confidence.

      Establishing such a culture is a long-term process, involving every actor in the school community and active leadership at every level (individual, collective, and institutional).

      --------------------------------------------------------------------------------

      1. The Finding: A Widespread Need for Encouragement

      School climate surveys reveal a pressing need for encouragement among students, often coupled with a sense of injustice regarding sanctions and assessments.

      This lack of encouragement is not limited to students; it affects the entire educational institution, creating a situation of generalized discouragement.

      Statistical data confirm this trend and highlight deteriorating mental health at every level.

      | Category | Statistic | Source |
      | --- | --- | --- |
      | School leadership | 78% of school leaders report low morale. | Fotinos, 2023 |
      | Teachers | 48% of teachers have difficult relationships with their hierarchy. | Debarbieux and Moignard, 2022 |
      | Young people (ages 11-24) | 30% are at risk of anxiety-depressive disorders. | Senate report no. 787, June 2025 |
      | Middle and high school students | 14% of middle schoolers and 15% of high schoolers are at risk of depression. | Santé publique France, 2024 |
      | Middle and high school students | Over 50% report weekly psychological or somatic complaints. | Santé publique France, 2024 |
      | Children (ages 6-11) | 5.6% probable emotional disorder, 6.6% probable oppositional disorder, 3.2% probable ADHD. | Enabee, Santé publique France, 2023 |

      The Senate report describes the deterioration of mental health as a "deep-seated trend that has not improved since the end of the health crisis."

      2. The Paradigm Shift: From the Spiral of Discouragement to the Momentum of Encouragement

      The approach proposes moving from a paradigm of discouragement to a paradigm of encouragement.

      Spiral of Discouragement: A negative stimulus (in response to a mistake, a lack of cooperation, an overflowing emotion) produces unpleasant emotions, which in turn trigger undesirable behaviors.

      Spiral of Encouragement: Active encouragement generates pleasant emotions, which foster constructive actions, new awareness, better relationships, and increased motivation.

      Defining Encouragement

      Encouragement is not limited to words.

      It is a set of gestures, words, and attitudes intended to strengthen a person's hope, confidence, perseverance, or courage to overcome difficulties and reach their full potential, with a view to contributing to the common good.

      3. Theoretical Foundations of the Culture of Encouragement

      Three major theoretical frameworks support this approach:

      1. The Anatomy of Encouragement (Wong): Effective encouragement acts on four dimensions:

      Awareness: Helping the person see things differently.  

      Self-confidence: Strengthening the sense of self-efficacy and competence.  

      Valuing potential: Helping the person see that they are capable of going further.  

      Affection, support, empathy: Expressing connection and support.  

      Point of caution: Gender biases exist in how such encouragement is applied, and they must be corrected.

      2. Self-Determination Theory (Ryan & Deci): To feel encouraged, an individual needs their three basic psychological needs to be met:

      Autonomy: The feeling of thinking by and for oneself, and of acting by choice.  

      Competence: The feeling of being capable and effective.  

      Relatedness: The feeling of being connected to others.

      3. The Growth Mindset (Carol Dweck): This theory aims to develop:

      ◦ A sense of effort. 

      ◦ The conviction that challenges help us learn.

      ◦ The idea of continuous improvement.   

      ◦ This mindset must be shared by adults and students alike to create a true "learning community."

      Practicing encouragement: It must be precise and objective, avoiding superlatives, value judgments, or an exclusive focus on results. It is not flattery, but a descriptive acknowledgment of the process.

      4. Implementation: Levers and Strategies

      Case Study: The "J'y arrive" Project

      Presented by Steven Calvez, this research project on mental arithmetic in the 10 000 school district ("circonscription") is a concrete example of a culture of encouragement in action.

      Objective: Set students up for success in mathematics to improve their self-confidence.

      Methodology:

      Explicit instruction and recognition of progress.  

      Ritualized, repeated tasks with an adapted progression.  

      Low-stakes assessment via very short daily tests.

      ◦ Systematically benevolent feedback with collective correction.

      Intended impacts:

      ◦ Reduce math anxiety.  

      ◦ Develop a taste for the activity.  

      ◦ Rebalance gender stereotypes in mathematics.

      Systemic dimension: The project is embedded in the school and district plans, fostering team involvement and the measurement of impact over several years.

      Psychosocial Skills (CPS) as a Strategic Lever

      Nadine Gaudin identifies psychosocial skills as a "lever of choice" for deploying the culture of encouragement.

      Definition (WHO, 1994): "A person's ability to respond effectively to the demands and challenges of everyday life [...] by adopting appropriate and positive behavior."

      Why this lever?

      Institutional framework: Recommended by the WHO, Santé publique France, and the French Ministry of Education (via a 2022 interministerial instruction).  

      Scientific evidence: Numerous meta-analyses (Durlak 2011, Cipriano 2023, etc.) demonstrate convincing effects. 

      Meeting needs: Psychosocial skills address the challenges of mental health, discipline, and social connection.

      | Proven Effects Demonstrated by Meta-Analyses on Psychosocial Skills |
      | --- |
      | More favorable school climate |
      | Better academic and professional success |
      | Increased safety |
      | Improved overall health |
      | Fewer risk behaviors |
      | Less distress and addiction |
      | Fewer health problems |

      5. Cultivating the Culture: A Systemic, Intentional Approach

      "A culture cannot be decreed. It is cultivated day after day."

      To be effective, the rollout must be systemic and involve every actor in the school ecosystem: parents, teachers, supervisory assistants (AED), student-life advisers (CPE), non-teaching staff, inspectors, partners, medical-social services, the municipality, and after-school programs.

      The Key Role of Leadership

      Administrators and anyone in a leadership position play an essential role at three levels:

      1. Individual: Being encouraging toward oneself, working on one's stance and professional habits.

      2. Collective: Launching projects, supporting existing initiatives, and ensuring constructive follow-up on difficulties.

      3. Institutional: Embedding the culture of encouragement in how meetings, councils, and parent relations are conducted.

      6. Handling Flashpoints

      Encouragement is harder in certain critical situations, each of which is an opportunity to strengthen the culture.

      1. Mistakes: Must be seen as learning opportunities. Whether the mistake is academic or behavioral, the strategy should aim to develop new skills or repair the relationship.

      2. Emotions: Must be treated as allies. Faced with an overflowing emotion, the response (the adult's or the student's) must be respectful of oneself, of others, and of the environment.

      3. Cooperation: A lack of cooperation is handled by supporting autonomy ("thinking for themselves, by themselves") rather than imposing a solution, which can trigger reactance.

      7. Guiding Principles and Points of Caution

      Deployment Levers

      Take care of teams and individuals.

      Respect each person's pace.

      Adapt the approach to the specific needs on the ground.

      Use the domino effect: start with volunteers and let their success inspire others.

      Points of Caution

      Keep thinking in nuances: Avoid polarized debates ("it's good" vs. "it's bad") and analyze the effects of each practice to make informed, well-adjusted choices.

      Keep the common good as a compass: Always balance the needs of the individual against those of the collective.

      Connect it to the mission of learning: The culture of encouragement is not an end in itself but a means of supporting students' learning and success.

      8. Q&A: Clarifications on Practice

      On rewards (praise, etc.): They constitute an external reference point that can hinder the development of independent thinking in students.

      While they may have positive short-term effects, they risk undermining intellectual emancipation and widening inequalities over the long term.

      On public encouragement: Avoid general, value-laden compliments ("You're amazing").

      These can breed comparison and jealousy. It is preferable to use objective, specific descriptions ("You worked seriously on your report; you gave concrete examples"), which can be shared publicly without counterproductive effects.

    1. Metacognition: Strategies for Successful Learning

      Executive Summary

      This summary analyzes metacognition-based teaching strategies for fostering the success of all students.

      Metacognition is defined as the set of processes by which individuals regulate their own cognitive activities, becoming the "pilot of their cognition."

      It has two main facets: explicit metacognition, the conscious knowledge of one's own learning processes ("learning how to learn"), and implicit metacognition, which rests on feelings and intrinsic motivation.

      Given the widely shared observations of attention difficulties, forgotten knowledge, and a lack of motivation among students, directly teaching metacognitive strategies appears to be a powerful lever.

      Concrete approaches include explaining how the brain works, managing attention, regulating memorization, and developing the cognitive flexibility to resist automatic responses.

      A central point is the relationship between success and motivation. Rather than assuming that motivation precedes success, field experience suggests that success is what generates motivation and the desire to learn.

      By putting students in situations where they succeed, offering them accessible tasks, and clarifying learning objectives, a virtuous circle of engagement is created.

      This approach is not a revolution but an evolution of professional practice toward more targeted teaching ("less but better"), and an effective tool for combating educational inequality.

      --------------------------------------------------------------------------------

      1. Foundations of Metacognition

      Metacognition is presented as an effective, research-based teaching method for preventing academic difficulties and fostering the success of all students.

      1.1. Definition and Key Capacities

      Metacognition encompasses all the processes by which an individual regulates their learning.

      According to Frédéric Guy, project officer at Cézanne, this includes the capacities to:

      • Regulate one's attention

      • Choose when to seek information

      • Plan and solve a problem

      • Spot and correct one's own errors

      These processes make it possible to predict how feasible a task is and to evaluate one's own performance. They rest on four fundamental capacities:

      1. Setting goals and identifying the actions needed to reach them.

      2. Detecting and identifying errors in order to remedy them.

      3. Evaluating one's results and conclusions.

      4. Revising the strategies used.

      1.2. The Two Facets of Metacognition

      It is essential to distinguish two complementary aspects of metacognition:

      | Type of Metacognition | Description | Characteristics |
      | --- | --- | --- |
      | Explicit (or declarative) | The classic view of "cognition about cognition": the student's ability to verbalize their strategies and knowledge about learning. | • Conscious and conceptual.<br>• Based on meta-representations (e.g., "to learn, I must do this").<br>• Covers perceptions about tasks ("this is hard") or about oneself ("I'm good at math"). |
      | Implicit | Regulation based on feelings dedicated to learning, tied to motivation and an intuitive appraisal of the effort required. | • Based on feelings and intuitions.<br>• Less conscious, more automatic.<br>• Directly influences motivation and engagement. |

      2. Teaching Approaches for Explicit Metacognition

      The goal is to give students the tools to become autonomous learners.

      The key quote from Marie Bridenne, pedagogical adviser, sums up this ambition:

      "Developing your metacognitive skills means becoming the pilot of your cognition."

      2.1. Understanding How the Brain Works

      For students to regulate their cognition, they first need to understand its basic mechanisms.

      Action: Talk about the brain in class, at every level, and question students about their representations ("Do we all have the same brain?", "How does it work?").

      Tools: Teaching resources such as the books Découvrir le cerveau à l'école (Canopé), Kididoc : Explore ton cerveau, and C'est (pas) moi, c'est mon cerveau !

      2.2. Managing and Adapting Attention

      Attention is a limited resource that must be mastered.

      Action: Put attention-training programs in place so students discover what attention is, what its limits are, and how to manage it autonomously (attentional balance, returning to calm).

      Tools: Structured programs such as ATOLE (Apprendre l'ATtention à l'écOLE) for cycles 2 and 3, and ADOLE for middle and high school.

      2.3. Regulating Memorization Processes

      Effective memorization rests on three pillars: understand, self-test, repeat.

      Action: Put routines and tools in place to structure memorization and review.

      Tools:

      Memo sheets to summarize knowledge.  

      Quiz cards written by the students themselves for self-testing.  

      Leitner boxes to organize spaced repetition of the material.  

      An expanding-review calendar to schedule revision sessions.
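      The Leitner-box principle mentioned above can be sketched in a few lines of Python. This is an illustrative sketch only: the number of boxes and the review intervals below are assumptions, not values given in the source.

      ```python
      # Leitner box sketch: a correctly answered card moves to the next box
      # (reviewed less often); a missed card goes back to box 1 (daily review).
      REVIEW_EVERY = {1: 1, 2: 3, 3: 7}  # box -> review interval in days (illustrative)

      def update_box(box, answered_correctly, max_box=3):
          """Return the card's new box after one review."""
          if answered_correctly:
              return min(box + 1, max_box)
          return 1  # missed: back to daily review

      def due_boxes(day, intervals=REVIEW_EVERY):
          """Which boxes are due for review on a given day (day 1 = first day)."""
          return [box for box, every in intervals.items() if day % every == 0]
      ```

      With these intervals, box 1 is reviewed every day, box 2 every third day, and box 3 once a week, so well-known cards naturally come up less often.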

      2.4. Resisting Automatic Responses and Staying Flexible

      Learning means acquiring automatic responses, but it also means knowing how to resist them in order to progress.

      Action: Train students to inhibit their reflexes in order to develop new strategies, a critical eye, and a greater tolerance for error.

      Examples:

      ◦ Understanding that the letter "O" does not always produce the sound [o].

      ◦ Changing procedures in mental arithmetic (e.g., to add 9, add 10 then subtract 1).

      3. Motivation and Implicit Metacognition: The Virtuous Circle of Success

      Motivation is essential for engaging with tasks. The sources raise a fundamental question:

      "Must you be motivated in order to want to learn and succeed? Or must you succeed in order to want to learn and become motivated?" The answer provided by field experience is that success is the main driver of motivation.

      3.1. Levers for Wanting to Learn

      To spark the desire to learn, it is crucial to create the conditions for success and for the pleasure of learning.

      Set students up for success: Performance goals can have harmful effects in the event of failure. Tasks should therefore be designed so that students see them as achievable.

      Develop motivating projects: Tie learning to concrete, stimulating projects (math rallies, vocabulary walks, the CNR project "J'y arrive !").

      Build on the four pillars of motivation:

      Interest: The pleasure taken in doing the task.  

      Importance: The value attributed to the task.  

      Effort: The perceived cost in energy.   

      Success: The sense of competence and actual achievement.

      3.2. Levers for Being Able to Learn

      Giving students the capacity to learn requires clarifying the framework and the objectives.

      Clarify learning objectives: Distinguish the real objective from the instructions.

      The student must understand what they are actually learning (e.g., not "coloring a map" but "learning to produce a map using a color code").

      Structure time and activities: Use a "menu of the day" to make the day's objectives visible and explicit.

      Verbalize learning: Establish a "learning journal" in which students note what they have understood ("I understood that...").

      This supports awareness and ownership of knowledge.

      4. Strategic Implementation

      Integrating metacognition into teaching practice must be planned systemically and progressively.

      4.1. Example of a District-Level Dynamic (2022-2025)

      | Year | Key Actions | Objectives |
      | --- | --- | --- |
      | 2022-2023 | • "Talents du cerveau" conferences.<br>• Seminar on neuromyths and flexibility. | Build a shared culture around metacognition. |
      | 2023-2024 | • Dissemination to teaching teams (staff councils).<br>• Hands-on workshops (F. Guilleray).<br>• Seminar on assessment practices. | Acculturate teachers and roll out the tools. |
      | 2024-2025 | • School-College Council on attentional and memory skills.<br>• CNR project "J'y arrive" (supported by JF Chesné).<br>• Support for beginning teachers. | Anchor the practices and monitor effects on students. |

      4.2. An Evolution of Professional Practice

      The metacognitive approach is "not a revolution but an evolution of professional habits."

      It invites a streamlining of practice under the principle of "LESS BUT BETTER," focusing on the strategies that have the greatest impact.

      Conclusion

      Teaching metacognitive knowledge and strategies is a powerful lever for combating educational inequality and fostering the academic success of ALL students. By giving students the keys to understand and regulate their own cognitive functioning, school enables them to move from being passive learners to being autonomous, self-aware actors in their own learning. This approach equips students to learn more effectively, and with greater serenity, throughout their lives.

    1. Although humanity has at times attained an understanding of the Trinity of the three persons of Deity, consistency requires that the human intellect perceive that there are certain relationships among all seven Absolutes. Nevertheless, not everything that is true of the Paradise Trinity is necessarily true of a triunity, for a triunity is something other than a trinity. In certain functional respects a triunity may be analogous to a trinity, but it is never homologous in nature with a trinity.

      "not everything that is true of the Paradise Trinity is necessarily true of a triunity"

    1. nearby cl barrie belleville brantford chatham-kent cornwall guelph hamilton kingston kitchener london montreal niagara region ottawa owen sound peterborough quebec saguenay sarnia sault ste marie sherbrooke sudbury thunder bay trois-rivieres windsor canada barrie calgary comox valley edmonton fraser valley halifax hamilton kamloops kelowna kingston kitchener kootenays lethbridge london montreal nanaimo new brunswick niagara region ottawa PEI peterborough prince george quebec red deer saskatoon sherbrooke st john's sudbury thunder bay toronto vancouver victoria whistler / squamish windsor winnipeg ca provs alberta brit columbia manitoba n brunswick newf & lab nova scotia nw territories ontario pei quebec saskatchwn yukon ca cities abbotsford calgary edmonton halifax hamilton kitchener montreal ottawa toronto vancouver victoria winnipeg us cities atlanta austin boston chicago dallas denver detroit houston las vegas los angeles miami minneapolis new york orange co philadelphia phoenix portland raleigh sacramento san diego seattle sf bayarea wash dc cl worldwide

      The navigation structure is also not clearly separated, defined, or labeled, which makes it harder for screen reader users to understand the page structure, move between sections of the content, and navigate the page in general.

    1. , the unordered list will be built from a collection of nodes, each linked to the next by explicit references. As long as we know where to find the first node (which contains the first item), each subsequent item can be found by successively following the links. With this in mind, the ListaNoOrdenada class must maintain a reference to the first node. Program 2 shows the constructor. Note that each list object will maintain a single ref

      the other part of implementing a linked list consists of building that collection of nodes, each linked to the next. To do this, the ListaEnlazada class is created with a reference to the first node, that is, the head, which contains the first item of the list. This node in turn holds a reference to the next node, and so on.
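A minimal Python sketch of the structure the annotation describes may help; the English names Node and UnorderedList stand in for the Nodo/ListaNoOrdenada classes of the original text, and the add method shown here (prepending at the head) is one common choice, not necessarily the one in "Program 2":

```python
class Node:
    """One item of the list: the data plus a reference to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None  # no next node yet

class UnorderedList:
    """The list object keeps a single reference: the head (first node)."""
    def __init__(self):
        self.head = None  # an empty list has no first node

    def add(self, item):
        # Prepend: the new node becomes the head and points at the old head.
        node = Node(item)
        node.next = self.head
        self.head = node

    def items(self):
        # Traverse from the head by following each node's next reference.
        out, current = [], self.head
        while current is not None:
            out.append(current.data)
            current = current.next
        return out
```

Traversal always starts at the head, exactly as the passage says: every later item is reached only by following the chain of next references.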

    1. buys terefah [nonkosher] meat and then brings it home. On Rosh Ha-Shanah [New Year] and on Yom Kippur [“the Day of Atonement”] the people worshipped here without one sefer torah [“Scroll of the Law”], and not one of them wore the tallit [a large prayer shawl worn in the synagogue] or the arba kanfot [the small set of fringes worn on the body], except Hyman and my Sammy’s godfather. The latter is an old man of sixty, a man from Holland. He has been in America for thirty years already; for twenty years he was in Charleston, and he has been living here for four years. He does not want to remain here any longer and will go with us to Charleston.

      The people practice religion but not with all the proper rules. There is no tradition except in the Samuel family and one older Jewish man.

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      We were very pleased to see the very positive evaluation of our work by all 3 reviewers and appreciate their constructive comments and suggestions. We have now addressed all reviewers’ comments by making changes and clarifications to the manuscript.

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      In the present manuscript, the authors present an in-depth study on the effect of a heat-shock response on the ability of yeast to regain viability after quiescence when their ability to respire is inhibited. They nicely demonstrate that these effects correlate with the measured diffusion coefficients, providing deeper insight into the (at least partially) responsible environmental stress response and the molecular players involved. This work is an important contribution to the growing (or resurging) field of the physical properties of the cell.

      We thank this reviewer for this very positive evaluation.

      My two main comments are the following:

      • The authors determine the diffusion coefficients from the MSD, as well as further analyze them all the way up to the confinement size. As far as I can judge from the manuscript, these analyses are for 2D systems and were initially developed for processes on membranes. How does this change for 3D systems? I understand that for a straightforward qualitative comparison of apparent MSD, this assumption is acceptable, but it may deviate more strongly with the additional analyses the authors present.

      This is indeed an important point, and the reviewer is correct that the trajectories are analyzed in 2D (x,y) while the cytoplasm is a 3D environment. We fully agree that this requires careful interpretation, particularly for metrics beyond the short-lag diffusion coefficient.

      First, for the diffusion coefficient, it is well established that for isotropic 3D motion the movements in all three dimensions are independent of each other and the projected 2D MSD satisfies:

      ⟨Δr_xy²(τ)⟩ = 4Dτ

      Thus, estimating D from the short-lag slope of the 2D MSD yields the correct diffusivity of the underlying 3D process (up to standard experimental corrections such as localization error and motion blur). This approach is therefore widely used in cytoplasmic SPT and GEM studies, including in yeast, and is not restricted to membrane diffusion [1, 2].
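As an illustration of this projection identity (not part of the authors' pipeline), one can simulate isotropic 3D Brownian motion and verify that the (x, y)-projected MSD recovers the input diffusivity through the 4Dτ slope; all parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps, n_particles = 1.0, 0.01, 200, 2000  # arbitrary units

# Isotropic 3D Brownian steps: each coordinate is Gaussian with variance 2*D*dt.
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_particles, n_steps, 3))
traj = np.cumsum(steps, axis=1)  # positions relative to each particle's origin

# Ensemble MSD of the 2D (x, y) projection at each lag time:
lags = np.arange(1, n_steps + 1) * dt
msd_xy = np.mean(np.sum(traj[:, :, :2] ** 2, axis=2), axis=0)

# Estimate D from the short-lag slope, using msd = 4*D*tau:
D_est = np.polyfit(lags[:20], msd_xy[:20], 1)[0] / 4.0
```

Dropping the z coordinate changes the prefactor from 6Dτ to 4Dτ but leaves D itself unchanged, which is the point being made in the response.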

      Regarding confinement-related metrics derived from longer time lags, we agree that these were originally developed and most rigorously interpreted for 2D systems. In our study, these quantities are intentionally used as effective in-plane (x,y) descriptors of particle motion rather than as a full reconstruction of a 3D confinement geometry. Mapping a 2D MSD plateau to an absolute 3D confinement size depends on assumptions about geometry and isotropy and cannot be done uniquely without full 3D tracking. Nevertheless, MSD-based analyses have been successfully extended to explicitly model and quantify 3D confined diffusion in previous studies, provided that full 3D trajectories or well-defined confinement geometries are available [2, 3].

      [1] Gómez-García, P.A., Portillo-Ledesma, S., Neguembor, M.V., Pesaresi, M., Oweis, W., Rohrlich, T., Wieser, S., Meshorer, E., Schlick, T., Cosma, M.P., Lakadamyali, M., 2021. Mesoscale Modeling and Single-Nucleosome Tracking Reveal Remodeling of Clutch Folding and Dynamics in Stem Cell Differentiation. Cell Rep. 34. https://doi.org/10.1016/j.celrep.2020.108614

      [2] Delarue, M., Brittingham, G.P., Pfeffer, S., Surovtsev, I. V., Pinglay, S., Kennedy, K.J., Schaffer, M., Gutierrez, J.I., Sang, D., Poterewicz, G., Chung, J.K., Plitzko, J.M., Groves, J.T., Jacobs-Wagner, C., Engel, B.D., Holt, L.J., 2018. mTORC1 Controls Phase Separation and the Biophysical Properties of the Cytoplasm by Tuning Crowding. Cell 174, 338-349.e20.

      [3] Lerner, J., Gómez-García, P.A., McCarthy, R.L., Liu, Z., Lakadamyali, M., Zaret, K.S., 2020. Two-parameter single-molecule analysis for measurement of chromatin mobility. STAR Protoc 1.

      Importantly, we do not assume perfect isotropy of the yeast cytoplasm. Local anisotropies are expected due to organelles, crowding heterogeneity, and cell geometry. However, the system is sufficiently close to isotropic at the length and time scales probed that the extracted confinement radius is highly reproducible across independent biological replicates. In our experiments, we observe consistent radii of confinement across three replicates, indicating that any bias introduced by partial anisotropy or projection into 2D is systematic and small.

      Based on the observed reproducibility and the finite depth of field of our measurements (~100 nm), we estimate that potential errors in the absolute values of confinement-related parameters arising from 2D projection and incomplete isotropy are small. We have now clarified this point explicitly in the Methods section, emphasizing that confinement parameters are effective 2D measures, that the cytoplasm is not assumed to be perfectly isotropic, and that the conclusions rely on consistent, comparative measurements obtained under identical imaging and analysis conditions. The updated Methods paragraph is as follows:

      […] Trajectory analysis: Radius of Confinement

      The radius of confinement was obtained only for the subgroup of confined trajectories. It quantifies the degree of confinement by estimating the radius of the 2D area explored by the particle in the imaging plane, which serves as a proxy measurement for the 3D volume that it explores. It was measured by fitting a circle-confined diffusion model to the TE-MSD (ensemble of all trajectories) (Wieser and Schütz, 2008).

      TE-MSD = R^2 * (1 - exp(-4*D*t_lag/R^2)) + O

      where R is the radius of confinement and D is the diffusion coefficient at short timescales. O is an offset value that comes from the localization precision limit inherent to localization-based microscopy methods.

      Trajectories were analyzed in the imaging plane (x,y), and confinement metrics were therefore derived from 2D MSDs. Although particles diffuse in a three-dimensional cytoplasmic environment, projection onto 2D does not bias estimation of the short-lag diffusion coefficient for isotropic motion, since the projected MSD follows ⟨Δr_xy²(τ)⟩ = 4Dτ. However, confinement-related parameters derived from longer lag times should be interpreted as effective in-plane descriptors of mobility rather than as a direct reconstruction of a full 3D confinement geometry. Mapping a 2D MSD plateau to an absolute 3D confinement size would require explicit assumptions about geometry or full 3D tracking. Our conclusions rely on comparative analyses performed under identical imaging and analysis conditions, and the extracted confinement radii were highly reproducible across biological replicates, indicating that any bias introduced by 2D projection or moderate anisotropy is systematic and does not affect the validity of the relative differences reported.
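For readers who want to reproduce this kind of fit, the circle-confined diffusion model quoted above can be fitted with a standard least-squares routine; the sketch below uses synthetic MSD values with made-up parameters, so it illustrates the fitting procedure only, not the authors' data:

```python
import numpy as np
from scipy.optimize import curve_fit

def confined_msd(t_lag, R, D, O):
    """Circle-confined diffusion model: the MSD saturates near R**2, plus offset O."""
    return R**2 * (1.0 - np.exp(-4.0 * D * t_lag / R**2)) + O

# Synthetic TE-MSD with known (made-up) parameters and a little noise:
rng = np.random.default_rng(1)
t = np.linspace(0.03, 3.0, 100)            # lag times, s
R_true, D_true, O_true = 0.4, 0.05, 0.002  # µm, µm²/s, µm²
msd = confined_msd(t, R_true, D_true, O_true) + rng.normal(0.0, 1e-4, t.size)

# Fit recovers the radius of confinement, the short-timescale D, and the offset:
(R_fit, D_fit, O_fit), _ = curve_fit(confined_msd, t, msd, p0=[0.3, 0.01, 0.0])
```

At short lags the model reduces to 4Dτ + O, so D is set by the initial slope, while the plateau level fixes R; the offset O absorbs the localization-precision term described in the text.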

      • The authors show data in the supporting information where the GEMs provide larger foci after stress with longer imaging times. Could the authors provide the images of the shorter imaging times that they use? That seems a more equal comparison than Figure C. It is also unclear to me why fixed cells are used in Figure C, as well as the meaning of the x-axis. In line with this, can the authors exclude that GEMs dimerize/oligomerize after stress, and therefore display a lower diffusion coefficient?

      We are happy to include the images acquired at a shorter time interval and have done so (Fig S2A). We apologize for insufficiently explaining the GEM intensity experiment shown in Figure S2C. The fixation was done to immobilize the GEMs, since they are rapidly diffusing in live cell imaging and the diffusion speed relative to camera exposure time will impact the brightness (any movement of a particle during exposure causes the signal on the detector to become “blurred” and reduces the intensity per pixel). Hence, GEM brightness does not solely reflect the monomer or potential aggregate/multimer state, but is also affected by diffusion speed and exposure time: faster moving GEMs will generally appear dimmer than slower moving ones, since the signal detection during the acquisition time is reduced by the particle movement. Another effect is that, since GEMs are moving in live cell imaging, they have a probability of spatially overlapping, enhancing the signal levels of the single detected spots.

      We have quantified the brightness distribution in the different conditions to detect aggregation or multimerization of GEMs, which we expect to be visible as a shoulder on the Gaussian curve. The x-axis shows the intensity which we have determined for each trajectory. We chose to assess GEM intensity in the frame with the highest intensity, and to take the “Total” intensity, meaning we sum up the intensity of the pixels within the Point Spread Function (PSF) of each localization in that frame.

      To clarify these points, we have extended the description of this experiment in the Results and Methods sections:

      Results:

      [...] Additional evidence for this comes from the observation that imaging GEMs at a lower frame rate (i.e., longer exposure time of 100 ms) showed a uniformly diffuse signal in SCD, whereas distinct foci appeared under starvation conditions (Figures S2A and S2B). This might suggest that GEMs aggregate in starvation. However, imaging GEMs at a faster frame rate (used for SPT, 30 ms exposure time) shows GEMs freely diffusing in all conditions (Figure S2A). Furthermore, analyzing GEM particle intensities in fixed cells, to eliminate motion blur-induced intensity attenuation, showed uniform GEM brightness distributions in all conditions (Figure S2C). Rather than aggregates, the bright foci thus represent immobile, single GEM particles that are confined and appear brighter during long exposure times due to their confinement in low-diffusive compartments. [...]

      Methods:

      [...] Trajectory analysis: Track Total Intensity

      To assess GEM brightness, we determined the intensity of each trajectory in fixed cells. Cell fixation eliminates the motion blur-induced intensity attenuation, which would otherwise confound the GEM brightness depending on the movement speed and confinement. For each individual particle trajectory, the frame with the highest signal intensity of the localized particle was determined and the sum of the pixel intensities of the particle in that frame was calculated as the “Track Total Intensity”. In fixed cells, the GEM intensities were comparable in all conditions (Figure S2C). All GEM intensity histograms show a single, bell-shaped distribution of intensities with no indication of several GEM particles aggregating into brighter foci. [...]
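The "Track Total Intensity" metric as defined above can be written in a few lines of numpy; this is a schematic interpretation (patch extraction, background subtraction, and taking the brightest frame as the one with the largest summed PSF intensity are assumptions, not the authors' exact implementation):

```python
import numpy as np

def track_total_intensity(psf_patches):
    """Track Total Intensity for one trajectory.

    psf_patches: array of shape (n_frames, h, w) holding the background-
    subtracted pixel intensities within the PSF of the localized particle
    in each frame. The brightest frame is taken as the one with the largest
    summed PSF intensity, and that sum is returned.
    """
    per_frame = psf_patches.reshape(psf_patches.shape[0], -1).sum(axis=1)
    return float(per_frame.max())
```

In fixed cells, histograms of this quantity over many trajectories would then be inspected for a single bell-shaped distribution, versus a bright shoulder that would indicate GEM multimers or aggregates.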

      Other comments: - For the precision of the language, the authors equate ribosome content with macromolecular crowding, with the diffusion of the GEMs throughout, and this becomes more conflated in the discussion, where it is compared to viscosity and macromolecular crowding effects, e.g., translation. Is it macromolecular crowding, mesoscale crowding, nano-rheology, or ribosome crowding? What is measured precisely?

      We agree that careful and consistent nomenclature is important and thank the reviewer for bringing this point to our attention. We believe our manuscript maintains the proper distinctions between the terms diffusion, crowding and viscosity. We refer to what we study with the GEM single-particle tracking consistently as “(cytoplasmic) diffusion”. In Figure 2, we add “crowding” as an additional term since we observe a change in ribosome concentration and we affect the cytoplasmic crowdedness with a hyperosmotic shock. Our in-depth analysis of the confined and unconfined trajectory diffusion suggested that the cytoplasm is not simply globally affected by crowding or viscosity, but contains regions or compartments that trap GEMs. Apart from Figure 2, we do not use the terms viscosity or crowding, and we only return to “crowding” in the Discussion, either in reference to the aforementioned experiments from Figure 2 (ribosome concentration, hyperosmotic shock) or when discussing studies from cited works.

      However, we did not use the term “macromolecular crowding” consistently and simplified it to “crowding” in a few instances. To be more precise, we now specify “macromolecular crowding” instead of “crowding” wherever applicable; namely in the text referring to Figure 2, where we specifically assess macromolecular crowding.

      • In the EM images, the ribosomes seem smaller after starvation. Is that correct, and how should we interpret this? Is this due to an increased number of monosomes?

      This is an important point, and it indeed appears that in SCD some ribosomes are close together, potentially as polysomes. In SC, the ribosomes appear more distinctly separated from each other, which would be expected due to the polysome collapse that occurs in starvation. However, the apparent size of individual ribosomes is identical in both conditions. Unfortunately, the resolution is not good enough to accurately measure the sizes of the ribosomes and clearly determine their monomer/polysome state.

      • The authors refer to recent work on how biochemical reactions, such as translation, are determined by the cytoplasm. There is some older work on this, see for example in bacteria https://doi.org/10.1073/pnas.1310377110, and also in vitro here DOI: 10.1021/acssynbio.0c00330

      We thank this reviewer for pointing out these publications and have included them in this group of citations.

      • On the section of correlating diffusion and survival outcomes (bottom page 12), it is mentioned that the lowered diffusion could enhance aggregation. However, literature indicates that the opposite is true in buffer; lower diffusion reduces aggregation (also nucleation is inversely proportional to the viscosity).

      This is a valuable point and we have happily expanded on it in the Discussion section. It is true that chemical assays have demonstrated that higher viscosity and slower diffusion decrease nucleation and aggregate formation. However, in vitro studies that alter diffusion through crowding changes have revealed a complex relation between crowding and aggregation propensity. The basic idea is that the excluded volume effect decreases aggregation by stabilizing the more compact, folded state. But the opposite effect, precluded protein folding, has also been ascribed to the excluded volume effect. To date, studies with different crowders (dextran, Ficoll, PEG, etc.) have demonstrated either increased or reduced protein aggregation upon crowding [1, 2, 3, 4]. The variable effect on aggregation appears to depend not only on the protein studied, but also on the properties of the crowder (charge, shape, size), the interaction of the crowder with the protein, and the mixture of crowders [5].

      Even though the relationship between crowding and protein aggregation is complex, we speculate that lower diffusion in our more crowded cells could cause protein aggregation, because these starvation conditions are known to induce the formation of protein fibrils and the condensation of mRNA and proteins.

      [1] Uversky, V.N., M. Cooper, E., Bower, K.S., Li, J., Fink, A.L., 2002. Accelerated α-synuclein fibrillation in crowded milieu. FEBS Lett. 515, 99–103. https://doi.org/10.1016/S0014-5793(02)02446-8

      [2] Munishkina, L.A., Cooper, E.M., Uversky, V.N., Fink, A.L., 2004. The effect of macromolecular crowding on protein aggregation and amyloid fibril formation. J. Mol. Recognit. 17, 456–464. https://doi.org/10.1002/jmr.699

      [3] Biswas, S., Bhadra, A., Lakhera, S., Soni, M., Panuganti, V., Jain, S., Roy, I., 2021. Molecular crowding accelerates aggregation of α-synuclein by altering its folding pathway. Eur. Biophys. J. https://doi.org/10.1007/s00249-020-01486-1

      [4] Mittal, S., Singh, L.R., 2014. Macromolecular crowding decelerates aggregation of a β-rich protein, bovine carbonic anhydrase: a case study. J. Biochem. 156, 273–282. https://doi.org/10.1093/jb/mvu039

      [5] Kuznetsova, I.M., Zaslavsky, B.Y., Breydo, L., Turoverov, K.K., Uversky, V.N., 2015. Beyond the excluded volume effects: Mechanistic complexity of the crowded milieu. Molecules 20, 1377–1409. https://doi.org/10.3390/molecules20011377

      To be more precise, we have therefore extended our Discussion section. We believe part of this additional discussion fits better in an earlier section, where we specifically discuss how the cytoplasmic properties, and specifically crowding, have been linked to filament/condensate formation. The updated paragraphs are as follows:

      [...] Additional cytoplasmic rearrangements occur upon energy depletion, including filament formation or the formation of biomolecular condensates (Narayanaswamy et al., 2009; Noree et al., 2010; Petrovska et al., 2014; Prouteau et al., 2017; Riback et al., 2017; Saad et al., 2017; Marini et al., 2020; Stoddard et al., 2020; Cereghetti et al., 2021) highlighting a broader reorganization of the cytoplasm that could further affect the diffusion of macromolecules. In turn, the amount of crowding might also influence the propensity to form condensates and filaments (Heidenreich et al., 2020). Interestingly, in vitro studies have demonstrated a complex, dual effect of crowding on protein fibrillation and aggregation, in suppressing or accelerating it (Uversky et al., 2002; Munishkina et al., 2004; Mittal and Singh, 2014; Biswas et al., 2021). This appears to be dependent not only on the protein of study, but the properties of the crowder (size, charge, shape) and the specific mixture of crowders (Kuznetsova et al., 2015). [...]

      [...] By contrast, extremely low diffusion, as seen in the absence of respiration in glucose starvation, might irreversibly impair cellular functions due to limited movement of proteins and RNA in and out of certain compartments, cellular territories and condensates. Such a model is supported by our analysis of how lower diffusion is the result of confined spaces becoming more prevalent, creating compartments that can trap macromolecules. As previously mentioned, increased crowding and reorganization of the cytoplasm have been linked to condensation and fibril formation of proteins, and, in certain in vitro contexts, accelerated aggregation. This state of crowding-induced low diffusion might therefore enhance protein aggregation or preclude the refolding of damaged proteins, which could disrupt proteostasis and lead to toxic aggregates that are a hallmark of the aging process (López-Otín et al., 2013). Together, these effects on proteins, RNA and other macromolecules likely lead to loss of cell fitness and irreversible arrest of the cells, preventing their reentry into the cell division cycle. [...]

      Reviewer #1 (Significance (Required)):

      General assessment:

      Strengths: It is a comprehensive study that provides a wealth of information and insight into the intricacies of a field that has received considerable attention, and its views are evolving rapidly.

      Weaknesses: It may suffer from some overinterpretation of diffusion data.

      Advance: The significant advance is that the molecular response pathway and precise molecular players are connected to the biophysical response of cells to starvation/quiescence. The dependence of diffusion on starvation has received considerable attention (Jacobs-Wagner, Cell, 2014; the current authors in eLife, 2016; and more recent investigations by Holt, Delarue, and others). Still, the authors take the next step and demonstrate how quiescence, and particularly how the history of a cell affects it, correlates strongly with the diffusion. As far as I can tell, this is new. As mentioned, the molecular insights into the pathways are exceptionally strong from my perspective. From personal experience, this work is also very important for researchers outside of the field from a practical standpoint: Do your measurements change when you stress cells by walking to a microscope? And even if you incubate them there, your measurement outcome will change. In my experience, this is a crucial point, and the cell's history is often overlooked.

      Audience: Broad -- biophysicists, molecular biologists, cell biologists, biotechnologists.

      My field of expertise: Biophysics.


      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      This manuscript addresses an important and longstanding question in the field: how eukaryotic cells remodel themselves to enter and survive quiescence, particularly under nutrient limitation and energy stress. The authors combine tools from biophysics, proteomics, stress signaling, and functional genomics to propose that stress-induced cytoplasmic reorganization, rather than ATP availability per se, is critical for long-term survival when respiration is impaired. The topic is timely, the experiments are generally well executed, and the initial phenomenology is compelling.

      The paper begins with a set of clear and convincing figures that establish an interesting and biologically important phenotype: when cells are shifted into glucose starvation, they can survive long term only if respiration is functional. Blocking respiration with Antimycin A (AntA) severely compromises viability. One straightforward hypothesis is that this defect simply reflects a failure to generate sufficient ATP. The authors, however, show that a 30-minute heat shock (HS) before glucose withdrawal in the presence of AntA largely rescues survival, even though cellular ATP levels remain critically low. In parallel, they use very well-executed GEM single-particle tracking experiments to demonstrate that cytoplasmic particle mobility decreases markedly in glucose-starved, respiration-deficient cells, and that this diffusion defect is also rescued by the pre-HS, again without restoring ATP. Together, these initial figures strongly support the idea that stress-induced remodeling of the cytoplasm, rather than ATP levels per se, is a key determinant of whether cells can enter and maintain a viable quiescent state.

      The authors then propose that this protective effect of HS is mediated by induction of the environmental stress response (ESR) and by resulting changes in protein expression. To test whether new protein synthesis is required, they pre-treat cells with cycloheximide during the HS and recovery period. This treatment largely, although not completely, abrogates the beneficial effect of HS on survival and diffusion in AntA-treated, glucose-starved cells. This is a strong experiment and supports the idea that HS-induced synthesis of specific proteins is important for protection, while also hinting that some cycloheximide-insensitive or pre-existing components may contribute.

      To identify the relevant proteins, the authors turn to global proteomic analysis, comparing multiple conditions: glucose starvation (SC), heat shock followed by glucose starvation (HS SC), glucose starvation plus AntA (SC + AntA), and heat shock followed by glucose starvation plus AntA (HS SC + AntA), each at 1 and 20 hours. This is where, in my view, the story becomes significantly harder to follow. The text for Figure 3 relies almost entirely on GO term enrichment, with very little description of individual proteins or even basic quantitative summaries of the dataset. For example, the authors never clearly state how many proteins were robustly quantified, nor what fraction of the proteome that represents. Without this foundational information, it is difficult to evaluate the strength and generality of their conclusions. Related to this, the GO analysis in Figure 3F reports "significant" enrichment for categories such as ribosomes or translation, yet the underlying number of proteins making up these enrichments is not shown. From the volcano plots, it appears that only a very small number of proteins change in some conditions (e.g., SC 20 h), and yet GO terms appear with extremely strong q-values. This is confusing: how can such strong enrichment occur if only a handful of proteins are changing?

      At minimum, the authors should provide:
      • the number of significantly up- or down-regulated proteins in each comparison
      • the number of proteins contributing to each enriched GO category
      • the magnitude of the changes for these proteins

      Because the absolute number of significantly changing proteins appears small in several conditions, the current heavy reliance on GO analysis feels unwarranted and potentially misleading. In such cases, it would likely be more informative to list all differentially abundant proteins (either in supplementary materials or in a main-text table) and briefly describe the most relevant ones, rather than relying on broad category labels. Figure 3F, in particular, needs substantially more explanation.

      A related issue appears in Figure 3G (and the associated text), where the authors emphasize that the proteomic response to HS + AntA and the response to long-term glucose starvation are distinct. While this conclusion is plausible, the analysis also shows a subset of proteins that are upregulated in both conditions. These overlapping proteins may, in fact, represent the core protective module that enables survival in quiescence. The authors do not discuss these proteins at all; instead, they are effectively dismissed in favor of the "distinct responses" narrative. I encourage the authors to identify and discuss these overlapping proteins explicitly. Are they chaperones, proteasome components, antioxidant enzymes, or other classical stress-response factors? Even if the global proteomes differ, the overlapping subset could be highly informative about the minimal set of proteins required to stabilize the cytoplasm and support entry into quiescence.

      The SATAY screen is a major strength of the paper, as it moves from correlative proteomics to functional genetic analysis. The approach appears well-controlled, but key information is missing: How many unique insertions were obtained? Was the library saturating? What was the read distribution and coverage? The authors also discuss only a small subset of the screen hits. The volcano plots show many additional genes that are not addressed. What categories do these fall into? Are they informative about pathways beyond Ras/PKA and Msn2/4? Presenting a fuller analysis would strengthen the mechanistic interpretation.

      The parts of the SATAY analysis that are discussed are solid. The screen implicates the Ras/PKA signaling axis and Msn2/4 in survival under HS-preconditioned, respiration-deficient starvation, and the authors validate these hits with targeted survival assays. The correspondence between genetic perturbations and changes in cytoplasmic diffusion is an intriguing connection. However, the analysis stops short of identifying the downstream effector proteins that actually produce the biophysical benefits observed.

      The manuscript then returns to the idea that improved cytoplasmic diffusion and reduced confinement may be essential for survival. This is an appealing hypothesis, but the evidence remains correlative. It is still unclear whether biophysical rescue is the cause of improved survival or simply a downstream marker of a properly induced stress response. What remains missing is deeper integration of the proteomics and SATAY data to identify which proteins are likely responsible for the adaptive changes in cytoplasmic organization. Overexpression of promising candidates, such as chaperones or proteostasis factors found in the overlap between HS and long-term starvation responses, could help determine whether any single protein or small group of proteins can phenocopy the HS-induced rescue.

      Importantly, many of the comments above are intentionally broad: the manuscript does not simply require small clarifications but would benefit from substantial expansion and deepening of the analysis. The observations are compelling, but the mechanistic chain connecting ESR activation → proteomic remodeling → cytoplasmic biophysics → survival remains insufficiently developed in the current draft. Clearer quantitative reporting, fuller presentation of the data, and more thoughtful interpretation would significantly strengthen the manuscript.

      We thank reviewer 2 for this very thoughtful evaluation of our manuscript. We agree that expanding the descriptions and analysis of the presented data will improve the manuscript. Importantly, we now provide the proteomics data and the SATAY screen in an accessible format as supplementary materials. We address the individual points below.

      Summary of Major Issues That Need to Be Addressed

      • Quantitative clarity in the proteomics
        o State how many proteins were quantified.
        o Report the numbers of significantly changing proteins in each condition.
        o Identify the proteins underlying each GO term and provide effect sizes.

      We have now included a supplemental table containing label-free protein abundances for all 3308 reproducibly quantified proteins across all nine conditions (Supplemental Table S4). In addition, we added a sentence to the main text specifying both the number of reproducibly identified proteins and the approximate coverage of the yeast proteome.

      For the comparison of protein abundances between the different stress conditions and logarithmically growing SCD cells, we now indicate the number of significantly changed proteins in the legend of Figure 3E. Furthermore, we include a heatmap of standardized protein abundances for all proteins that were significantly changed in at least one stress condition (Supplemental File S1) and provide all pairwise comparison results in the supplemental table (Supplemental Table S5). This new Supplemental File S1 replaces the previous Supplemental File S1, which had a stricter cutoff, showing all proteins with an abundance change greater than 2 standard deviations.

      The information requested by the reviewer regarding GO term analysis is indeed important and was missing in the original version. We now report, for each GO term, the number of proteins in the top or bottom 10% of differentially abundant proteins and provide the corresponding effect size, calculated as the ratio of the observed to expected hits (Figure 3F).
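
      For readers unfamiliar with this metric, the observed/expected effect-size calculation described above can be sketched as follows; the function name and all numbers are purely illustrative and not taken from the manuscript:

```python
# Hedged sketch of an observed/expected GO-term effect size.
# All names and numbers are illustrative, not from the study.

def go_effect_size(hits_in_selection, selection_size, term_size, background_size):
    """Ratio of observed to expected hits for one GO term.

    The expected count is what a random draw of `selection_size` proteins
    from `background_size` quantified proteins would yield for a term
    annotating `term_size` proteins.
    """
    expected = selection_size * term_size / background_size
    return hits_in_selection / expected

# Example: ~3300 quantified proteins, top 10% (330) selected,
# a GO term annotating 50 proteins, 15 of which fall in the selection:
effect = go_effect_size(15, 330, 50, 3300)  # expected = 5.0, effect size = 3.0
```

      An effect size above 1 indicates over-representation of the term among the selected proteins.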

      • Over-reliance on GO analysis
        o Provide explicit lists of differentially expressed proteins.
        o Indicate whether enrichment results are meaningful given the small number of hits.

      We appreciate this reviewer’s comment and agree that the presentation of the proteomic data in Figure 3 relies strongly on GO term enrichment, with limited description of individual proteins. Our primary goal for the proteomic analysis was to characterize the cellular response to stress at a global level rather than to focus on individual proteins or stress-specific details. We therefore intentionally opted for a broader, more coarse-grained analysis to not overcomplicate the manuscript and maintain accessibility for a broad readership.

      That said, we agree that the underlying data should be made fully accessible. We have therefore expanded the supplemental materials to include a heatmap of all proteins that were significantly changed in at least one condition (Supplemental File S1), as well as comprehensive tables reporting protein abundances and pairwise differences across all stress conditions (Supplemental Tables S4 and S5). These additions provide direct access to the protein-level data while preserving the clarity of the main text.

      With respect to the GO term analysis, to avoid overinterpretation driven by small protein sets and to ensure better comparability across conditions, we always performed the GO enrichment on the top and bottom 10% of changed proteins. This is already stated in the legend of Figure 3F and in the Methods section. We have now added the key missing parameters of the analysis to Figure 3F (see response above). Given that the analysis identifies multiple GO terms generally associated with the environmental stress response and that these terms exhibit coordinated behavior across conditions (Figure S3A), we are confident that the conclusions drawn from this analysis are robust.

      • Overlooked overlapping proteins
        o Analyze and discuss the subset of proteins upregulated both by HS and by long-term starvation.
        o These may represent the core factors enabling survival.

      Indeed, we agree that the overlapping proteins observed in our Figure 3G analysis should be presented. Perhaps surprisingly, these proteins (Hxt5, Sps19, Atg8, Aim17, Put1, Fmp45, YNL194C) have diverse functions and have so far not been implicated in the environmental stress response.

      In the Results section, we now mention and briefly discuss the four proteins present at both time points of the HS SC +AntA condition, and we list all of them in the figure legend.

      The modified text from the Results section is as follows:

      [...] Furthermore, the proteins that are enriched in long-term starvation (SC 20 h vs. SCD) and those enriched in pre-HS respiration-deficient starvation (HS SC +AntA 1 h vs. SCD; HS SC +AntA 20 h vs. SCD) are poorly correlated and there is only a small overlap of factors that are significantly upregulated in all conditions (Figure 3G). These proteins are Aim17, Put1, Fmp45 and YNL194C. Aim17 is a mitochondrial protein of unknown function and Put1 is a mitochondrial proline dehydrogenase. Fmp45 and YNL194C are paralogous membrane proteins involved in cell wall organization. Focusing on the broad proteomic adaptation, we looked at the Gene Ontology (GO) terms of the proteomic changes across all conditions, and observed that long-term starvation (SC 20 h) leads to the upregulation of a few groups of proteins, mostly involved in respiratory activity and rewiring of the metabolism (Figure S3A). [...]

      We greatly appreciate the suggestion to perform an overexpression experiment. However, the overlapping proteins are not significant hits in the SATAY screen, suggesting that they are individually not required for the survival rescue, although their overexpression might benefit survival.

      We have therefore chosen to keep a broad perspective on the proteomics results and investigate instead the SATAY results in more detail, since they inherently contain functional relevance to survival. Overall, we feel that the overexpression of those (individually or as a group) would extend beyond the scope of our current manuscript.

      • SATAY analysis needs fuller presentation
        o Provide insertion numbers, coverage, and basic library statistics.
        o Discuss additional hits beyond the Ras/PKA/Msn2/4 pathways.
        o Integrate SATAY results more deeply with proteomics.

      We have added the insertion numbers and genome coverage percentages to the Methods section as follows:

      [...] SATAY Screen: Analysis and Plotting

      Sequencing detected the following numbers of unique transposons: 690’935 (A1), 558’932 (HA1), and 359’935 (HA4d). The transposon insertions in the different genes yielded the following genome coverages: 96.3% (A1), 94.5% (HA1) and 89.3% (HA4d). For each gene [...]

      We now also provide the SATAY screen data as Supplemental Table S6.
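
      As a rough illustration of how such coverage figures are derived (gene names and insertion counts below are invented, and the real analysis runs over the full annotated gene set):

```python
# Hedged sketch: genome coverage as the percentage of genes carrying
# at least one unique transposon insertion. All data are invented.

def genome_coverage(insertions_per_gene):
    """Percent of genes with at least one transposon insertion."""
    covered = sum(1 for n in insertions_per_gene.values() if n > 0)
    return 100.0 * covered / len(insertions_per_gene)

library = {"MSN2": 42, "MSN4": 17, "ARX1": 0, "BUD22": 5}
coverage = genome_coverage(library)  # 3 of 4 genes hit -> 75.0
```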

      In the Results section, we mention some additional hits from the SATAY screen (ribosome biogenesis, mitochondrial respiration) before shifting our focus to the ESR genes. We now add a comment on the ribosome biogenesis genes before turning to the ESR:

      [...] The screen revealed several highly significant gene disruptions that promote or impair the HS-mediated rescue of respiration-deficient, glucose-starved cells (Figure 4A, Supplemental Table S6). The most significant gene hits that impair survival in 4 d HS SC +AntA when disrupted are involved in a variety of cellular processes, including ribosome biogenesis (e.g., ARX1, BUD22, RRP6), mitochondrial respiration (e.g., CBR1, COX23, ETR1), and ESR (e.g., MSN2, PSR2, YAP1). Intriguingly, the ribosome biogenesis genes being crucial for survival suggests that new ribosomes might have to be produced to ensure proper translational response during the HS. Notable among the ESR genes are MSN2 and, less significantly scored, MSN4, the master regulators of the ESR. [...]

      To deepen the discussion of the limited overlap between the SATAY screen and the proteomics, we have added a sentence highlighting that the SATAY screen detected the main regulators of the ESR, whereas the proteomics revealed its downstream targets involved in proteostasis and other stress proteins; both data sets therefore point to the ESR as the crucial response behind the HS-induced rescue. The modified Discussion text is as follows:

      [...] Furthermore, the signaling genes that scored highly in the SATAY screen are often regulated through their activity rather than their abundance. Plausibly, their downstream target proteins are differentially expressed, whereas disrupting the regulators themselves leads to strong survival phenotypes. Similar observations have been made in other stress conditions, where fitness-relevant genes showed little overlap with genes with upregulated expression (Birrell et al., 2002; Giaever et al., 2002). Nonetheless, the SATAY screen revealed the principal regulators of the ESR while the proteomic analysis detected many of the ESR downstream targets involved in proteostasis and oxidative stress, demonstrating a functional convergence on the ESR in both data sets. [...]

      • Mechanistic depth remains limited
        o Clarify whether cytoplasmic biophysical rescue is causal or downstream.
        o Test whether overexpression of candidate proteins can mimic HS-induced protection.
        o Expand the discussion of potential mechanisms using insights from both datasets.

      Indeed, the specific mechanism(s) that govern the cytoplasmic properties in our conditions are currently not known, preventing us from manipulating these properties and confirming a causal relationship. Uncovering the mechanisms would require extensive follow-up studies on ESR genes and/or proteins, going beyond the scope of this manuscript. Furthermore, our ongoing follow-up studies point towards redundancy among potential regulators of cytoplasmic diffusion, further complicating the analysis.

      The suggested overexpression experiment is addressed in our response to the previous comment on the overlapping proteins.

      Reviewer #2 (Significance (Required)):

      This manuscript addresses a fundamental and timely question in cell biology: how eukaryotic cells remodel themselves to enter and survive quiescence, particularly under conditions of nutrient depletion and compromised energy production. Although quiescence has been studied for decades, the mechanisms that link metabolic state, stress signaling, and the physical properties of the cytoplasm remain incompletely understood. This work brings together biophysical measurements, global proteomics, and unbiased genetic screening in an ambitious effort to illuminate how cells maintain viability when respiration, and thus efficient ATP generation, is disrupted. A key conceptual contribution of this study is the demonstration that ATP levels alone do not dictate survival during starvation. Rather, the ability of cells to mount an appropriate stress response and reorganize the cytoplasm appears to be crucial. The early figures provide compelling evidence that heat shock preconditioning can rescue both viability and cytoplasmic mobility in respiration-deficient cells, even when ATP remains low. This finding is notable because it challenges the widely held assumption that energy charge is the primary determinant of successful entry into quiescence. If strengthened by deeper mechanistic analysis, this insight could reshape how the field views energy stress and cellular dormancy. The identification of the Ras/PKA-Msn2/4 axis as a key regulatory node is also significant, as it connects quiescence survival to well-established nutrient and stress signaling pathways. The integration of a genome-wide SATAY screen adds functional depth and offers the potential to uncover specific downstream effectors that remodel the cytoplasm or stabilize cellular structures during prolonged stress.
      Finally, the manuscript touches on a concept that is gaining traction across many subfields of biology: that the biophysical state of the cytoplasm is a regulated and physiologically meaningful parameter, not merely a passive consequence of metabolic decline. Understanding how cells tune macromolecular crowding, diffusion, and spatial organization during quiescence could have broad implications beyond yeast, including in stem cell biology, microbial dormancy, cancer cell persistence, and aging. Overall, the questions addressed are important, and the study has the potential to make a meaningful conceptual contribution. However, realizing that impact will require clearer and deeper mechanistic analysis, particularly in the proteomics and SATAY sections, to convincingly identify the specific factors and pathways that mediate the cytoplasmic remodeling underlying survival.


      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      Summary. Yeast haploid cells enter quiescence during nutrient deprivation, undergoing major metabolic, transcriptional and biophysical changes. In particular, quiescent cells remodel their cytoplasm, increasing macromolecular crowding and reducing diffusion. Respiration is known to be essential for entry into quiescence and long-term survival.

      In this study, the authors discovered that respiration is not intrinsically required for yeast to survive glucose-starvation-induced quiescence. In particular, they found that a short heat shock before starvation restores survival even in the absence of respiration (Antimycin A treatment), demonstrating that a stress-induced adaptation can bypass the respiratory requirement. This rescue occurs without ATP recovery and relies on de novo protein synthesis. This stress-induced adaptation also rescues quiescent-like biophysical properties of the cytoplasm (increased crowding) that are normally prevented in non-respiring cells, which are thought to be relevant for cell survival. Proteomics reveals that heat shock induces a distinct stress-response proteome enriched in proteostasis factors. A genetic screen reveals that Ras/PKA inhibition and Msn2/4 activation enable this protective reprogramming. Altogether, this highlights the importance and complexity of stress adaptation to quiescence establishment.

      This is an excellent paper in all aspects. I have no major points besides the data accessibility, below.

      We thank this reviewer for this very positive evaluation.

      Main comments. - It would be nice to have the MS data available as Excel files for the community, and uploaded to repositories such as PRIDE. Description of the MS data is a bit expedited to serve the purpose of the paper (clustering to evaluate the similarity of proteomic profiles between conditions, GO term enrichment) so having the full data available might help.

      We agree that the MS data should be accessible. The label-free protein abundances for the reproducibly quantified proteins across all nine conditions (Supplemental Table S4) and the pairwise comparisons shown in Figure 3E (Supplemental Table S5) are now included as supplementary Excel files. The MS data is currently not on PRIDE but we will deposit it there upon publication of our manuscript.

      • Same thing for the SATAY screen. The data is summarized in Fig 4B but I believe that the data should be provided.

      We agree that the SATAY screen results should be accessible as well, and we have now included the data as Supplemental Table S6.

      Minor comments and questions. -I believe that in graphs, the X axis should start at 0 to avoid confusion about the strength of the effect (eg. Fig 2B)

      We thank reviewer 3 for pointing this out, and we have re-evaluated the axis limits of all plots. As suggested, we have adjusted the x-axis in Fig 2B to start at 0 to better highlight the strength of the effect. For our Radius of Confinement and %Confined Trajectories graphs, we believe adjusting the y-axis to start and end at the same values will allow better comparison across figures. However, we chose not to set those y-axes to start at 0, since our measurements lie in a range which is covered by these axes, and these plots would simply include blank space if set to start at 0.

      - I found the use of low-frequency imaging of GEMs to reveal cytoplasmic crowding heterogeneity very interesting. Quiescent cells are known to accumulate many "bodies" as discussed in the text; would any of those co-localize with GEM foci?

      Indeed, the imaging at low frequency has revealed that fluorescently-tagged proteins might become trapped in certain regions of the cytoplasm, allowing their detection at conventional imaging frequencies. It is very likely that a similar effect occurs for other cytoplasmic “bodies”, which become visible not only through protein accumulation in a single body but also through low mobility. We have not performed any colocalization experiment with known “bodies” (such as P-bodies or stress granules). Therefore, we do not know if any stress-induced “bodies” are confined to the same spaces as GEMs. However, we would expect at best an incomplete colocalization based on the observation that glucose starvation-induced “bodies” are generally present in a higher percentage of cells than the GEM foci we observe, i.e. it is unlikely that all “bodies” overlap with a GEM focus. It might be interesting to perform such colocalization experiments in follow-up studies, but we feel that such an analysis would go beyond the current scope of this manuscript.

      Reviewer #3 (Significance (Required)):

      General assessment, advances in the field. This is an excellent study. The key finding of this paper, i.e. that heat shock can compensate for lack of respiration for entry into quiescence, challenges the current views on quiescence establishment. It describes an alternative program that contributes to cell viability upon C source depletion, with details on the proteomic changes occurring in this condition and some of the genetic basis of this pathway. The study is well designed and controlled, and the conclusions are in line with the obtained results, very well discussed, and placed in perspective. Experimentally, the authors combine several approaches, including live-cell single-particle tracking of GEM nanoparticles to quantify cytoplasmic diffusion, FIB-SEM ultrastructural imaging of the cytoplasm to measure macromolecular crowding, proteomics to map stress-induced protein changes, and genome-wide SATAY transposon mutagenesis to identify genes required for survival in respiration-deficient cells. The limitations are:

      - We don't know how this stress program facilitates survival in the absence of restoration of ATP levels. The data suggest that protein homeostasis is involved (chaperones and proteasome up-regulated upon stress, ribosomal and translation-associated proteins down-regulated in the absence of respiration), but the mechanism remains elusive.

      - The relationships between cytoplasmic crowding and quiescence establishment remain correlative. Yet, the authors provide another pathway to favour viability upon quiescence establishment (with HS) whose activation also displays an increased crowding and reduction of cytoplasmic movement, further consolidating this link.

      Both of these points are adequately discussed in the manuscript. None of these points should preclude publication of this study, in my opinion.

      Audience. This study would be of interest to researchers in the field of quiescence, biophysics, proteostasis, stress response, nutrient signaling and yeast biology.

      Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      We thank the reviewers for the generally positive feedback and insightful suggestions. The reviewers found that our study “provides a rich resource of potential E3-sensor interactions and represents a conceptual and technical advance for the field” and that our “key conclusions are convincing and interesting”. The reviewers suggested both editorial changes to improve the narrative of the manuscript and additional experiments to strengthen the conclusions of the study. We agree with both types of suggestions and have modified our manuscript accordingly.

      Reviewer #1 (Evidence, reproducibility and clarity (Required)): 

      The authors present a rational, AlphaFold-based strategy to systematically identify interactions between human nucleic acid sensors and SPRY-containing proteins. Their findings demonstrate that SPRY domains encode substrate-specific recognition patterns that govern immune responses: TRIM25-ZAP acts in antiviral defense and restricts LNP-encapsulated RNA, while Riplet-RIG-I drives IFNB1 production and restricts lipofected RNA. They further dissect residue-level contributions to the ZAP-TRIM25 interface by integrating structural predictions with experimental validation.

      Specific comments.  1. The title of this manuscript appears quite broad given that this study mostly focuses on just TRIM25-ZAP and Riplet-RIG-I pairs. 

      We agree that the original title was broader than the main mechanistic focus of the study. We will therefore revise the title to better reflect that the manuscript primarily dissects SPRY-domain–mediated specificity in the TRIM25-ZAP and Riplet-RIG-I interactions (identified through our AlphaFold-based screening framework), while retaining the broader screening context. Proposed new title: "SPRY domains encode ubiquitin ligase specificity for ZAP and RIG-I"

      In Figure 1b, several predicted interaction scores appear inconsistent with previously reported experimental interactions. For instance, KHNYN has been experimentally validated as a TRIM25-interacting protein, yet its interaction score is notably low in your computational results. Could the authors clarify whether this discrepancy arises because the TRIM25 SPRY domain does not significantly contribute to KHNYN binding? 

      We thank the reviewer for raising this point. To our knowledge, published data only support co-immunoprecipitation of TRIM25 and KHNYN in ZAP-deficient cells (PMID: 31284899), but this does not by itself demonstrate a direct binary interaction, as the association could be mediated by other factors. Consistent with this, our AlphaFold-based screen predicts a low interaction score between KHNYN and TRIM25, suggesting that this may not be a direct protein-protein interaction. Nevertheless, we concede that our approach may have missed interactions that are governed by a small number of interacting residues. We added the following sentences on the limitation of this approach for such interactions to our discussion:

      • “While our screen revealed novel interactions between SPRY domain-containing proteins and innate immune sensors, it is plausible that certain interactions were missed. Interactions that rely on a small number of contacting residues or interactions that may be mediated by a third binding partner are likely to score poorly in our approach. Future optimization of our algorithm will improve the detection of such interactions.”

      In Figure 2c, the authors provide intriguing examples for shared targets by SPRY proteins with quite low homology, and distinct target profiles by nearly identical SPRY domains. However, the underlying mechanisms responsible for these observations are not discussed. 

      This is an important point. At present, we cannot assign a single definitive mechanism for every example, but there are several plausible explanations consistent with our framework. First, our analysis indicates that substrate recognition is often driven by a limited subset of residues at the interaction surface, such that distinct sequences can converge on similar three-dimensional interface chemistry, while small local differences can shift binding preferences. Second, we note that although a large fraction of predicted contacting residues are within SPRY domains, other domains can also contribute to interaction and substrate recognition, which could modulate binding profiles even when SPRY sequences are near-identical. Third, the Pearson's correlation coefficient was calculated using all scores, which may include structures with low confidence scores or low interaction scores.
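
      As a sketch of the comparison underlying this point, the correlation between the interaction-score profiles of two SPRY proteins across a shared sensor panel can be computed as follows; the score vectors are invented for illustration, not taken from our data:

```python
# Hedged sketch: Pearson correlation between the AlphaFold interaction-score
# profiles of two SPRY proteins across the same panel of sensors.
# The score vectors below are illustrative only.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Interaction scores of two SPRY proteins against five sensors:
spry_a = [3.1, 0.4, 2.8, 0.2, 1.0]
spry_b = [2.9, 0.5, 3.0, 0.3, 0.9]
r = pearson(spry_a, spry_b)  # close to 1: near-identical target profiles
```

      Including low-confidence or near-zero scores in such profiles can inflate or dampen the correlation, which is the caveat noted above.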

      In Figure 3e and 3f, the authors state that the Riplet-T25 SPRY chimeric protein showed enhanced AlphaFold-predicted interaction with ZAP, and validated the interaction experimentally. However, AlphaFold also predicted an increased interaction for the T25-Riplet chimera, although this mutant failed to co-precipitate with ZAP. How do the authors reconcile this discrepancy between prediction and experimental outcome?

      The reviewer noticed an important, nuanced result in Fig. 3e. AlphaFold predicts that the TRIM25 chimera containing the Riplet SPRY domain (T25–Riplet) has a higher interaction score with ZAP than Riplet alone (Fig. 3e), yet this chimera is not recovered in ZAP co-immunoprecipitation (Fig. 3f). We reconcile this by emphasising that our framework uses an empirically benchmarked threshold: known SPRY–sensor interactions typically score >2.5, and we therefore adopted >2.5 as the cutoff for “high-confidence” candidate interactions. While the T25–Riplet chimera shows an increased score relative to Riplet, its score remains below this >2.5 cutoff in Fig. 3e (which reports interaction scores of the chimeras against ZAP). Therefore, the model is consistent with the experimental outcome: AlphaFold suggests some degree of interface compatibility, but not at a level we would classify as a robust/predictive interaction under our validated threshold. We clarified this point in the Results section to explicitly note that sub-threshold “increases” should be interpreted cautiously:

      “Using the T25-RipletSPRY chimera instead of the Riplet protein predicted a higher interaction score despite the lack of specific pull-down between this chimera and ZAP; importantly, this interaction score is below our defined threshold (2.5), highlighting the importance of benchmarking predicted scores against known interactions.”

      It would be interesting if the authors could explain why TRIM25 consistently appears as two bands in many of the presented figures.

      We have wondered about this observation as well. Other studies also report a double-band pattern in western blots of TRIM25 (PMID: 17392790, 28060952, 21292167), which is believed to be a product of non-degradative self-ubiquitination of TRIM25, primarily acting on the K117 residue (PMID: 21292167). We will add a brief description of this phenomenon in the figure legend.

      In Figure 4b, the authors show that treatment with a proteasome inhibitor increased RIG-I ligand-induced IFNB1 expression and propose that RIG-I may undergo rapid degradation following its interaction with Riplet. However, the evidence supporting this claim is weak. The authors should demonstrate: (1) that RIG-I is indeed degraded via the proteasome, and (2) whether RIG-I undergoes K48-linked ubiquitination. Mutational analysis of putative ubiquitination sites in RIG-I would help clarify its contribution to the observed IFN responses. 

      This is an important point, and we are currently performing experiments to address these questions. Specifically, we will provide evidence of (1) whether RIG-I is degraded after activation, using a combination of western blotting and pharmacological inhibition of the proteasome/translation machinery; (2) whether RIG-I undergoes K48- or K63-mediated ubiquitination, by performing coIPs of RIG-I in the presence of HA-Ub wildtype or the commonly used HA-Ub K48 and K63 mutants (PMID: 15728840); and (3) whether lysine-to-arginine substitution of key residues impacts RIG-I ubiquitination/degradation.

      Figure 5 c-g: why do the authors show ZAP-L, but not ZAP-S? 

      Both ZAP-S and ZAP-L isoforms contain identical N-terminal domains, which is the region that interacts with TRIM25. We therefore assumed that the interaction between TRIM25 and ZAP-L would be similar to that between TRIM25 and ZAP-S. However, to test this assumption, we will generate equivalent mutations in ZAP-S and perform similar co-immunoprecipitation experiments.

      Reviewer #1 (Significance (Required)): 

      This manuscript starts with the AlphaFold-based screening of interactions between human nucleic acid sensors and SPRY-containing proteins. However, the authors then focused only on TRIM25-ZAP and Riplet-RIG-I, whose interactions have been well demonstrated previously, while other protein interactions were not further explored. Also, the information on the evolutionary aspects of TRIM25, ZAP, Riplet and RIG-I did not lead to clear conclusions. The differential contribution of TRIM25-ZAP and Riplet-RIG-I in LNP- and lipofectamine-transduced RNAs is interesting, although the data shown in Fig. 6 are expected from previous studies and are not so relevant to the other data in this study. Therefore, the study is not well integrated, although its pieces are interesting. This study may attract researchers in innate immunity.

      My expertise is innate immunity and RNA biology. 

      Reviewer #2 (Evidence, reproducibility and clarity (Required)): 

      The paper describes the discovery of unknown E3-RNA sensor interactions from a large scale in silico prediction screen, as well as the clarification of previously described E3-sensor interactions. These findings extend previous work showing ancient relationships between nucleic acid sensors and RING E3s (e.g. PMID: 33373584), which also described the RIPLET-RIG-I pairing identified in the present screen. 

      The interactions focused on are shown to have functional implications for immune signaling pathways, and stability implications for the bound sensor. The argument for the screen is that E3-target interactions are often too transient to detect biochemically. While possibly true, several of the pairings are confirmed by co-IP, with either WT E3 or a catalytically deficient E3 (known elsewhere as a 'substrate trap'). 

      The key conclusions are convincing and interesting; in particular, the conserved interactions between RIPLET and RIG-I, and TRIM25 and ZAP. The hypothesis that the two E3s arose from a common ancestor is intriguing, and the use of chimeras in functional experiments suggests that the length of the coiled-coil domains contributes to substrate targeting. The most interesting observation IMO is that showing that RNA vaccines can be sensed by orthogonal sensor/E3 pathways, depending on transfection method, suggesting that distinct entry routes are surveyed by different sensors. These experiments are well performed as E3 manipulation phenocopies sensor manipulation, supporting that the in silico approach will ID relevant pairings.

      Including the PAE plots for some of the key interactions would be helpful, as a lot of the interaction confidence metrics are hidden in interaction 'scores'. Fig. 1b heatmap is presented as a row max, so it is difficult to calibrate one E3 against another. The raw data from e.g. fig. 1c would be a valuable addition. This would also help orientate future studies predicting similar protein-protein interactions. 

      We agree with the reviewer and we will provide the raw values for the interaction scores and PAE maps as supplementary data to be included in the final publication.

      Figure 1 appears to just use the isolated SPRY domain for screening - were full-length proteins used?

      The data in Figure 1 was generated using full-length proteins, but it will be interesting to test if a similar screen with SPRY domains alone can replicate the predicted interactions. We will repeat this using SPRY domain sequences.

      How many copies of the FL protein were used? TRIM5 employs a low-affinity, high-avidity binding method; do binding patterns change when the valency of either component is altered? The AlphaFold approach perhaps selects for high-affinity binders? I don't expect many more experiments to be done here, but commenting on this would be useful.

      This is a reasonable consideration that we had overlooked. We have included in our discussion a comment on the limitation of this approach in the context of multimeric assemblies:

      “Similarly, the oligomeric nature of some SPRY-containing proteins [22] is likely to impact the correct placement of these domains and, therefore, impact the predicted interaction score. Future optimization of our algorithm will improve the detection of such interactions.”

      The TRIM25 -Riplet PRYSPRY swap experiments in Figure 3 are very informative and powerful. Some more detail on domain boundaries used would be helpful, including AF predictions of what these chimeras look like with respect to their natural counterparts. 

      We agree with the reviewer about the need to explicitly define domain boundaries. We will include as supplemental information a comparison of the AF prediction of these chimeras in relation to the native proteins.

      While AF can place confidence metrics on domain-domain interactions, SPRY containing proteins are themselves often comprised of regions of high structural confidence (e.g. many available PDBs for RINGs, coils and SPRYs) but their relative arrangement within the molecule is unpredictable. Do isolated SPRYs show any better/worse binding to targets? 

      This is a good point as well, and this can be addressed by repeating the AlphaFold screen using only SPRY domain proteins rather than full-length protein (as described above).

      Technically, fig. 1f does not show that TRIM58 destabilises OAS1, as there is no condition with OAS1 alone. Perhaps alter the text to reflect this or repeat with the necessary control. The direction of the text is fine, as Fig. 1g provides a striking result, but 1f needs attention. 

      The reviewer raises an important consideration. To address this, we will repeat the experiment including an OAS1-alone condition, as suggested.

      Fig. 2c - for clarity, please specify the meaning of the connecting lines between the bait 'hits' in the legend. What does the correlation coefficient relate to exactly? % similarity, is this across the whole molecule, or the PRYSPRY (presumably the latter would be a more useful comparison). And it is well established that single variations in SPRY variable loops can toggle binding, so this could be better referenced in the text. It would also be helpful to see e.g. dissimilar PRYSPRYs binding a common target, as PAE plots in the supplementary. Do any shared motifs occur at the variable loops between dissimilar SPRY molecules? 

      We agree that this figure could be clearer. The connecting lines in Fig. 2c link protein-protein predictions that share a common sensor, e.g. a line connects the interaction score of ASH2L-MDA5 to that of TRIM51-MDA5. We use the % similarity of the SPRY domain alone, not of the whole molecule. We have modified the figure legend to clarify this point, and we include the PAE maps as supplementary information, as requested.
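
      For readers less familiar with these metrics, the sketch below illustrates the kind of quantities being discussed: a toy inter-chain "interaction score" derived from a PAE matrix, and a naive % identity between two pre-aligned SPRY sequences. Both functions are illustrative assumptions for this response, not the scoring function or similarity measure actually used in the manuscript.

```python
# Illustrative only: toy versions of a PAE-based interface score and a
# naive sequence-identity measure. Not the manuscript's actual metrics.

def mean_interchain_pae(pae, len_a):
    """Average predicted aligned error (PAE, in Angstroms) over residue
    pairs that sit on different chains; lower values suggest a more
    confident predicted interface."""
    n = len(pae)
    vals = [pae[i][j]
            for i in range(n) for j in range(n)
            if (i < len_a) != (j < len_a)]  # pairs spanning the two chains
    return sum(vals) / len(vals)

def percent_identity(seq_a, seq_b):
    """Naive % identity over an ungapped alignment of equal length."""
    assert len(seq_a) == len(seq_b), "sequences must be pre-aligned"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy 4-residue complex: chain A = residues 0-1, chain B = residues 2-3.
pae = [
    [0, 2, 12, 14],
    [2, 0, 10, 12],
    [12, 10, 0, 2],
    [14, 12, 2, 0],
]
print(mean_interchain_pae(pae, len_a=2))        # mean over inter-chain entries
print(percent_identity("ACDEFG", "ACDEYG"))     # 5 of 6 positions match
```

      A real pipeline would read the PAE matrix from the AlphaFold output and align sequences with a proper alignment tool; the point here is only the distinction between whole-molecule and SPRY-domain-only comparisons.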

      Fig. 2i - Bat RIG-I binds both TRIM25 and Riplet? This is in contrast to the predicted directionality in 2h? 

      The reviewer astutely noted that, in Fig. 2i, pulling down bat RIG-I co-immunoprecipitated both bat Riplet and bat TRIM25, while AlphaFold predictions only suggest a Riplet-RIG-I interaction. However, while bat Riplet and bat TRIM25 are expressed at comparable levels in the input sample, bat Riplet was far more enriched in RIG-I pulldowns than bat TRIM25. Our interpretation of these data is that the bat Riplet-RIG-I interaction is indeed stronger than the TRIM25-RIG-I interaction.

      Fig. 3a-b, Sup Fig. 3c,d - IFNB1 transcript normalised to 3p-hRNA transfection in control NTC cells - the presentation chosen obscures the baseline IFNB1 levels in the different siRNA transfections. What is the fold induction of IFNB1 in the different cell lines? 

      We will include the fold induction in each cell line as supplementary information.
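
      For context, fold induction from qPCR data is commonly computed with the 2^-ΔΔCt method; the sketch below assumes that convention (the authors' exact normalization pipeline is not stated here, so the function names and example Ct values are illustrative).

```python
def fold_induction(ct_ifnb1_stim, ct_ref_stim, ct_ifnb1_mock, ct_ref_mock):
    """Relative IFNB1 induction via the standard 2^-ddCt method:
    dCt  = Ct(IFNB1) - Ct(reference gene), computed per condition;
    ddCt = dCt(stimulated) - dCt(mock);
    fold change = 2 ** (-ddCt)."""
    ddct = (ct_ifnb1_stim - ct_ref_stim) - (ct_ifnb1_mock - ct_ref_mock)
    return 2.0 ** (-ddct)

# Example: strong induction after 3p-hRNA transfection relative to mock.
print(fold_induction(20.0, 15.0, 30.0, 15.0))  # 2**10 = 1024-fold
```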

      Fig. 3g - RLUs of EV-A71 is only tested in TRIM25 KO cells transfected with the Riplet T25 chimera. The full panel of cDNAs (parental E3s and the inverse 25-riplet swap) should be tested in parallel to confirm the effect is specific to TRIM25 PRYSPRY. 

      This is a great suggestion that will help clarify the role of the different domains of TRIM25 in its antiviral activity. We are currently generating cell lines stably expressing these constructs and will perform the suggested experiment.

      Fig. 4b - time point of 3p-hRNA transfection? Y-axis label suggested normalisation to NTC - incorrect label? What is the effect of bortezomib on IFNB1 mRNA in mock treated cells? 

      We thank the reviewer for spotting this typo; we have now corrected the axis label. Cellular mRNA was harvested 8 h post-transfection. Bortezomib treatment slightly reduced the background expression of IFNB1 mRNA, but this signal is so close to the detection limit that it is difficult to draw conclusions. Nevertheless, the addition of bortezomib did not increase IFNB1 mRNA expression in the absence of a stimulus.

      Fig. 4g - these experiments would benefit from an untransfected control cell to clarify how cDNA expression impacts sensor stability. 

      We agree with the reviewer and we will include an untransfected control.

      There seems to be an inverse correlation between sensor degradation and signaling output - is that the summary of Fig. 4? On the one hand, sensor degradation attenuates functional output (Fig. 4b), and the E3 that degrades the sensor is required for sensor function; on the other hand, changing coil length in the E3 disables sensor degradation (Fig. 4g) but enhances the signaling response (Fig. 3j). Do the chimeras of panels 4g, h influence IFNB1 expression in the assay from Fig. 3j? This experiment would marry RIG-I expression with signal output.

      This is an interesting experiment. We are in the process of generating a TRIM25-/- Riplet-/- cell line, which we will use to reconstitute with the chimeras mentioned above and perform the requested experiment.

      The data is generally clear. To facilitate their interpretation and for clarity, Western blots require size markers and Co-IPs should indicate the flag-/ha-epitope tags. Would make fig. 2 i-j much clearer, particularly given apparent co-migration of IgG (heavy chain?) and riplet, and the lack of control IPs. 

      We agree that contextual markings will improve the interpretation of these results. We will add size markers to the western blots in Fig. 2 to improve clarity.

      The figure legends could provide more detail. 

      We will add additional experimental details (such as time points) to the figure legends.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #2

      Evidence, reproducibility and clarity

      The paper describes the discovery of unknown E3-RNA sensor interactions from a large scale in silico prediction screen, as well as the clarification of previously described E3-sensor interactions. These findings extend previous work showing ancient relationships between nucleic acid sensors and RING E3s (e.g. PMID: 33373584), which also described the RIPLET-RIG-I pairing identified in the present screen.

      The interactions focused on are shown to have functional implications for immune signaling pathways, and stability implications for the bound sensor. The argument for the screen is that E3-target interactions are often too transient to detect biochemically. While possibly true, several of the pairings are confirmed by co-IP, with either WT E3 or a catalytically deficient E3 (known elsewhere as a 'substrate trap').

      The key conclusions are convincing and interesting; in particular, the conserved interactions between RIPLET and RIG-I, and TRIM25 and ZAP. The hypothesis that the two E3s arose from a common ancestor is intriguing, and the use of chimeras in functional experiments suggest that the length of the coiled coil domains contributes to substrate targeting. The most interesting observation IMO is that showing that RNA vaccines can be sensed by orthogonal sensor/E3 pathways, depending on transfection method, suggesting that distinct entry routes are surveyed by different sensors. These experiments are well performed as E3 manipulation phenocopies sensor manipulation, supporting that the in silico approach will ID relevant pairings.

      Including the PAE plots for some of the key interactions would be helpful, as a lot of the interaction confidence metrics are hidden in interaction 'scores'. Fig. 1b heatmap is presented as a row max, so it is difficult to calibrate one E3 against another. The raw data from e.g. fig. 1c would be a valuable addition. This would also help orientate future studies predicting similar protein-protein interactions.

      Figure 1 appears to just use the isolated SPRY domain for screening - were full-length proteins used? How many copies of the FL protein were used? TRIM5 employs a low affinity, high avidity binding method; do binding patterns change when the valency of either component is altered? The Alphafold approach perhaps selects for high affinity binders? I don't expect many more experiments to be done here, but commenting on this would be useful.

      The TRIM25 -Riplet PRYSPRY swap experiments in Figure 3 are very informative and powerful. Some more detail on domain boundaries used would be helpful, including AF predictions of what these chimeras look like with respect to their natural counterparts.

      While AF can place confidence metrics on domain-domain interactions, SPRY containing proteins are themselves often comprised of regions of high structural confidence (e.g. many available PDBs for RINGs, coils and SPRYs) but their relative arrangement within the molecule is unpredictable. Do isolated SPRYs show any better/worse binding to targets?

      Technically, fig. 1f does not show that TRIM58 destabilises OAS1, as there is no condition with OAS1 alone. Perhaps alter the text to reflect this or repeat with the necessary control. The direction of the text is fine, as Fig. 1g provides a striking result, but 1f needs attention.

      Fig. 2c - for clarity, please specify the meaning of the connecting lines between the bait 'hits' in the legend. What does the correlation coefficient relate to exactly? % similarity, is this across the whole molecule, or the PRYSPRY (presumably the latter would be a more useful comparison). And it is well established that single variations in SPRY variable loops can toggle binding, so this could be better referenced in the text. It would also be helpful to see e.g. dissimilar PRYSPRYs binding a common target, as PAE plots in the supplementary. Do any shared motifs occur at the variable loops between dissimilar SPRY molecules?

      Fig. 2i - Bat RIG-I binds both TRIM25 and Riplet? This is in contrast to the predicted directionality in 2h?

      Fig. 3a-b, Sup Fig. 3c,d - IFNB1 transcript normalised to 3p-hRNA transfection in control NTC cells - the presentation chosen obscures the baseline IFNB1 levels in the different siRNA transfections. What is the fold induction of IFNB1 in the different cell lines?

      Fig. 3g - RLUs of EV-A71 is only tested in TRIM25 KO cells transfected with the Riplet T25 chimera. The full panel of cDNAs (parental E3s and the inverse 25-riplet swap) should be tested in parallel to confirm the effect is specific to TRIM25 PRYSPRY.

      Fig. 4b - time point of 3p-hRNA transfection? Y-axis label suggested normalisation to NTC - incorrect label? What is the effect of bortezomib on IFNB1 mRNA in mock treated cells?

      Fig. 4g - these experiments would benefit from an untransfected control cell to clarify how cDNA expression impacts sensor stability.

      There seems to be an inverse correlation between sensor degradation and signaling output - is that the summary of Fig. 4? On the one hand, sensor degradation attenuates functional output (Fig. 4b), and the E3 that degrades the sensor is required for sensor function; on the other hand, changing coil length in the E3 disables sensor degradation (Fig. 4g) but enhances the signaling response (Fig. 3j). Do the chimeras of panels 4g, h influence IFNB1 expression in the assay from Fig. 3j? This experiment would marry RIG-I expression with signal output.

      The data is generally clear. To facilitate their interpretation and for clarity, Western blots require size markers and Co-IPs should indicate the flag-/ha-epitope tags. Would make fig. 2 i-j much clearer, particularly given apparent co-migration of IgG (heavy chain?) and riplet, and the lack of control IPs.

      The figure legends could provide more detail.

      Significance

      The paper provides a rich resource of potential E3-sensor interactions and represents a conceptual and technical advance for the field. The authors take a novel approach to identify these pairings. Several pairings are validated in CoIPs, and two pairings (T25-ZAP, RIPLET-RIG-I) are studied in detail. Many E3s - including the TRIM proteins which comprise the bulk of E3s studied here - are purported to regulate key nucleic acid sensors in the literature, but the large-scale approach taken here provides evidence that the pairings are really quite specific. The findings also support prior work showing that the PRYSPRY domain (here called the SPRY) is a functionally plastic module that, through variable loops, can bind a range of different protein substrates.

      The paper will be most interesting to the innate immune field, those working on nucleic acid sensing, and those looking at innate responses to RNA vaccines.

      Regulation of E3 ubiquitin ligases, viral RNA sensing

    1. 5:08 "they know exactly about the problem, and those are the countries in which the fundamentalist crazies don't exist, because they don't want them there; on this topic they are so far ahead of us."<br /> Here in Europe a negative selection is taking place: we bring in crap from all over the world, and we give away gold to the whole world.

    1. Synthesis: Explicit Behavior Management in Schools

      Executive Summary

      This document synthesizes the key takeaways from the November 20, 2024 webinar organized by the CARDIE CNR team of the Académie de Paris.

      The core topic is explicit behavior management, a pedagogical approach that moves away from the traditional punitive model in favor of proactively teaching expected behaviors.

      Highlights include:

      Proven effectiveness: The field report from the Collège de Staël (Paris 15e) demonstrates a drastic reduction in disciplinary incidents thanks to this method.

      Paradigm inversion: Priority is given to preventive interventions (80% of interactions) and positive reinforcement over sanctions.

      Scientific grounding: Franck Ramus's analysis stresses that punishments are of little long-term effectiveness because they do not teach the replacement behavior.

      Institutional stakes: Managing school climate is becoming an academy-level priority tied to the well-being of students and staff.

      --------------------------------------------------------------------------------

      1. Field Report: The "Innovation éduca" Project (Collège de Staël)

      The Collège de Staël implemented an explicit behavior management strategy, initially as part of the creation of a Fablab (Makerlab), then extended to the whole school.

      Implementation methodology

      The project was structured around rigorous social and educational engineering:

      1. Training: The leadership team and 14 teachers took training on explicit instruction, notably through the work of Steve Bissonnette (Université TÉLUQ).

      2. Co-construction with students: 370 students took part in defining the rules. Rather than imposing a rulebook, the team had students verbalize the problems and then turned them into positive behaviors.

      3. Visual materialization: Posters were created for each location (playground, library, cafeteria, hallways) using positively phrased sentences and pictograms.

      4. Community involvement: Collaboration with a neighboring elementary school (34 pupils) to foster a sense of belonging and pass on the rules from the earliest age.

      Quantitative Results

      The program's impact is measurable through a significant drop in indicators of school tension:

      | Indicator | Previous year (same period) | Current year |
      | --- | --- | --- |
      | Punishments | 2,900 | 540 |
      | Sanctions | 173 | 18 |
      | Disciplinary hearings | 2 | 0 |

      --------------------------------------------------------------------------------

      2. Theoretical Analysis and Psychological Levers

      The expertise of Franck Ramus (CNRS, ENS, CSEN) sheds light on the underlying behavioral mechanisms.

      The mechanics of behavior

      Behavior is shaped by two factors:

      Antecedents: Elements that precede an action and either encourage or inhibit it.

      Consequences: What immediately follows the behavior. Rewards increase the probability of repetition, while punishments decrease it.

      The limits of the punitive model

      The education system is traditionally centered on sanctions, an approach considered ineffective for several reasons:

      Negative emotions: Punishments generate stress, avoidance, or aggression.

      Habituation: Frequently punished students become desensitized, driving an escalation in severity with no gain in effectiveness.

      No learning: "Punishments do not teach good behaviors." They momentarily stop an act without offering an alternative.

      Positive reinforcement

      The most powerful lever is the compliment-to-reprimand ratio. Research shows a direct correlation: the higher this ratio, the longer students stay focused on their tasks.

      Social rewards: Far from being only material (gifts), the best rewards are social (a smile, verbal praise, encouragement on Pronote).

      Normalization: The goal is to make positive behaviors explicit and rewarding so that they naturally replace disruptive behaviors.

      --------------------------------------------------------------------------------

      3. Practical Strategies for Teaching Behaviors

      Monsieur Chrétien and Franck Ramus identify concrete steps for transforming classroom climate:

      1. Identify the positive opposite: For each disruptive behavior (e.g., "do not insult"), define a positive formulation (e.g., "use my words to respect others").

      2. Explicit instruction: Behavior must be taught like a school subject, including modeling and guided practice.

      3. Breaking down difficulties: For students in great difficulty (e.g., ADHD), do not tackle every problem at once. One behavior can be prioritized (e.g., staying seated) before working on another (e.g., taking turns speaking).

      4. Simulation: As with fire drills, practice the expected behaviors repeatedly to build automatic habits.

      --------------------------------------------------------------------------------

      4. Institutional Perspectives and Well-being

      Nicolas Jury stresses that behavior management is a major demand from teachers in the field, yet one rarely addressed in technical terms during initial training.

      Academy priority: The academic council for fundamental learning now includes a "well-being at school" axis, with behavior management as its first lever.

      Team consistency: The model's effectiveness rests on the commitment of all staff. A shared rule and a consistent approach avoid disparities in treatment that undermine clarity for the student.

      Alliance with families: Although behavior may differ between school and home, informing parents about positive reinforcement methods can foster a beneficial educational convergence.

      --------------------------------------------------------------------------------

      5. Identified Resources

      To explore these concepts further, the experts recommend several resources:

      Steve Bissonnette: Books on explicit instruction and an online course (Université TÉLUQ).

      Franck Ramus: MOOC "La psychologie pour les enseignants" (available on YouTube and on the Magistère platform).

      Alan Kazdin: The book "Éduquer sans s'épuiser" is cited as a major reference for behavior management.

      Academy deliverables: The Académie de Paris booklet on explicit instruction and the forthcoming CNR Cardie publications.

    1. Classroom Cooperation in the Service of Learning and Well-being

      Executive Summary

      This document synthesizes the contributions to the webinar organized by the Cardie of the Académie de Paris on developing cooperation skills.

      Cooperation is identified as a fundamental lever for strengthening student engagement and improving school climate.

      The field reports from the Collège Antoine Quoisevaux, combined with the expert analysis of Laurent Renault, stress that cooperation must not be a mere "nice-to-have" but a structured pedagogical modality.

      Key points include the crucial distinction between cooperation (aiming at individual progress through exchange) and collaboration (aiming at collective performance), the importance of reciprocal helping to avoid the biases of the tutor effect, and the need to ritualize bodies such as the student council in order to turn conflicts into learning opportunities.

      Although time-consuming, this approach fosters motivation and the development of essential psychosocial skills.

      --------------------------------------------------------------------------------

      I. Field Report: The Collège Antoine Quoisevaux Project

      Launched four years ago by Marion Saag (mathematics) and Antoine Marteille (French), the project involves 5ème classes in a multi-catchment school in the 18th arrondissement of Paris, characterized by great social diversity.

      1. Origins and Methodology

      The project evolved from an empirical practice into an approach grounded in research and training (notably the work of Laurent Renault and resources from the lycée Jacques Feyder).

      Objective: Combine formal settings (student councils) and informal ones (cooperative learning during lessons).

      Winning students over: Cooperation is not innate. Activities "detached" from subject content (e.g., building the tallest marshmallow tower, a knowledge marketplace) are organized from the start of the year to learn how to work in groups.

      Metacognition: Each activity is followed by a debrief on what worked and what did not, prompting students to question the effectiveness of their collective work.

      2. Classroom Work Formats

      Collective work generally comes after a phase of individual reflection ("intellectual warm-up"). The teachers vary the tempo of lessons through:

      Pairs: Notably for closing a lesson (student A explains to student B what they retained).

      Islands: Groups of four students in rooms arranged in an "L" shape to ease circulation.

      The jigsaw classroom and "arpentage" (shared reading): For studying texts.

      Collective autonomy: Spontaneous spatial organization to reconstruct a narrative (e.g., after screening a film).

      --------------------------------------------------------------------------------

      II. The Student Council: Pillar of Classroom Climate

      The student council meets every two weeks. It is a space for expression, conflict regulation, and collective problem-solving.

      1. Roles and Responsibilities

      To ensure democratic, calm proceedings, roles rotate among the students:

      | Role | Function |
      | --- | --- |
      | President | Recalls the rules and formally opens the session. |
      | Deputy | Recalls the decisions made at the previous council. |
      | Secretary | Keeps a written record of exchanges and decisions. |
      | Talk distributor | Uses a talking stick to regulate exchanges. |
      | Speech protector | Ensures a caring, safe setting. |
      | Observer | Analyzes how speaking time is distributed (gender balance, equity). |

      2. Structure and Content of the Council

      The council follows a ritualized agenda based on messages written by the students:

      Thanks and congratulations: Valuing mutual help and self-esteem (e.g., "I thank X for explaining the math to me").

      Problems and concerns: Regulating relations between students (peer mediation) or the pedagogical relationship with teachers.

      Proposals: Outing projects, but also pedagogical requests (e.g., "Do more presentations in History-Geography").

      --------------------------------------------------------------------------------

      III. Conceptual Analysis and Points of Vigilance

      Laurent Renault, an expert in cooperative pedagogy, provides a theoretical perspective to "re-examine the obvious".

      1. Cooperation vs. Collaboration

      It is imperative to distinguish these two modalities to avoid excluding the most fragile students:

      Cooperation (aim: to progress): Exchanging points of view without an obligation to produce immediately (e.g., the student council).

      Collaboration (aim: to perform): Dividing up tasks to produce a result (e.g., a poster). The risk is that only the "designers" learn, while the others carry out menial tasks.

      2. The Tutor Effect and Reciprocity

      Help between students is not automatically beneficial to the one receiving it.

      The helper: Always progresses (memorization, abstraction, recognition).

      The helped: May experience the help as an illusion of understanding and internalize dependence.

      Solution: Guarantee reciprocity of help. Over a given period, every student must occupy the helper position on varied skills (writing, diagrams, etc.).

      3. The Teacher's Stance: "Working with the hood open"

      To innovate is to accept a measure of humility and destabilization.

      Stepping back: In the council, the teacher must not moralize but act as guarantor of the safety of speech.

      Managing the initial "mess": Cooperation can degrade school climate in the short term because it brings latent conflicts to the surface. These conflicts are learning material for "thinking together".

      Treating the student as a valid interlocutor: Building on their feelings and motivation.

      --------------------------------------------------------------------------------

      IV. Stakes and Perspectives

      1. Observed Benefits

      Engagement: Students enjoy coming to school and invest more in the subjects (French/mathematics).

      Psychosocial skills: Work on the three macro-skills defined by Santé publique France.

      Emulation: Harnessing collective motivation without falling into destructive rivalry.

      2. Limits and Challenges

      Time cost: Requires a significant investment to run the councils and follow up on decisions.

      Team isolation: Difficulty extending the project beyond the initial pair of teachers. A third of the timetable is covered, but whole-team consistency would be preferable.

      Spatial layout: The importance of ergonomics (flexible classrooms, L-shaped islands) for easing transitions between individual and collective work.

      3. Conclusion

      Classroom cooperation cannot be improvised. It rests on "guided trial and error" informed by research (Sylvain Connac, Philippe Meirieu) and rigorous organization.

      The ultimate goal is to move from merely "living together" to "thinking together", respecting the balance between the individual (the "I") and the group (the "We").

    1. https://www.youtube.com/watch?v=Ptn8nF_nf98

      Synthèse sur les Compétences Psychosociales (CPS) au Cœur des Apprentissages

      Résumé Exécutif

      Les compétences psychosociales (CPS) — définies comme un ensemble de capacités cognitives, émotionnelles et sociales — s'imposent désormais comme le « troisième pilier » des fondamentaux scolaires, aux côtés de la maîtrise du langage et des mathématiques.

      Ce document de synthèse, basé sur les interventions d'experts et de praticiens, démontre que le développement des CPS n'est pas une simple mission éducative supplémentaire, mais un levier puissant pour la réussite académique, le bien-être individuel et la réduction des inégalités sociales.

      Les recherches scientifiques confirment que les CPS sont des prédicteurs de réussite scolaire aussi puissants que le quotient intellectuel (QI). Les interventions structurées produisent une amélioration moyenne de 11 % des résultats aux épreuves scolaires et génèrent un retour sur investissement social majeur (1 € investi pour 11 € économisés à long terme).

      La mise en œuvre réussie de ces compétences repose sur une approche systémique incluant la formation des enseignants, l'aménagement des espaces, la posture de l'adulte et l'enseignement explicite aux élèves.

      --------------------------------------------------------------------------------

      1. Définition et Typologie des Compétences Psychosociales

      Selon la nomenclature de Santé Publique France, les CPS se divisent en trois catégories interdépendantes. Elles visent à développer la confiance en soi, la motivation et la qualité des interactions entre pairs et avec les adultes.

      Les trois piliers des CPS

      | Catégorie | Compétences clés identifiées | | --- | --- | | Cognitives | Maîtrise de soi, capacité de planification, prise de décision, connaissance de ses forces et faiblesses. | | Émotionnelles | Identification et régulation de ses propres émotions, gestion du stress, développement de l'empathie. | | Sociales/Relationnelles | Communication non-violente (CNV), coopération, résolution de conflits, capacité à écouter et à demander de l'aide. |

      --------------------------------------------------------------------------------

      2. La Valeur Prédictive et Scientifique des CPS

      L'analyse de Thomas Villemontex, chercheur en psychologie, souligne que les CPS sont les compétences les plus prédictrices de l'insertion future de l'individu dans la société, surpassant souvent les savoirs purement disciplinaires.

      Réussite Scolaire : Des méta-analyses portant sur plus de 200 études et 100 000 élèves montrent un lien direct entre CPS et engagement scolaire. Les compétences émotionnelles prédisent particulièrement la réussite en mathématiques, car elles permettent de gérer l'anxiété liée à l'apprentissage.

      Réduction des Inégalités : Les élèves issus de milieux défavorisés présentent statistiquement des CPS plus fragiles. Le travail sur ces compétences en milieu scolaire est donc un outil de justice sociale et d'équité.

      Impact à Long Terme : Une étude menée à Montréal montre que 20 heures d'intervention en maternelle sur la régulation du comportement ont des effets mesurables sur la réussite professionnelle 20 ans plus tard.

      CPS des Enseignants : La capacité d'un enseignant à être empathique, chaleureux et à croire en la réussite de ses élèves est un prédicteur majeur de la progression de la classe sur l'ensemble des disciplines.

      --------------------------------------------------------------------------------

      3. Putting It into Practice: Feedback from the Field

      In Middle School: The Transition to the Cooperative Classroom

      The Pierre Mendès France middle school (Paris) transformed its practices after losing supervisory staff, shifting from a disciplinary focus to a holistic psychosocial approach.

      The Student Council: A ritualized weekly hour in which pupils manage speaking turns and conflict mediation.

      Cooperation in Physical Education (EPS): Using dance to work on empathy. "Empath" pupils must read non-verbal signals of fatigue or vulnerability in their classmates in order to step in at the right moment.

      Rethinking the Space: Redesigning classrooms and meeting rooms to promote well-being and physical communication.

      In Kindergarten: The Adult's Stance and Explicit Teaching

      At the Gustave Rouanet school, the emphasis is on "deconstructing" institutional authoritarianism in favor of explicit, benevolent authority.

      Emotional Validation: The adult validates the emotion ("You have the right to be angry") while setting limits on the behavior ("But you cannot hit").

      Language and Self-Esteem: Using clear messages from age 3. Avoid essentializing the child (do not say "you are mean"; talk about the behavior instead).

      Positive Feedback: Systematically valuing expected behaviors rather than focusing solely on sanctions.

      --------------------------------------------------------------------------------

      4. Intervention Programs and Schemes

      The document identifies several evidence-based programs for structuring the teaching of CPS:

      L'École des Émotions (kindergarten): A program based on children's literature, structured around empathy workshops, bodily well-being, and "emotion circles."

      Vivre Ensemble - Freeforoberry (primary): An adapted Danish program focused on preventing bullying through the learning of CPS and consent (e.g., the back-massage activity in which the child must give their agreement).

      The Empathy Kit (DGESCO): An institutional tool, inspired by recent research, for rolling out classroom sessions.

      --------------------------------------------------------------------------------

      5. Challenges and Perspectives for Assessment

      Assessing CPS remains a complex subject without definitive consensus. Salient points in the current debate include:

      Avoiding Grades: Experts agree that CPS should not be subject to conventional numerical or summative assessment.

      Identifying Fragilities: The goal of assessment should be to identify pupils in emotional or relational difficulty so they can be offered reinforced support.

      Observing Practices: Using observation grids on explicit teaching and interactions to measure school climate.

      --------------------------------------------------------------------------------

      Key Quotes

      "Psychosocial skills are the third pillar of the fundamentals, alongside mastery of language and mathematics." — Stanislas Dehaene

      "Working on pupils' CPS also means working on teachers' CPS. It contributes to my professional well-being." — Charlotte Ninin, teacher

      "Authoritarianism and the pedagogy of fear carry a human, societal, and financial cost over the very long term." — Nicolas Jury, dean of inspectors

      "Every €1 invested in CPS saves society €11 in mental-health costs and broken life trajectories." — Thomas Villemontex, researcher

    1. Author response:

      The following is the authors’ response to the original reviews

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This work addresses a key question in cell signalling: how does the membrane composition affect the behaviour of a membrane signalling protein? Understanding this is important, not just to understand basic biological function but because membrane composition is highly altered in diseases such as cancer and neurodegenerative disease. Although parts of this question have been addressed on fragments of the target membrane protein, EGFR, used here, Srinivasan et al. harness a unique tool, membrane nanodiscs, which allow them to probe full-length EGFR in vitro in great detail with cutting-edge fluorescent tools. They find interesting impacts on EGFR conformation in differently charged and fluid membranes, explaining previously identified signalling phenotypes.

      Strengths:

      The nanodisc system enables full-length EGFR to be studied in vitro and in a membrane with varying lipid and cholesterol concentrations. The authors combine this with single-molecule FRET utilising multiple pairs of fluorophores at different places on the protein to probe different conformational changes in response to EGF binding under different anionic lipid and cholesterol concentrations. They further support their findings using molecular dynamics simulations, which help uncover the full atomistic detail of the conformations they observe.

      Weaknesses:

      Much of the interpretation of the results comes down to a bimodal model of an 'open' and 'closed' state between the intracellular tail of the protein and the membrane. Some of the data looks like a bimodal model is appropriate, but its use is not sufficiently justified (statistically or otherwise) in this work in its current form. The experiments with varying cholesterol in particular appear to suggest an alternate model with longer fluorescent lifetimes. More justification of these interpretations of the central experiment of this work would strengthen the paper.

      We thank the reviewer for highlighting the strengths of the study, including the use of nanodiscs, single-molecule FRET, and MD simulations to probe full-length EGFR in controlled membrane environments.

      We agree that statistical justification is important for interpreting the distributions. To address this, we performed global fits of the data with both two- and three-Gaussian models and evaluated them using the Bayesian Information Criterion (BIC), which balances the model fit with a penalty for additional parameters. The three-Gaussian model gave a substantially lower BIC, indicating statistical preference for the more complex model. However, we also assessed the separability of the Gaussian components using Ashman’s D, which quantifies whether peaks are distinct. This analysis showed that two Gaussians (µ = 2.64 and 3.43 ns) are not separable, implying they represent one broad distribution rather than two states.

      Author response table 1.

      Both the two- and three-Gaussian models include a low-value component (µ = ~1.3 ns), but the apparent improvement of the three-Gaussian model arises only from splitting the central population into two overlapping Gaussians. Thus, while the BIC favors the three-Gaussian model statistically, Ashman’s D demonstrates that the central peak should not be interpreted as bimodal. Therefore, when all the distributions are fit globally, the data are best explained as two Gaussians, one centered at ~1.3 ns and the other at ~2.7 ns, with cholesterol-dependent shifts reflecting changes in the distribution of this population rather than the emergence of a separate state. Finally, we acknowledge that additional conformations may exist, but based on this analysis a bimodal model describes the populations captured in our data and so we limit ourselves to this simplest framework.
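The model-selection logic described above (global two- vs. three-Gaussian fits compared by BIC, followed by an Ashman's D separability check) can be sketched as follows. The lifetime sample and the fitted parameters here are illustrative placeholders, not the study's data; only the component means of 2.64 and 3.43 ns are taken from the text, and the widths are assumed.

```python
import numpy as np
from scipy.stats import norm

def mixture_loglik(x, weights, mus, sigmas):
    """Log-likelihood of data x under a fixed-parameter Gaussian mixture."""
    pdf = sum(w * norm.pdf(x, mu, s) for w, mu, s in zip(weights, mus, sigmas))
    return float(np.sum(np.log(pdf)))

def bic(loglik, n_params, n_obs):
    """Bayesian Information Criterion: lower is better; extra parameters are penalized."""
    return n_params * np.log(n_obs) - 2.0 * loglik

def ashman_d(mu1, s1, mu2, s2):
    """Ashman's D; two Gaussian components with D < 2 are not cleanly separable."""
    return np.sqrt(2.0) * abs(mu1 - mu2) / np.sqrt(s1**2 + s2**2)

# Illustrative donor-lifetime sample (ns), not the study's measurements.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(1.3, 0.2, 400), rng.normal(2.7, 0.5, 600)])

# Hypothetical fitted parameters for the 2- vs 3-Gaussian models
# (in the study these come from global fits to the lifetime histograms).
two = dict(weights=[0.4, 0.6], mus=[1.3, 2.7], sigmas=[0.2, 0.5])
three = dict(weights=[0.4, 0.3, 0.3], mus=[1.3, 2.64, 3.43], sigmas=[0.2, 0.4, 0.4])

bic2 = bic(mixture_loglik(x, **two), n_params=5, n_obs=len(x))
bic3 = bic(mixture_loglik(x, **three), n_params=8, n_obs=len(x))

# Even when BIC prefers three components, Ashman's D on the two central
# components (mu = 2.64 and 3.43 ns) shows whether they are truly distinct.
print(f"BIC(2) = {bic2:.1f}, BIC(3) = {bic3:.1f}, D = {ashman_d(2.64, 0.4, 3.43, 0.4):.2f}")
```

With these assumed widths, D for the two central components falls below the usual separability threshold of 2, matching the reasoning above: the apparent third component is better read as one broad population.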

      We have clarified this in the revised manuscript by adding a section in the Methods (page 26) titled Model Selection and Statistical Analysis, which describes the results of the global two- versus three-Gaussian fits evaluated using BIC and Ashman’s D. Additional details of these analyses are also provided in response to Reviewer #1, Question 8 (Recommendations for the authors).

      Reviewer #2 (Public review):

      Summary:

      Nanodiscs and synthesized EGFR are co-assembled directly in cell-free reactions. Nanodiscs containing membranes with different lipid compositions are obtained by providing liposomes with corresponding lipid mixtures in the reaction. The authors focus on the effects of lipid charge and fluidity on EGFR activity.

      Strengths:

      The authors implement a variety of complementary techniques to analyze data and to verify results. They further provide a new pipeline to study lipid effects on membrane protein function.

      We thank the reviewer for noting the strengths of our approach, particularly the use of complementary techniques and the development of a new pipeline to study lipid effects on membrane protein function.

      Weaknesses:

      Due to the relative novelty of the approach, a number of concerns remain.

      (1) I am a little skeptical about the good correlation of the nanodisc compositions with the liposome compositions. I would rather have expected a kind of clustering of individual lipid types in the liposome membrane, in particular of cholesterol. This should then result in an uneven distribution upon nanodisc assembly, i.e., in a notable variation of lipid composition in the individual nanodiscs. Could this be ruled out by the implemented assays, or can just the overall lipid composition of the complete nanodisc fraction be analyzed?

      We monitored insertion of anionic lipids into nanodiscs by performing zeta potential measurements, which report on surface charge, and cholesterol insertion by Laurdan fluorescence, which reports on membrane order. Both assays provide information at the ensemble level, not single-nanodisc resolution. We clarified this in the Methods section (see below).

      Cholesterol clustering is well documented in ternary systems with saturated lipids and sphingolipids [Veatch, Biophys J., 2003; Risselada, PNAS, 2008]. However, in unsaturated POPC-cholesterol mixtures such as those used here, cholesterol primarily alters bilayer order and large-scale segregation is not typically observed.  The addition of POPS to the POPC-cholesterol mixture perturbs cholesterol-induced ordering, lowering the likelihood of cholesterol-rich domains [Kumar, J. Mol. Graphics Modell., 2021].

      Lipid heterogeneity between nanodiscs would be expected to give rise to heterogeneity in hydrodynamic properties, including potentially broadening the dynamic light scattering (DLS) distributions. However, the full width at half maximum (FWHM) values from the DLS measurements (see Author response table 2) do not indicate a broadening with cholesterol. Statistical testing (Mann-Whitney U test for non-normal data) showed no significant difference between samples with and without cholesterol (p = 0.486; n = 4 per group). While the sample size is small, making firm conclusions challenging, these results suggest that large-scale heterogeneity is unlikely.

      Author response table 2.
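The statistical comparison described above can be reproduced with a few lines of SciPy; the FWHM values below are hypothetical stand-ins (the actual values are those in Author response table 2, which is not reproduced here).

```python
from scipy.stats import mannwhitneyu

# Hypothetical DLS peak-width (FWHM, nm) values, n = 4 nanodisc
# preparations per group; placeholders for the values in the table above.
fwhm_without_chol = [4.1, 4.6, 3.9, 4.4]
fwhm_with_chol = [4.3, 4.8, 4.0, 4.2]

# Two-sided Mann-Whitney U test: nonparametric, appropriate for small,
# non-normal samples. A large p-value means no detectable broadening of
# the nanodisc size distribution upon cholesterol addition.
stat, p_value = mannwhitneyu(fwhm_without_chol, fwhm_with_chol, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```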

      In the case of POPS lipids, clustering of POPS in EGFR embedded nanodiscs is a recognized property of receptor-lipid interactions. Molecular dynamics simulations have shown that POPS, although constituting only 30% of the inner leaflet, accounts for ~50% of the lipids directly contacting EGFR [Arkhipov, Cell, 2013], underscoring that anionic lipids are preferentially recruited to the receptor’s immediate environment.

      For nanodiscs containing cholesterol and anionic lipids, our smFRET experiments were designed to isolate the effect of EGF binding. The nanodisc population is the same in the ± EGF conditions as EGF was introduced just prior to performing sm-FRET experiments, and not during nanodisc assembly. Thus, for a given lipid composition, any observed differences between ligand-free and ligand-bound states reflect conformational changes of EGFR.

      Methods, page 23, “Zeta potential measurements to quantify surface charge of nanodiscs: Data analysis was processed using the instrumental Malvern’s DTS software to obtain the mean zeta-potential value. This ensemble measurement reports the average surface charge of the nanodisc population, verifying incorporation of anionic POPS lipids.”

      Methods, page 23, “Fluorescence measurements with Laurdan to confirm cholesterol insertion into nanodiscs: The excitation spectrum was recorded by collecting the emission at 440 nm and emission spectra was recorded by exciting the sample at 385 nm. Laurdan fluorescence provides an ensemble readout of membrane order and confirms cholesterol incorporation into the nanodisc population. While laurdan does not resolve the composition of individual nanodiscs, prior work has shown that POPC–cholesterol mixtures are miscible without forming cholesterol-rich domains[91,92], thus the observed ordering changes likely reflect the intended input cholesterol content at the ensemble level.”

      (91) Veatch, S. L. & Keller, S. L. Separation of liquid phases in giant vesicles of ternary mixtures of phospholipids and cholesterol. Biophysical journal, 85(5), 3074-3083 (2003).

      (92) Risselada, H. J. & Marrink, S. J. The molecular face of lipid rafts in model membranes. Proceedings of the National Academy of Sciences 105(45), 17367–17372 (2008).
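Laurdan emission spectra such as those described in the Methods quote are commonly summarized by generalized polarization (GP), a standard order metric for this dye. The sketch below is an assumption for illustration: GP is not a quantity the quoted Methods explicitly compute, and the intensities are invented.

```python
def laurdan_gp(i_440, i_490):
    """Laurdan generalized polarization.

    i_440, i_490: emission intensities at 440 nm and 490 nm (385 nm
    excitation). Higher GP indicates a more ordered (e.g., cholesterol-
    enriched) bilayer. Note: GP is a standard summary of Laurdan spectra,
    not a metric the quoted Methods explicitly describe.
    """
    return (i_440 - i_490) / (i_440 + i_490)

# Illustrative intensities: membrane ordering shifts Laurdan emission toward 440 nm.
print(laurdan_gp(800.0, 400.0))  # 0.333...
```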

      (2) Both templates have been added simultaneously, with a 100-fold excess of the EGFR template. Was this the result of optimization? How is the kinetics of protein production? As EGFR is in far excess, a significant precipitation, at least in the early period of the reaction, due to limiting nanodiscs, should be expected. How is the oligomeric form of the inserted EGFR? Have multiple insertions into one nanodisc been observed?

      We thank the reviewer for these insightful questions. Yes, the EGFR:ApoA1∆49 template ratio of 100:1 was empirically determined through optimization experiments now shown in the revised Supplementary Fig. 3. Cell-free reactions were performed across a range of EGFR:ApoA1∆49 template ratios (1:2 to 1:200) and sampled at different time points (2-19 hours). As shown in the gels, EGFR expression increased with higher template ratios and longer reaction times up to ~9 hours, while ApoA1 expression became clearly detectable only after 6 hours. Based on these results, we selected an EGFR:ApoA1∆49 ratio of 100:1 and 8-hour reaction time as the optimal condition, which yielded sufficient full-length EGFR incorporated into nanodiscs for ensemble and single-molecule experiments.

      In cell-free systems, protein yield does not scale directly with DNA template concentration, as translation efficiency is limited by factors such as ribosome availability and co-translational membrane insertion [Hunt, Chem. Rev., 2024; Blackholly, Front. Mol. Biosci., 2022]. Consistent with this, we observed that ApoA1∆49 is produced at higher levels than EGFR despite the lower DNA input (Supplementary Fig. 2b). Providing an excess EGFR template prevents the reaction from becoming limited by scaffold availability and helps compensate for the fact that, as a large multi-domain receptor, EGFR expression can yield truncated as well as full-length products. This strategy ensures that sufficient full-length receptors are available for nanodisc incorporation. We will clarify this in the Methods section (see below).

      We observed little to no visible precipitation under the reported cell-free conditions, likely due to the following reasons: (i) EGFR and ApoA1∆49 are co-expressed in the cell-free reaction, and ApoA1∆49 assembles into nanodiscs concurrently with receptor translation, providing an immediate membrane sink; and (ii) ApoA1∆49 is expressed at high levels, maintaining disc concentrations that keep the reaction in a soluble regime.

      The sample contains donor-labeled EGFR (snap surface 594) together with acceptor-labeled lipids (Cy5-labeled PE doped in the nanodisc). We assess the oligomerization state of EGFR in nanodiscs using single-molecule photobleaching of the donor channel. Snap surface 594 is a benzyl guanine derivative of Atto 594 that reacts with the SNAP tag with near-stoichiometric efficiency [Sun, Chembiochem, 2011]. Most molecules (~75%) exhibited a single photobleaching step, consistent with incorporation of a single EGFR per nanodisc [Srinivasan, Nat. Commun., 2022]. A minority of traces (~15%) showed two photobleaching steps and about ~10% of traces showed three or more photobleaching steps, consistent with occasional multiple insertions. For all smFRET analysis, we restricted the dataset to single-step photobleaching traces, ensuring measurements were performed on monomeric EGFR.

      Methods, page 20, “Production of labeled, full-length EGFR nanodiscs: Briefly, the E. coli slyD lysate, in vitro protein synthesis E. coli reaction buffer, amino acids (−Methionine), Methionine, T7 enzyme, protease inhibitor cocktail (Thermo Fisher Scientific), RNase inhibitor (Roche), and DNA plasmids (20 µg of EGFR and 0.2 µg of ApoA1∆49) were mixed with different lipid mixtures. The DNA template ratio of EGFR:ApoA1∆49 = 100:1 was empirically chosen by testing different ratios on SDS-PAGE gels and selecting the condition that maximized full-length EGFR expression in DMPC lipids (Supplementary Fig. 3).”

      (3) The IMAC purification does not discriminate between EGFR-filled and empty nanodiscs. Does the TEM study give any information about the composition of the particles (empty, EGFR monomers, or EGFR oligomers)? Normalizing the measured fluorescence, i.e., the total amount of solubilized receptor, with the total protein concentration of the samples could give some data on the stoichiometry of EGFR and nanodiscs.

      Negative-stain TEM was performed to confirm nanodisc formation and morphology, but this method does not resolve whether a given disc contains EGFR. To directly assess receptor stoichiometry, we instead relied on single-molecule photobleaching of snap surface 594-labeled EGFR (see response to Point 2). These experiments showed that the majority of nanodiscs contain a single receptor, with a minority containing two receptors. For all smFRET analyses, we restricted data to single-step photobleaching traces, ensuring measurements were performed on monomeric EGFR.

      We did not normalize EGFR fluorescence to total protein concentration because the bulk protein fraction after IMAC purification includes both receptor-loaded and empty nanodiscs. The latter contribute to ApoA1∆49 mass but do not contain receptors and including them would underestimate receptor occupancy. Importantly, the presence of empty nanodiscs does not affect our measurements as photobleaching and single-molecule FRET analyses selectively report only on receptor-containing nanodiscs. This clarification has been added to the Methods.

      Methods, page 26, “Fluorescence Spectroscopy: Traces with a single photobleaching step for the donor and acceptor were considered for further analysis. Regions of constant intensity in the traces were identified by a change-point algorithm[95]. Donor traces were assigned as FRET levels until acceptor photobleaching. The presence of empty nanodiscs does not influence these measurements, as photobleaching and single-molecule FRET analyses selectively report on receptor-containing nanodiscs.”

      (4) The authors generally assume a 100% functional folding of EGFR in all analyzed environments. While this could be the case, with some other membrane proteins, it was shown that only a fraction of the nanodisc solubilized particles are in functional conformation. Furthermore, the percentage of solubilized and folded membrane protein may change with the membrane composition of the supplied nanodiscs, while non-charged lipids mostly gave rather poor sample quality. The authors normalize the ATP binding to the total amount of detectable EGFR, and variations are interpreted as suppression of activity. Would the presence of unfolded EGFR fractions in some samples with no access to ATP binding be an alternative interpretation?

      We agree that not all nanodisc-embedded EGFR molecules may be fully functional and that the fraction of folded protein could vary with lipid composition. In our ATP-binding assay, EGFR detection relies on the C-terminal SNAP-tag fused to an intrinsically disordered region. Successful labeling requires that this segment be translated, accessible, and folded sufficiently to accommodate the SNAP reaction, which imposes an additional requirement compared to the rigid, structured kinase domain where ATP binds. Misfolded or truncated EGFR molecules would therefore likely fail to label at the C-terminus. These factors strongly imply that our assay predominantly reports on receptor molecules that are intact and well folded.

      Additionally, our molecular dynamics simulations at 0% and 30% POPS support the experimental ATP-binding measurements (Fig. 2c, d). This consistency between both the experimental and simulated evidence, including at 0% POPS where reduced receptor folding might be expected, suggests that the observed lipid-dependent changes are more likely due to modulation of the functional receptor rather than receptor misfolding. We have clarified these points by adding the following

      Results, page 7, “Role of anionic lipids in EGFR kinase activity: In the presence of EGF, increasing the anionic lipid content decreased the number of contacts from 71.8 ± 1.8 to 67.8 ± 2.4, indicating increased accessibility, again in line with the experimental findings. Because detection of EGFR relies on labeling at the C-terminus and ATP binding requires an intact kinase domain, the ATP-binding assay reports on receptors that are properly folded and competent for nucleotide binding. The consistency between experimental results and MD simulations suggests that the observed lipid-dependent changes are more likely due to modulation of functional EGFR than to artifacts from misfolding.”

      Reviewer #1 (Recommendations for the authors):

      The experimental program presented here is excellent, and the results are highly interesting. My enthusiasm is dampened by the presentation in places which is confusing, especially Figure 3, which contains so many of the results. I also have some reservations about the bimodal interpretation of the lifetime data in Figure 3.

      We thank the reviewer for their positive assessment of our experimental approach and results. In the revised version, we have improved figure organization and readability by adding explicit labels for lipid composition and EGF presence/absence in all lifetime distributions, moving key supplementary tables into main text, and reorganizing the supplementary figures as Extended Data Figures following eLife’s format. Figures and tables now appear in the order in which they are referenced in the text to further improve readability.

      Regarding the bimodal interpretation of the lifetime distribution, we have performed global fits of the data with both two- and three-Gaussian models and evaluated them using the Bayesian Information Criterion (BIC) and Ashman’s D analysis, which supported the bimodal interpretation. Details of this analysis are provided in our response to comment (8) below and included in the manuscript.

      Specific comments below:

      (1) Abstract -"Identifying and investigating this contribution have been challenging owing to the complex composition of the plasma membrane" should be "has".

      We have corrected this error in the revised manuscript.

      (2) Results - p4 - some explanation of what POPC/POPS are would be helpful.

      We have added the text below discussing POPC and POPS.

      Results, page 4, “POPC is a zwitterionic phospholipid forming neutral membranes, whereas POPS carries a net negative charge and provides anionic character to the bilayer[56]. Both PC and PS lipids are common constituents of mammalian plasma membranes, with PC enriched in the outer leaflet and PS in the inner leaflet[22].”

      (22) Lorent, J. H., Levental, K. R., Ganesan, L., Rivera-Longsworth, G., Sezgin, E., Doktorova, M., Lyman, E. & Levental, I. Plasma membranes are asymmetric in lipid unsaturation, packing and protein shape. Nature Chemical Biology 16, 644–652 (2020).

      (56) Her, C., Filoti, D. I., McLean, M. A., Sligar, S. G., Ross, J. A., Steele, H. & Laue, T. M. The charge properties of phospholipid nanodiscs. Biophysical journal 111(5), 989–998 (2016).

      (3) Figure 2b - it would be easier to compare if these were plotted on top of each other. Are we at saturating ATP binding concentration or below it? Also, please put a key to say purple - absent and orange +EGF on the figure. I am also confused as to why, with no EGF, ATP binding is high with 0% POPS, but low when EGF is present, but that then reverses with physiological lipid content.

      While we agree that a direct comparison would be easier, the ATP-binding experiments for the ± EGF conditions were actually performed independently on separate SDS-PAGE gels, which unfortunately precludes such a comparison. We have added a color key to clarify the -EGF and +EGF datasets.

      The experiments were carried out at 1 µM of the fluorescently labeled ATP analogue (atto647N-γ ATP). Reported kinetic measurements for the isolated EGFR kinase domain indicate a K<sub>m</sub> of 5.2 µM, suggesting that our experimental concentration is below, but close to, the saturating range, ensuring sensitivity to changes in accessibility of the binding site rather than saturating all available receptors.

      We have revised the manuscript to clarify these details by including the following text:

      Results, page 6, “To investigate how the membrane composition impacts accessibility, we measured ATP binding levels for EGFR in membranes with different anionic lipid content. 1 µM of fluorescently-labeled ATP analogue, atto647N-γ ATP, which binds irreversibly to the active site, was added to samples of EGFR nanodiscs with 0%, 15%, 30% or 60% anionic lipid content in the absence or presence of EGF.”

      Methods, page 24, “ATP binding experiments: Full-length EGFR in different lipid environments was prepared using cell-free expression as described above. 1 μM of snap surface 488 (New England Biolabs) and atto647N-γ ATP (Jena Bioscience) were added after cell-free expression and incubated at 30 °C, 300 rpm for 60 minutes. 1 μM of atto647N-γ ATP was used, corresponding to a concentration near the reported K<sub>m</sub> of 5.2 µM for ATP binding to the isolated EGFR kinase domain[93], ensuring sensitivity to lipid-dependent changes in ATP accessibility.”

      (ii) Nucleotide binding is suppressed under basal conditions, likely to ensure that the catalytic activity is promoted only upon EGF stimulation.

      The molecular dynamics simulations at 0% and 30% POPS further support this interpretation, showing that anionic lipids modulate the accessibility of the ATP-binding site in a manner consistent with experimental trends (Fig. 2c and 2d).

      We have clarified these points in the main text with the following additions:

      Results, page 6, “In the presence of EGF, ATP binding overall increased with anionic lipid content, with the highest levels observed in 60% POPS bilayers. In the neutral bilayer, ligand seemed to suppress ATP binding, indicating that anionic lipids are required for the regulated activation of EGFR.”

      Results, page 7, “In the absence of EGF, increasing the anionic lipid content from 0% POPS to 30% POPS increased the number of ATP-lipid contacts from 58.6 ± 0.7 to 74.4 ± 1.2, indicating reduced accessibility, consistent with the experimental results and suggesting anionic lipids are required for ligand-induced EGFR activity.”

      (93) Yun, C. H., Mengwasser, K. E., Toms, A. V., Woo, M. S., Greulich, H., Wong, K. K., Meyerson,M. & Eck, M.J. The T790M mutation in EGFR kinase causes drug resistance by increasing the affinity for ATP. PNAS, 105(6), 2070–2075 (2008).

      (4) Figure 2d - how was the 16A distance arrived at?

      We thank the reviewer for pointing this out. The 16 Å cutoff was chosen based on the physical dimensions of the ATP analogue used in the experiments. Specifically, the largest radius of the atto647N-γ ATP molecule is ~16.9 Å, which defines the maximum distance at which lipid atoms could sterically obstruct access of ATP to the binding pocket. Accordingly, in the simulations, contacts were defined as pairs of coarse-grained atoms between lipid molecules and the residues forming the ATP-binding site (residues 694-703, 719, 766-769, 772-773, 817, 820, and 831) separated by less than 16 Å.

      We have rewritten the rationale for selecting the 16 Å cutoff in the Methods section to improve clarity.

      Methods, page 28, “Coarse-grained, Explicit-solvent Simulations with the MARTINI Force Field: We analyzed our simulations using WHAM[108,109] to reweight the umbrella biases and compute the average values of various metrics introduced in this manuscript. Specifically, we calculated the distance between Residue 721 and Residue 1186 (EGFR C-terminus) of the protein. To quantify the accessibility of the ATP-binding site, we calculated the number of contacts between lipid molecules and the residues forming the ATP-binding pocket (residues 694-703, 719, 766-769, 772-773, 817, 820, and 831)[110]. Close contact between the bilayer and these residues would sterically hinder ATP binding; thus, the contact number serves as a proxy for ATP-site accessibility. The cutoff distance for defining a contact was set to 16 Å, corresponding to the largest molecular radius of the fluorescent ATP analogue (atto647N-γ ATP, 16.96 Å[111]). Accordingly, we defined a contact as a pair of coarse-grained atoms, one from the lipid membrane and one from the ATP binding site, within a mutual distance of less than 16 Å.”
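The contact definition quoted above amounts to counting bead pairs within the 16 Å cutoff. A minimal sketch of that computation follows; the residue list is taken from the Methods, while the coordinates and the function name are toy placeholders, not the actual MARTINI trajectory analysis.

```python
import numpy as np

# ATP-binding-site residues listed in the Methods.
ATP_SITE = list(range(694, 704)) + [719] + list(range(766, 770)) + [772, 773, 817, 820, 831]

def count_contacts(site_xyz, lipid_xyz, cutoff=16.0):
    """Count (site bead, lipid bead) pairs closer than `cutoff` Å.

    site_xyz, lipid_xyz: (N, 3) arrays of coarse-grained bead coordinates.
    In the actual analysis these would come from trajectory frames; the
    arrays below are toy placeholders.
    """
    diff = site_xyz[:, None, :] - lipid_xyz[None, :, :]   # (N_site, N_lipid, 3)
    dist = np.linalg.norm(diff, axis=-1)                  # pairwise distance matrix
    return int(np.count_nonzero(dist < cutoff))

# Toy check: two site beads, three lipid beads (coordinates in Å).
site = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
lipid = np.array([[10.0, 0.0, 0.0], [30.0, 0.0, 0.0], [0.0, 12.0, 0.0]])
print(count_contacts(site, lipid))  # 4 pairs fall within the 16 Å cutoff
```

A higher contact count means the bilayer sits closer to the ATP pocket, i.e., reduced ATP accessibility, which is the proxy the Methods describe.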

      (5) Figure 2e-h - I think a bar chart/violin plot/jitter plot would make it easier to compare the peak values. The statistics in the table should just be quoted in the text as value +/- error from the 95% confidence interval. The way it is written currently is confusing, as it implies that there is no conformational change with the addition of EGF in neutral lipids, but there is ~0.4nm one from the table. I don't understand what you mean by "The larger conformational response of these important domains suggests that the intracellular conformation may play a role in downstream signaling steps, such as binding of adaptor proteins"?

      We thank the reviewer for these suggestions. For the smFRET lifetime distributions (Figure 2j, k; previously Figure 2e, f), we have now included jitter plots of the donor lifetimes in the Supplementary Figure 11 to facilitate direct visual comparison of the median and distribution widths for each lipid composition and ±EGF conditions. The distance distributions for the ATP to C-terminus in Figure 2e, f (previously Figure 2g, h) were obtained from umbrella-sampling simulations that calculate free-energy profiles rather than raw, unbiased distance values. Because the sampling is guided by biasing potentials, individual distance values cannot be used to construct violin or jitter plots. We therefore present the simulation data only as probability density distributions, which best reflect the equilibrium distributions derived from them.

      We have also revised the text to report the median ± 95% confidence interval, improving clarity and consistency with the statistical table.

      Results, page 9: “In the neutral bilayer (0% POPS), the distribution in the absence of EGF peaks at 8.1 nm (95% CI: 8.0–8.2 nm) and in the presence of EGF at 8.6 nm (95% CI: 8.5–8.7 nm) (Table 1, Supplementary Table 1). In the physiological regime of 30% POPS nanodiscs, the peak of the distance distribution shifts from 9.1 nm (95% CI: 8.9–9.2 nm) in the absence of EGF to 11.6 nm (95% CI: 11.1–12.6 nm) in the presence of EGF (Table 1, Supplementary Table 1), which is a larger EGF-induced conformational response than in neutral lipids.”
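Converting donor lifetimes to donor-acceptor distances, as underlies the nanometer values quoted here, follows the standard FRET relations. The sketch below states those relations; the Förster radius used in the example is an assumed round number, not the study's calibrated value for its dye pair.

```python
def fret_distance(tau_da, tau_d, r0):
    """Donor-acceptor distance from donor fluorescence lifetimes.

    Standard FRET relations: E = 1 - tau_DA / tau_D and
    r = R0 * (1/E - 1)**(1/6).
    tau_da: donor lifetime with acceptor present (ns); tau_d: donor-only
    lifetime (ns); r0: Förster radius of the dye pair (assumed here,
    not the study's calibration).
    """
    efficiency = 1.0 - tau_da / tau_d
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

# Sanity check with an assumed R0 of 6.0 nm: at 50% efficiency, r = R0.
print(fret_distance(tau_da=2.0, tau_d=4.0, r0=6.0))  # 6.0
```

Because distance depends on efficiency through a sixth root, small lifetime shifts near R0 translate into the sub-nanometer distance changes reported for the neutral bilayer, while larger shifts give the multi-nanometer response seen at 30% POPS.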

      Finally, we have rephrased the sentence in question for clarity. The revised text now reads:

      Results, page 9: “The larger conformational response observed in the presence of anionic lipids suggests that these lipids enhance the responsiveness of the intracellular domains to EGF, potentially ensuring interactions between C-terminal sites and adaptor proteins during downstream signaling.”

      (6) "r, highlighting that the charged lipids can enhance the conformational response even for protein regions far away from the plasma membrane" - is it not that the neutral membrane is just very weird and not physiological that EGFR and other proteins don't function properly?

      We agree with the reviewer that completely neutral (0% POPS) membranes are not physiological and likely do not support the native organization or activity of EGFR. We have revised the text to clarify that the 30% POPS condition represents a more native-like lipid environment that restores or stabilizes the expected conformational response, rather than "enhancing" it. The revised sentence now reads:

      Results, page 10: “Both experimental and computational results show a larger EGF-induced conformational change in the partially anionic bilayer, consistent with the notion that a partially anionic lipid bilayer provides a more native environment that supports proper receptor activation, compared to the non-physiological neutral membrane.”

      (7) "snap surface 594 on the C-terminal tail as the donor and the fluorescently-labeled lipid (Cy5) as the acceptor (Supplementary Fig. 2, 11)." Why not refer to Figure 3a here to make it easier to read?

      We have added the reference to Figure 3a, and we thank the Reviewer for the suggestion.

      (8) Figure 3 - the bimodality in many of these plots is dubious. It's very clear in some, i.e. 0% POPS +EGF, but not others. Can anything be done to justify bimodality better?

      We agree that statistical justification is important for interpreting lifetime distributions. To address this, we performed global fits of the data with both two- and three-Gaussian models and evaluated them using the Bayesian Information Criterion (BIC), which balances the model fit with a penalty for additional parameters. The three-Gaussian model gave a substantially lower BIC, indicating a statistical preference for the more complex model. However, we also assessed the separability of the Gaussian components using Ashman’s D, which quantifies whether peaks are distinct. This analysis showed that two of the Gaussians are not separable, implying that they represent one broad distribution rather than two discrete states. Therefore, when all the distributions are fit globally, the data are best described by two Gaussians, one centered at ~1.3 ns and the other at ~2.7 ns, with cholesterol-dependent shifts reflecting changes in the distribution of this population rather than the emergence of a separate state. We have justified our choice of model in the revised manuscript by incorporating the results of the global two- versus three-Gaussian fits together with the BIC and Ashman’s D analyses.

      Methods, page 27: “Model Selection and Statistical Analysis

      Global fitting of lifetime distributions was performed across all experimental conditions using maximum likelihood estimation. Both two-Gaussian and three-Gaussian distribution models were evaluated as described previously[62]. Model performance was compared using the Bayesian Information Criterion (BIC)[101], which balances model likelihood and complexity according to

      BIC = -2 ln L + k ln n

      where L is the likelihood, k is the number of free parameters, and n is the number of single-molecule photon bunches across all experimental conditions. A lower BIC value indicates a statistically better model[101]. The separation between Gaussian components was subsequently assessed using Ashman’s D, where a score above 2 indicates good separation[102]. For two Gaussian components with means µi, µj and standard deviations σi, σj,

      Dij = √2 |µi − µj| / √(σi² + σj²)

      where Dij represents the distance metric between Gaussian components i and j. All fitted parameters, likelihood values, BIC scores, and Ashman’s D values are summarized in Supplementary Table 5.”

      (101) Schwarz, G. Estimating the dimension of a model. The Annals of Statistics, 461–464 (1978).

      (102) Ashman, K. M., Bird, C. M. & Zepf, S. E. Detecting bimodality in astronomical datasets. The Astronomical Journal 108(6), 2348–2361 (1994).
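      For illustration, the two criteria quoted above can be computed directly from fitted parameters. The numbers below are placeholders chosen to mimic the reported component lifetimes (~1.3 ns, ~2.6 ns, ~3.4 ns), not the values in Supplementary Table 5:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: BIC = -2 ln L + k ln n (lower is better)."""
    return -2.0 * log_likelihood + k * math.log(n)

def ashman_d(mu1, sigma1, mu2, sigma2):
    """Ashman's D for two Gaussians; D > 2 indicates good separation."""
    return math.sqrt(2.0) * abs(mu1 - mu2) / math.sqrt(sigma1**2 + sigma2**2)

# Placeholder likelihoods and counts, for illustration only.
n = 5000                           # photon bunches pooled across conditions
bic_2g = bic(-1.23e4, k=5, n=n)    # 2 Gaussians: 2 means, 2 widths, 1 weight
bic_3g = bic(-1.22e4, k=8, n=n)    # 3 Gaussians: 3 means, 3 widths, 2 weights
print(f"delta BIC (3G - 2G) = {bic_3g - bic_2g:.1f}")

# A narrow component at ~1.3 ns separates cleanly from ~2.7 ns,
# while broad components at ~2.6 ns and ~3.4 ns do not (D < 2):
print(f"D(1.3, 2.7) = {ashman_d(1.3, 0.3, 2.7, 0.5):.2f}")
print(f"D(2.6, 3.4) = {ashman_d(2.6, 0.7, 3.4, 0.8):.2f}")
```

      With real fit outputs, the same two functions reproduce the decision rule described above: accept the extra component only if it both lowers the BIC and is separable by Ashman’s D.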

      (9) Figure 3c - can you better label the POPS/POPC on here?

      We thank the reviewer for this suggestion. In the revised manuscript, Figure 3b (previously Figure 3c) has been updated to label the lipid composition corresponding to each smFRET distribution to make the comparison across conditions easier to follow.

      (10) Figure 3g - it looks like cholesterol causes a shift in both the peaks, such that the previous open and closed states are not the same, but that there are 2 new states. This is key as the authors state: "Remarkably, high anionic lipids and cholesterol content produce the same EGFR conformations but with opposite effects on signaling-suppression or enhancement." But this is only true if there really are the same conformational states for all lipid/cholesterol conditions. Again, the bimodal models used for all conditions need to be justified.

      We appreciate the reviewer’s insightful comment. We agree that the interpretation of the lifetime distributions depends on whether cholesterol and anionic lipids modulate existing conformational states or create new ones. To test this, we performed global fits of all distributions using the two- and three-Gaussian models and compared them using the Bayesian Information Criterion (BIC) and Ashman’s D, the results of which are described in detail in response to (8) above.

      Both fitting models, two- and three-Gaussian, identified the same short-lifetime component (µ = 1.3 ns), suggesting that this reflects a well-separated conformation. While the three-Gaussian model gave a lower BIC, Ashman’s D analysis indicated that two of the three components (µ = 2.6 ns and 3.4 ns) are not statistically separable, suggesting they represent a single broad conformational population rather than distinct states. If these two components instead reflected distinct states present under different conditions, Ashman’s D analysis would have found the opposite result. This supports our interpretation that high cholesterol and high anionic lipid content produce similar conformational ensembles with opposite effects on signaling output.

      Finally, we acknowledge that additional conformations may exist, but based on this analysis a bimodal model describes the populations captured in our data and so we limit ourselves to this simplest framework. We have clarified this rationale in the revised manuscript and added the results of the BIC and Ashman’s D analysis to support this interpretation.

      (11) Why are we jumping about between figures in the text? Figure 1d is mentioned after Figure 2. Also, DMPC is shown in the figures way before it is described in the text. It is very confusing. Figure 3 is so compact. I think it should be spread out and only shown in the order presented in the text. Different parts of the figure are referred to seemingly at random in the text. Why is DMPC first in the figure, when it is referred to last in the text?

      Following the Reviewer’s comment, we have revised the figure order and layout to improve readability and ensure consistency with the text. The previous Figures 1d-f which introduce the single-molecule fluorescence setup are now Figure 2g-i, positioned immediately before the first single-molecule FRET experiments (Fig 2j, k). The DMPC distribution in Figure 3 has been moved to the Supplementary Information (Supplementary Fig. 17), where it is shown alongside POPC, as these datasets are compared in the section “Mechanism of cholesterol inhibition of EGFR transmembrane conformational response”. The smFRET distributions in Figure 3 are now presented in the same sequence as they are discussed in the text, and the figure has been spread out for better clarity.

      (12) Throughout, I find the presentation of numerical results, their associated error, and whether they are statistically significantly different from each other confusing. A lot of this is in supplementary tables, but I think these need to go in the main text.

      To improve clarity and ensure that key quantitative results are easily accessible, we have moved the relevant supplementary tables to the main text. Specifically, the following tables have been incorporated into the main manuscript:

      (i) Median distance between the ATP binding site and the EGFR C-terminus, or between membrane and EGFR C-terminus from smFRET measurements (previously supplementary table 1 is now main table 1)

      (ii) Median distance between the membrane and the EGFR C-terminus in different anionic lipid environments (previously supplementary table 4 is now main table 2)

      (iii) Median distance between the membrane and the EGFR C-terminus in different cholesterol environments (previously supplementary table 8 and 12 is now combined to be main table 3)

      (13) Supplementary figures - in general, there is a need to consider how to combine or simplify these for eLife, as they will have to become extended data figures.

      We thank the reviewer for this helpful suggestion. In the revised manuscript, we have reorganized the supplementary figures into extended data figures in accordance with eLife’s format. Specifically:

      - Supplementary Figs. 1–7 are now grouped as Extended Data Figures for Figure 1 in the main text. They are now Figure 1 - figure supplements 1–7.

      - Supplementary Figs. 8–11 are now grouped as Extended Data Figures for Figure 2. They are now Figure 2 - figure supplements 1–4.

      - Supplementary Figs. 12–17 are now grouped as Extended Data Figures for Figure 3. They are now Figure 3 - figure supplements 1–6.

      (14) Supplementary Figure 2 - label what the two bands are in the EGFR and pEGFR sets at the bottom of panel c.

      We thank the reviewer for this comment. The two bands shown in the EGFR and pEGFR blots in Supplementary Fig. 2d (previously Supplementary Fig. 2c) correspond to replicate samples under identical conditions. We have now clarified this in the figure legend, labeled the lanes “Rep 1” and “Rep 2” in the revised figure, and updated the figure legend accordingly.

      Supplementary Figure 2, page 31: “(d) Western blots were performed on labelled EGFR in nanodiscs. Anti-EGFR Western blots (left) and anti-phosphotyrosine Western blots (right) tested the presence of EGFR and its ability to undergo tyrosine phosphorylation, respectively, consistent with previous experiments on similar preparations[18, 54, 55]. The two lanes in each blot correspond to replicate samples under identical conditions.”

      (15) Supplementary Figures 3+4 - a bar chart/boxplot or similar would be easier for comparison here.

      In the revised version, we have replaced the histograms with jitter plots showing the nanodisc size distributions for each condition in Supplementary Figures 4 and 5 (previously Supplementary Figures 3 and 4). The plots display individual measurements with a horizontal line indicating the mean size (mean ± standard deviation values provided in the caption).

      (16) Supplementary Figures 10, 12, 13, 15, 16 - I would jitter these.

      We have incorporated jitter plots for the relevant datasets in Supplementary Figures 11, 13, 15, 16 and 17 (previously Supplementary Figures 10, 12, 13, 15 and 16) to provide a clearer visualization of the data distributions and median values.

      Reviewer #2 (Recommendations for the authors):

      (1) Reactions were performed in 250 µL volumes. What is the average yield of solubilized EGFR in those reactions? Are there differences in the EGFR solubilization with the various lipid mixtures?

      The amount of solubilized EGFR produced in each 250 µL cell-free reaction was below the reliable detection limit for quantitative absorbance assays. At these protein levels, little to no EGFR precipitation was observed for all lipid compositions. Although exact yields could not be determined, fluorescence-based detection confirmed the presence of functional, nanodisc-incorporated EGFR suitable for smFRET and ensemble fluorescence experiments. We observed variability in total yield between independent reactions within the same lipid composition, which is common for cell-free systems, but no consistent trend attributable to lipid composition.

      (2) Figure S2: It would be better to have a larger overview of the particles on a grid to get a better impression of sample homogeneity.

      TEM images showing a larger field of view have been added for each lipid composition in Supplementary Figures 4 and 5.

      (3) Figure 2b: It appears that there is some variation in the stoichiometry of ApoA1 and EGFR within the samples. Have equal amounts of each sample been analyzed? Are there, in addition, some precipitates of EGFR? It would further be good to have a negative control without expression to get more information about the additional bands in Figure S2b. As they do not appear in the fluorescent gel, it is unlikely that they represent premature terminations of EGFR.

      The fluorescence intensity from the bound ATP analogue (Atto 647N-ATP) and from the snap surface 488 label, which binds stoichiometrically to the SNAP tag at the EGFR C-terminus, was measured for each sample. The relative amount of ATP binding was quantified for each sample by normalizing to the EGFR content (Figure 2b). This normalization accounts for the different amounts of EGFR produced in each condition.

      We did not observe any visible precipitation under the reported cell-free conditions, likely due to the following reasons:

      (i) EGFR and ApoA1 are co-expressed in the cell-free reaction, and ApoA1 assembles into nanodiscs concurrently with receptor translation, providing an immediate membrane sink

      (ii) ApoA1 is expressed at high levels, maintaining disc concentrations that keep the reaction in a soluble regime.

      A control cell-free reaction containing only ApoA1∆49 (1 µg) and no EGFR template, analyzed after affinity purification, showed a single prominent band at ~25 kDa (gel image below), corresponding to ApoA1, along with faint background bands typical of Ni-NTA purification from cell lysates. These weak, non-specific bands likely arise from co-purification of endogenous E. coli proteins.

      The ApoA1∆49-only control gel has now been included as part of the supplementary figure 2.

      (4) Figure S2c: It would be better to show the whole lanes to document the specificity of the antibodies. Anti-Phosphor antibodies are frequently of poor selectivity. In that case, a negative control with corresponding tyrosine mutations would be helpful.

      We have updated Figure S2d (previously Figure S2c) to include the full gel lanes to better illustrate the specificity of both the total EGFR and phospho-EGFR (Y1068) antibodies. The results show a single clear band at the expected molecular weight for EGFR, confirming antibody specificity.

      (5) The Results section already contains quite some discussion. I would thus recommend combining both sections.

      We thank the reviewer for the suggestion. We have now merged these into a combined Results and Discussion section to better reflect the content of these paragraphs, with the previous Discussion now a subsection focused on the implications of these results.

      General Terms and Conditions of Solid Deal GmbH for the Use of the TIPAR Platform

      1. Scope
      These General Terms and Conditions apply to all contracts between Solid Deal GmbH (hereinafter the “Provider”) and customers who use services via the TIPAR platform (www.tipar.de). Deviating terms of the customer are not recognized unless the Provider expressly agrees to their validity in writing.

      2. Subject Matter of the Contract
      TIPAR is a digital provision platform for pet owners. The Provider supplies the technical infrastructure for recording, creating, and documenting pet guardianship agreements. This includes optional additional services such as emergency cards, QR-code access, and information packages.

      3. Registration and User Account
      Use of the services requires a personal user account. The customer undertakes to provide truthful information during registration and to keep access credentials confidential. Changes to contact details must be communicated without delay. Only one personal account may be maintained per person. The customer is responsible for the accuracy of their information. The Provider must be informed immediately if misuse of the account is suspected.

      4. Conclusion of Contract
      The contract is concluded as soon as the customer completes the ordering process on the platform and the payment has been successfully confirmed. The Provider promptly sends the customer a confirmation by email.

      5. Prices and Payment
      All stated prices are in euros and include statutory VAT. Payment processing is handled by the payment service provider Stripe Payments Europe Ltd. Payment methods: credit card, SEPA direct debit, Apple Pay, Google Pay. The amount is due immediately upon conclusion of the contract. Invoices are provided electronically.

      6. Right of Withdrawal
      The pet guardianship agreements created via TIPAR are prepared individually according to the customer’s specifications. Pursuant to § 312g Abs. 2 Nr. 1 BGB (German Civil Code), there is therefore no right of withdrawal. By completing the ordering process, the customer confirms that they have taken note of and consent to this exclusion of the right of withdrawal. For digital add-on products without individualization, the statutory right of withdrawal applies. Further details are set out in the withdrawal notice. Corrections are possible before individual production begins; change requests should be reported immediately to support@tipar.de.

      7. Obligations of Users
      The customer ensures that the data on the pet and the guardians stored in TIPAR are correct and up to date. Changes must be updated promptly. The customer is responsible for ensuring that designated guardians are informed and willing to take over.

      8. Liability
      The Provider is liable without limitation in cases of intent and gross negligence. In cases of slight negligence, the Provider is liable only for breaches of essential contractual obligations (cardinal obligations), limited to the foreseeable damage typical of the contract. Liability for damage attributable to incorrect or incomplete information provided by the customer is excluded.

      9. Contract Term and Termination
      The contract term depends on the selected plan. Digital access remains active as long as a valid contract exists. Ordinary termination before the end of the agreed term is excluded unless otherwise agreed.

      10. Final Provisions
      The law of the Federal Republic of Germany applies, excluding the UN Convention on Contracts for the International Sale of Goods. The place of performance is the Provider’s registered office. Should individual provisions of these Terms be invalid, the validity of the remaining provisions remains unaffected.

      TIPAR Terms and Conditions (AGB)

      **General Terms and Conditions of Solid Deal GmbH for the Use of the TIPAR Platform**

      1. Scope

      These General Terms and Conditions apply to all contracts between Solid Deal GmbH, Horneburger Str. 44, 45711 Datteln (hereinafter the “Provider”), and consumers or businesses (hereinafter the “User”) who use services via the TIPAR platform at www.tipar.de.

      Deviating terms of the User do not apply unless the Provider expressly agrees to their validity in text form.

      2. Subject Matter of the Contract

      TIPAR is a digital provision platform for pet owners. The Provider supplies a technical infrastructure with which Users can record, manage, and document information about pets, designated contact persons (e.g., guardians), and supplementary details, in order to provide orientation in an emergency.

      Depending on the selected package, the scope of services may include digital access as well as optional physical products (e.g., emergency cards or QR tags).

      TIPAR does not replace any veterinary, legal, or official decision and does not establish any transfer of ownership of animals.

      2a. Role of TIPAR / Intermediary Service

      TIPAR exclusively provides a digital platform for documenting, managing, and making information findable.

      The agreement on the actual care, takeover, or provision for a pet is concluded exclusively between the pet owner and the person designated by them. TIPAR does not become a party to this agreement and assumes no legal, factual, or economic obligation to care for, take over, or provide for a pet.

      In particular, TIPAR provides no guarantee and assumes no liability that designated persons will actually carry out the care or takeover of a pet, be able to do so, or be reachable.

      TIPAR’s service is limited to providing the technical infrastructure, documenting the details supplied by the User, and making them digitally findable in an emergency.

      The User is responsible for placing, carrying, or attaching notices, tags, emergency cards, or other physical or digital references to TIPAR in such a way that third parties can find and notice them in an emergency.

      TIPAR owes the contractual provision of the platform and the technical retrievability of the information stored by the User within the agreed scope of services. However, no guarantee or obligation of result exists, in particular not that third parties (e.g., authorities, emergency responders, finders) will actually find the notices, carry out the retrieval, or use the information, nor that designated contact persons will be reachable or will actually take over the pet’s care. The responsibility for placing or carrying notices, tags, or references to TIPAR in the individual case so that third parties can notice them lies with the User.

      3. Registration and User Account

      Use is permitted only to persons of legal age; legal representatives act on behalf of minors.

      Use of the platform requires the creation of a personal user account.

      The User undertakes to provide complete and truthful information during registration and to keep it up to date. Access credentials must be kept confidential and may not be passed on to third parties.

      Only one user account may be maintained per person. The User is responsible for all activities carried out via their account. The Provider must be informed immediately if misuse is suspected.

      4. Conclusion of Contract

      The contract is concluded as soon as the User completes the ordering process on the platform and, if paid services have been selected, the payment has been successfully processed. For consumers, the contract is concluded in electronic commerce via a confirmation button clearly labeled as requiring payment.

      The Provider confirms the conclusion of the contract by email.

      5. Prices and Payment

      All prices are in euros and include statutory VAT unless stated otherwise.

      Payment processing is handled by the payment service provider Stripe Payments Europe Ltd. Accepted payment methods include, in particular, credit card, SEPA direct debit, Apple Pay, and Google Pay.

      One-time fees (e.g., setup fee) are due immediately upon conclusion of the contract. Invoices are made available to the User electronically.

      Where a renewal has been agreed, the User authorizes, upon conclusion of the contract, recurring billing for the respective contract period via the selected payment method.

      § 5a Delivery and Production of Physical Products (Goodies)

      1. Production / start of manufacturing
If the scope of services includes physical products (e.g., emergency cards, QR tags, badges), production generally begins after completion of the ordering process and successful payment, unless a deviating arrangement is stated in the ordering process.
      2. Delivery area and shipping
Delivery is made to the delivery address provided by the User in the ordering process. A claim to delivery to specific countries exists only insofar as these are offered as a delivery area in the ordering process.
      3. Delivery time
Unless expressly designated as binding, stated delivery times are non-binding estimates. Partial deliveries are permitted insofar as they are reasonable for the User.
      4. Duty to cooperate: correct delivery address
The User is obliged to provide the delivery address completely and correctly and to communicate changes without delay, insofar as this is technically possible. Additional costs arising from an incorrect or incomplete address for which the User is responsible (e.g., return shipment, re-shipment) are borne by the User.
      5. Transfer of risk
For consumers, the risk of accidental loss or accidental deterioration of the goods passes only upon handover of the goods to the consumer. For businesses, the risk passes upon handover of the goods to the shipping company.
      6. Material defects / replacement of defective products
The statutory warranty rights apply to physical products. The User is asked to report obvious transport damage to the shipping service provider and the Provider as promptly as possible; the User’s statutory rights remain unaffected.
In the case of justified defect claims, the Provider will, at its option, provide subsequent performance by replacement delivery or repair, insofar as this is possible and reasonable.

      § 5b Donation Share / Animal Welfare Support

      1. Where indicated in the ordering process, a fixed amount from the setup fee is used to support animal welfare organizations (e.g., EUR 5.00).
      2. The support amount is part of the overall pricing. The User is entitled to select a specific organization only if this is expressly offered in the ordering process.
      3. In the event of termination or other ending of the contract, the support amount is not refunded.

      6. Right of Withdrawal

      If the contract includes the delivery of goods manufactured individually to customer specifications (e.g., personalized emergency cards or tags), there is no right of withdrawal pursuant to § 312g Abs. 2 Nr. 1 BGB (German Civil Code).

      For non-individualized digital services, the statutory right of withdrawal applies where provided for by law. Details are set out in the separate withdrawal notice.

      Corrections of details are possible until individual production begins and must be communicated without delay.

      7. Obligations of Users

      The User is responsible for ensuring that all details stored in TIPAR about the pet, contact persons, and other information are correct, complete, and up to date.

      The User may use the platform only for their own legitimate purposes. In particular, it is prohibited to provide false or misleading information, to register pets for which no authorization exists, or to store data without the knowledge and consent of the persons concerned. The Provider reserves the right to block content or deactivate user accounts in the event of abusive or unlawful use.

      The User ensures that designated contact persons are informed of their role and are, in principle, willing and able to assume the designated responsibility.

      The User further warrants that they are authorized to store the personal data of designated contact persons (e.g., name, email address, telephone number) in TIPAR, and that the designated contact persons consent to the storage and use of this data for the purpose of contact within the scope of TIPAR.

      The User undertakes to update or remove designated contact persons without delay at the request of TIPAR or of the contact person, provided there is a legitimate reason for doing so.

      The Provider does not verify the actual availability, suitability, or reachability of designated persons.

      TIPAR has no influence on whether authorities, emergency responders, or other third parties actually retrieve or use the information provided.

      § 7a Blocking and Termination by the Provider

      1. Blocking in case of suspicion / protection of the platform
The Provider is entitled to temporarily block access to the platform if there are concrete indications of misuse, a breach of these Terms, or unlawful use, and blocking is necessary to prevent damage or to secure the platform.
      2. Termination for good cause
The Provider is entitled to terminate the contract extraordinarily for good cause, in particular if the User
a) deliberately stores false or misleading information,
b) registers pets for which no authorization exists,
c) stores personal data without the required authorization or consent,
d) uses the platform for deception, spam, abusive requests, or other unlawful purposes, or
e) circumvents or attempts to circumvent security mechanisms or technical protective measures.
      3. Prior deadline / warning
Insofar as reasonable for the Provider, the User will be warned before an extraordinary termination and given a reasonable period to remedy the situation. This does not apply if a remedy is not possible or if immediate termination is justified by the severity of the breach.
      4. Consequences of blocking / termination
In the event of blocking or termination, the Provider may restrict access to content and functions of the platform. Statutory retention obligations and legitimate interests of the Provider remain unaffected.
      5. Refunds
In the event of extraordinary termination by the Provider for good cause for which the User is responsible, there is no claim to a refund of fees already paid. The User’s statutory claims remain unaffected.

      § 7b User Content, Rights, and Indemnification

      1. User Content
Insofar as TIPAR allows content to be uploaded or stored (e.g., photos, texts, documents, or other files; hereinafter “User Content”), the User is solely responsible for this content.
      2. Rights to User Content
The User warrants that they hold all necessary rights to the User Content and that its use does not infringe any third-party rights (in particular copyright, trademark, personality, or data protection rights).
      3. Grant of usage rights to the Provider
The User grants the Provider a simple, non-exclusive right to the User Content, unrestricted in territory and valid for the duration of the contractual relationship, to store, reproduce, and technically process the User Content, to display it in the user account, and to make it accessible within the retrieval and sharing functions provided for by the User, for the purpose of providing the platform. Any use beyond this for advertising or marketing purposes takes place only with the User’s separate consent.
      4. Removal of User Content
The User may remove or adjust User Content within the technical possibilities of the user account. Statutory retention obligations and legitimate interests of the Provider remain unaffected.
      5. Indemnification
The User indemnifies the Provider against all third-party claims asserted against the Provider on account of the User Content or any other unlawful use of the platform, unless the Provider is responsible for the infringement. This also includes reasonable costs of legal defense.

      8. Liability

      The Provider is liable without limitation in cases of intent and gross negligence.

      In cases of slight negligence, the Provider is liable only for the breach of essential contractual obligations (cardinal obligations), and liability is limited to the foreseeable damage typical for this type of contract.

      Liability for damage attributable to incorrect, incomplete or outdated information provided by the User is excluded.

      There is no entitlement to uninterrupted availability of the platform at all times. Maintenance work, security updates or technical faults may lead to temporary restrictions.

      § 8a Force Majeure / Third-Party Services

      The Provider is not liable for service disruptions caused by force majeure or by failures of third-party providers for which the Provider is not responsible (e.g. payment service providers, shipping providers, hosting), provided the Provider takes reasonable steps to remedy them.

      9. Changes to the System / Further Development

      The Provider reserves the right to further develop, adapt or modify features of the platform, provided this does not materially impair the purpose of the contract. For consumers, the provisions in Section 9a additionally apply to changes to the digital services.

      9a. Digital Services, Updates and Changes (Consumers)

      1. Provision in Conformity with the Contract
The Provider makes the digital services of TIPAR available to the User via the platform within the scope of the agreed features.
      2. Updates
Insofar as updates (in particular security and feature updates) are required to maintain the conformity of the digital services with the contract, the Provider will provide them within a reasonable period.
      3. User's Duties to Cooperate
The User is obliged to install provided updates or to take the necessary cooperative steps, insofar as this is reasonable for the User and the User has been informed of the consequences of failing to update.
      4. Rights in the Event of Service Disruptions / Defects
If the digital services are not provided in conformity with the contract, the User is entitled to the statutory rights. The Provider is first given the opportunity to restore the contractual condition within a reasonable period.
      5. Changes to Digital Services
The Provider may change digital services if there is a valid reason to do so (e.g. technical development, security requirements, legal changes) and the change is reasonable for the User.
The Provider will inform the User of changes in good time and in an appropriate form.
      6. Special Right of Termination in the Event of a More than Insignificant Impairment
If a change results in a more than insignificant impairment of the User's ability to use the digital services, the User may terminate the contract within 30 days of receipt of the change notification or of the implementation of the change.

      10. Contract Term and Termination

      The contract term depends on the tariff selected.

      If a free first year of use is provided for, a paid renewal does not begin until this period has expired. Activation is deemed to occur at the point at which (i) the ordering process has been completed and (ii) the payment due has been successfully processed and the selected tariff has been enabled in the user account. The free first year of use begins at the time of activation and ends after twelve (12) months. From the following day, the paid contract period begins in accordance with the tariff selected. The Provider will inform the User in text form, in good time before the start of the first paid contract period, about the upcoming transition to the paid renewal as well as the price, term and notice period. After the respective contract term expires, the contract is renewed for the agreed term unless it is terminated in due time. This also applies after the expiry of a free first year of use, provided the tariff provides for a subsequent paid renewal.

      Termination is possible with 30 days' notice to the end of the respective contract term, unless the tariff provides otherwise.

      Any statutory special termination rights, in particular under Section 9a, remain unaffected.

      § 10a Online Termination (Termination Function)

      1. Consumers may also terminate contracts online via the TIPAR platform. For this purpose, the Provider provides a directly accessible termination function (e.g. "Cancel contracts here").
      2. The termination can be submitted without additional hurdles; the Provider confirms receipt of the termination in text form.
      3. Other termination channels (e.g. by e-mail in text form) remain unaffected.

      § 10b Data Access, Export and Deletion after the End of the Contract

      1. After the end of the contract, access to features and content may be restricted in accordance with the tariff selected.
      2. The User may, within the scope of the technical possibilities, export or download the data they have stored before the end of the contract.
      3. After the end of the contract, data is stored within the scope of statutory retention obligations and is otherwise deleted or anonymised after reasonable periods have expired; details are set out in the privacy policy.

      11. Final Provisions

      The law of the Federal Republic of Germany applies, excluding the UN Convention on Contracts for the International Sale of Goods (CISG). Should individual provisions of these Terms and Conditions be or become invalid, the validity of the remaining provisions remains unaffected. Consumer dispute resolution (§ 36 VSBG): Solid Deal GmbH is neither obliged nor willing to participate in dispute resolution proceedings before a consumer arbitration board.

    1. https://bafybeigi4urr6jumopybpwxfu2i5edncg4e64c2z6dgtgm2clro7ibxmpe.ipfs.dweb.link/?filename=O%20%E2%80%94%20The%20Last%20Debt.%20When%20the%20empire%E2%80%99s%20money%20lies%2C%20its%E2%80%A6%20%EF%BD%9C%20by%20Ray%20Podder%20%EF%BD%9C%20Medium.html


      https://raypodder.medium.com/o-the-last-debt-3c12a1d998e7

    1. Printed sponsorship certificate (A4): high-quality print, laminated; an emotional add-on for the home.

      The plan here is to use the Tinkerella certificates. We have several designs from which the user can pick one: four boxes with the designs are displayed, and the chosen certificate is then personalised, printed and shipped. They are not laminated, however; instead they come protected in a transparent sleeve inside a sturdy, thick A4 envelope. The price is €7.95.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Significance:

While most MAVEs measure overall function (which is a complex integration of biochemical properties, including stability), VAMP-seq-type measurements more strongly isolate stability effects in a cellular context. This work seeks to create a simple model for predicting the effect of a mutation on the "abundance" measurement of VAMP-seq.

      We thank the reviewer for their evaluation of our work and for their comments and feedback below.

      Of course, there is always another layer of the onion, VAMP-seq measures contributions from isolated thermodynamic stability, stability conferred by binding partners (small molecule and protein), synthesis/degradation balance (especially important in "degron" motifs), etc. Here the authors' goal is to create simple models that can act as a baseline for two main reasons:

      (1) how to tell when adding more information would be helpful for a global model;

      (2) how to detect when a residue/mutation has an unusual profile indicative of an unbalanced contribution from one of the factors listed above.

As such, the authors state that this manuscript is not intended to be a state-of-the-art method in variant effect prediction, but rather a direction towards considering static structural information for the VAMP-seq effects. At its core, the method is a fairly traditional asymmetric substitution matrix (I was surprised not to see a comparison to BLOSUM in the manuscript) - and shows that a subdivision by burial makes the model much more predictive. Despite only having 6 datasets, they show predictive power even when the matrices are based on a smaller number. Another success is rationalizing the VAMP-seq results on relevant oligomeric states.

      We thank the reviewer for their summary of the main points of our work. Based on the suggestion by the reviewer, we have added a comparison to predictions with BLOSUM62 to our revised manuscript, noting that we have previously compared the BLOSUM62 matrix to a broader and more heterogeneous set of scores generated by MAVEs (Høie et al, 2022).
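As a concrete illustration of the kind of burial-stratified substitution-matrix model under discussion, here is a minimal toy sketch: the variant records, the 0.25 rSASA cutoff and all scores below are made up for illustration and are not the paper's actual data, thresholds or implementation.

```python
from collections import defaultdict

# Toy variant records: (wild-type aa, mutant aa, relative solvent
# accessibility of the site, measured abundance score).  All values
# are illustrative, not data from the paper.
variants = [
    ("L", "D", 0.05, 0.10), ("L", "I", 0.04, 0.95),
    ("K", "E", 0.60, 0.90), ("K", "W", 0.70, 0.85),
    ("V", "P", 0.08, 0.05), ("V", "A", 0.10, 0.60),
]

RSASA_CUTOFF = 0.25  # illustrative buried/exposed threshold


def build_matrices(records):
    """Average abundance score per (wt, mut) pair, split by burial."""
    scores = {"buried": defaultdict(list), "exposed": defaultdict(list)}
    for wt, mut, rsasa, score in records:
        ctx = "buried" if rsasa < RSASA_CUTOFF else "exposed"
        scores[ctx][(wt, mut)].append(score)
    return {ctx: {pair: sum(v) / len(v) for pair, v in d.items()}
            for ctx, d in scores.items()}


def predict(matrices, wt, mut, rsasa):
    """Look up the context-specific average as the prediction."""
    ctx = "buried" if rsasa < RSASA_CUTOFF else "exposed"
    return matrices[ctx].get((wt, mut))  # None if the pair was never seen


matrices = build_matrices(variants)
print(predict(matrices, "L", "D", 0.05))  # → 0.1
```

In a leave-one-protein-out evaluation, `build_matrices` would be run on five proteins and `predict` scored against the sixth.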

      Specific Feedback:

      Major points:

      The authors spend a good amount of space discussing how the six datasets have different distributions in abundance scores. After the development of their model is there more to say about why? Is there something that can be leveraged here to design maximally informative experiments?

      We believe that these effects arise from a combination of intrinsic differences between the systems and assay-specific effects. For example, biophysical differences between the systems, such as differences in absolute folding stabilities or melting temperatures, will play a role, as will the fact that some proteins contain multiple domains.

      Also, the sequencing-based score for an individual variant in a sort-seq experiment (such as VAMP-seq) depends both on the properties of that variant and on the composition of the entire FACS-sorted cell library. This is because cells are sorted into bins depending on the composition of the entire library, which means that library-to-library composition differences can contribute to the differences between VAMP-seq score distributions. 

      From our developed models and outliers in predictions from these, it is difficult to tell which of the several possible underlying reasons cause the differences. We have briefly expanded the discussion of these points in the manuscript, and we have moreover elaborated on this in subsequent work (Schulze et al., 2025).

They compare to one more "sophisticated model" - RosettaddG - which should be more correlated with thermodynamic stability than other factors measured by VAMP-seq. However, the direct head-to-head comparison between their matrices and ddG is underdeveloped. How can this be used to dissect cases where thermodynamics are not contributing to specific substitution patterns OR in specific residues/regions that are predicted by one method better than the other? This would naturally dovetail into whether there is orthogonal information between these two that could be leveraged to create better predictions.

      We thank the reviewer for this suggestion and indeed had spent substantial effort trying to gain additional biological insights from variants for which MAVE scores or MAVE predictions do not match predicted ∆∆G values. One major caveat in this analysis is that the experimental MAVE scores, MAVE predictions and the predicted ∆∆G values are rather noisy, making it difficult to draw conclusions based on individual variants or even small subsets of variants.

      In our revised manuscript, we have added an analysis to discover residue substitution profiles that are predicted most accurately either by a ∆∆G model or by our substitution matrix model, thereby avoiding analysis of individual variant effect scores. 

      We find that many substitution profiles are predicted equally well by the two model types, but also that there are residues for which one method predicts substitution effects better than the other method. We have added an analysis of the characteristics of the residues and variants for which either the ∆∆G model or the substitution matrix model is most useful to rank variants. Since we only find relatively few residues for which this is the case, we do not expect a model that leverages predicted scores from both methods to perform better than ThermoMPNN across variants. 

      Perhaps beyond the scope of this baseline method, there is also ThermoMPNN and the work from Gabe Rocklin to consider as other approaches that should be more correlated only with thermodynamics.

We acknowledge that there are other approaches to predict ∆∆G beyond Rosetta, including for example ThermoMPNN and our own method called RaSP (Blaabjerg et al, eLife, 2023), and we have added comparisons to ThermoMPNN and RaSP in the revised manuscript. We are unsure how one would use the data from Rocklin and colleagues directly, but we note that e.g. RaSP has been benchmarked on this data and other methods have been trained on this data. We originally used Rosetta since the Rosetta model is known to be relatively robust and because it has never seen large databases during training (though we do not think that training of ThermoMPNN and RaSP would be biased towards the VAMP-seq data). We note also that we have previously compared both Rosetta calculations and RaSP with VAMP-seq data for TPMT, PTEN and NUDT15 (Blaabjerg et al, eLife, 2023).

      I find myself drawn to the hints of a larger idea that outliers to this model can be helpful in identifying specific aspects of proteostasis. The discussion of S109 is great in this respect, but I can't help but feel there is more to be mined from Figure S9 or other analyses of outlier higher than predicted abundance along linear or tertiary motifs.

      We agree with these points and have previously spent substantial time trying to make sense of outliers in Figure S9 and Figure S18 (Figure S8 and Figure S18 of revised manuscript). The outlier analysis was challenging, in part due to the relatively high noise levels in both experimental data and predictions, and we did not find any clear signals. Some outliers in e.g. Figure S9 are very likely the result of dataset-specific abundance score distributions, which further complicates the outlier analysis. We now note this in the revised paper and hope others will use the data to gain additional insights on proteostasis-specific effects.  

      Reviewer #2 (Public review):

      Summary:

      This study analyzes protein abundance data from six VAMP-seq experiments, comprising over 31,000 single amino acid substitutions, to understand how different amino acids contribute to maintaining cellular protein levels. The authors develop substitution matrices that capture the average effect of amino acid changes on protein abundance in different structural contexts (buried vs. exposed residues). Their key finding is that these simple structure-based matrices can predict mutational effects on abundance with accuracy comparable to more complex physics-based stability calculations (ΔΔG).

      Major strengths:

      (1) The analysis focuses on a single molecular phenotype (abundance) measured using the same experimental approach (VAMP-seq), avoiding confounding factors present when combining data from different phenotypes (e.g., mixing stability, activity, and fitness data) or different experimental methods.

      (2) The demonstration that simple structural features (particularly solvent accessibility) can capture a significant portion of mutational effects on abundance.

      (3) The practical utility of the matrices for analyzing protein interfaces and identifying functionally important surface residues.

      We thank the reviewer for the comments above and the detailed assessment of our work.

      Major weaknesses:

      (1) The statistical rigor of the analysis could be improved. For example, when comparing exposed vs. buried classification of interface residues, or when assessing whether differences between prediction methods are significant.

We agree with the reviewer that it is useful to determine if interface residues (or any of the residues in the six proteins) can confidently be classified as buried- or exposed-like in terms of their substitution profiles. Thus, we have expanded our approach to compare individual substitution profiles to the average profiles of buried and exposed residues to now account for the noise in the VAMP-seq data. In our updated approach, we resample the abundance score substitution profile for every residue several thousand times based on the experimental VAMP-seq scores and score standard deviations, and we then compare every resampled profile to the average profiles for buried and exposed residues, thereby obtaining residue-specific distributions of RMSD<sub>buried</sub> and RMSD<sub>exposed</sub> values. These RMSD distributions are typically narrow, since many variants in several datasets have small standard deviations. In the revised manuscript, we report a residue to have e.g. a buried-like substitution profile if RMSD<sub>buried</sub> < RMSD<sub>exposed</sub> for at least 95% of the resampled profiles. We do not recalculate average scores in substitution matrices for this analysis.
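The resampling scheme described above can be sketched as follows. The 19-substitution profiles and the 95% criterion follow the description; the profile values, noise levels and reference profiles are toy numbers, and the per-variant Gaussian noise model is an assumption for illustration.

```python
import random

random.seed(0)  # make the sketch deterministic

# Synthetic per-residue substitution profile: mean abundance score and
# standard deviation for each of the 19 possible substitutions (toy values).
n_subs = 19
profile_mean = [0.2] * n_subs   # uniformly low scores: looks buried-like
profile_sd = [0.05] * n_subs

avg_buried = [0.15] * n_subs    # toy average profile of buried residues
avg_exposed = [0.85] * n_subs   # toy average profile of exposed residues


def rmsd(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5


def classify(mean, sd, n_resamples=5000, threshold=0.95):
    """Resample the profile from per-variant Gaussians; call the residue
    buried-like if RMSD to the buried average is smaller in >= 95% of draws."""
    buried_wins = 0
    for _ in range(n_resamples):
        sample = [random.gauss(m, s) for m, s in zip(mean, sd)]
        if rmsd(sample, avg_buried) < rmsd(sample, avg_exposed):
            buried_wins += 1
    frac = buried_wins / n_resamples
    if frac >= threshold:
        return "buried-like"
    if frac <= 1 - threshold:
        return "exposed-like"
    return "unassigned"


print(classify(profile_mean, profile_sd))  # → buried-like
```

Residues whose resampled profiles do not consistently fall on one side would end up "unassigned", which is how measurement noise is kept from forcing a classification.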

      Moreover, to illustrate potential overlap in predictive performance between prediction methods more clearly than in our preprint, we have added confidence intervals in Fig. 2 and Fig. 3 of the revised manuscript. We note that the analysis in Fig. 2 is performed using a leave-one-protein-out approach, which we believe provides the cleanest assessment of how well the different models perform.

      (2) The mechanistic connection between stability and abundance is assumed rather than explained or investigated. For instance, destabilizing mutations might decrease abundance through protein quality control, but other mechanisms like degron exposure could also be at play.

      We agree that we have not provided much description of the relation between stability and abundance in our original preprint. In the revised manuscript, we provide some more detail as well as references to previous literature explaining the ways in which destabilising mutations can cause degradation. We have moreover performed and added additional analyses of the relationship between thermodynamic stability and abundance through comparisons of stability predictions and predictions performed with our substitution matrix models.

      (3) The similar performance of simple matrix-based and complex physics-based predictions calls for deeper analysis. A systematic comparison of where these approaches agree or differ could illuminate the relationship between stability and abundance. For instance, buried sites showing exposed-like behavior might indicate regions of structural plasticity, while the link between destabilization and degradation might involve partial unfolding exposing typically buried residues. The authors have all the necessary data for such analysis but don't fully exploit this opportunity.

This is similar to a point made by reviewer 1, and our answer is similar. We were indeed hoping that our analyses would have revealed clearer differences between effects on thermodynamic protein stability and cellular abundance and have tried to find clear signals. One major caveat in performing the suggested analysis is that the experimental MAVE scores, ∆∆G predictions and our simple matrix-based predictions are all rather noisy, making it difficult to draw conclusions based on individual variants or even small subsets of variants.

      To address this point, we have added an analysis to discover residue substitution profiles that are predicted most accurately either by a ∆∆G model or by our substitution matrix model, thereby avoiding analysis of individual variant effect scores. We find that many substitution profiles are predicted equally well by the two model types, but we also, in particular, find solvent-exposed residues for which the substitution matrix model is the better predictor. These residues are often aspartate, glutamate and proline, suggesting that surface-level substitutions of these amino acid types often can have effects that are not captured well by a thermodynamical model, either because this model does not describe thermodynamic effects perfectly, or because in-cell effects are necessary to account for to provide an accurate description.
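The per-residue model comparison described here can be illustrated with a small self-contained sketch: for one residue, rank-correlate the measured substitution scores against each model's predictions and keep whichever model ranks them better. The `rankdata`/`spearman` helpers are plain-Python stand-ins for e.g. `scipy.stats.spearmanr`, and all scores (including `pred_ddg`, meant as a stability-model prediction mapped onto an abundance-like scale) are hypothetical.

```python
def rankdata(values):
    """Ranks starting at 1; tied values get their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den


# Toy data for one residue: measured abundance scores for five substitutions
# and the corresponding predictions from the two model types.
measured = [0.1, 0.2, 0.9, 0.8, 0.5]
pred_matrix = [0.2, 0.1, 0.8, 0.9, 0.5]  # substitution-matrix model (toy)
pred_ddg = [0.3, 0.5, 0.6, 0.4, 0.2]     # stability-based model (toy)

better = ("matrix" if spearman(measured, pred_matrix)
          > spearman(measured, pred_ddg) else "ddG")
print(better)  # → matrix
```

Aggregating this per-residue verdict over all residues is one way to locate the subsets where the two predictor types disagree, without leaning on individual noisy variant scores.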

      (4) The pooling of data across proteins to construct the matrices needs better justification, given the observed differences in score distributions between proteins (for example, PTEN's distribution is shifted towards high abundance scores while ASPA and PRKN show more binary distributions).

      We agree with the reviewer that the differences between the score distributions are important to investigate further and keep in mind when analysing e.g. prediction outliers. However, our results show that the pooling of VAMP-seq scores across proteins does result in substitution matrices that make sense biochemically and can identify outlier residues with proteostatic functions. As we also respond to a related point by reviewer 1, the differences in score distributions likely have complex origins. In that sense, we also hope that our results can inspire experimentalists to design methods to generate data that are more comparable across proteins.

For example, biophysical differences between the systems, such as differences in absolute folding stabilities or melting temperatures, will play a role, as will the fact that some proteins contain multiple domains. Also, the sequencing-based score for an individual variant in a sort-seq experiment (such as VAMP-seq) depends both on the properties of that variant and on the composition of the entire FACS-sorted cell library. This is because cells are sorted into bins depending on the composition of the entire library, which means that library-to-library composition differences can contribute to the differences between VAMP-seq score distributions. From our developed models and outliers in predictions from these, it is difficult to tell which of the several possible underlying reasons cause the differences.

      Thus, even when experiments on different proteins are performed using the same technique (VAMP-seq), quantifying the same phenomenon (cellular abundance) and done in similar ways (saturation mutagenesis, sort-seq using four FACS bins), there can still be substantial differences in the results across different systems. An interesting side result of our work is to highlight this including how such variation makes it difficult to learn across experiments. We now elaborate on these points in the revised manuscript.

      (5) Some key methodological choices require better justification. For example, combining "to" and "from" mutation profiles for PCA despite their different behaviors, or using arbitrary thresholds (like 0.05) for residue classification.

      We hope we have explained our methodological choices clearer in the revised paper.

We removed the dependency on the threshold of 0.05 used for residue classification in Fig. S19 of the original manuscript; in the revised manuscript we only report a residue to have e.g. a buried-like substitution profile if RMSD<sub>buried</sub> < RMSD<sub>exposed</sub> for at least 95% of the abundance score profiles that we resampled according to VAMP-seq score noise levels, as explained above.

      With respect to combining “to” and “from” mutational profiles for PCA, we could have also chosen to analyse these two sets of profiles separately to take potentially different behaviours along the two mutational axes into account. We do not think that there should be anything wrong with concatenating the two sets of profiles in a single analysis, since the analysis on the concatenated profiles simply expresses amino acid similarities and differences in a more general manner.

      The authors largely achieve their primary aim of showing that simple structural features can predict abundance changes. However, their secondary goal of using the matrices to identify functionally important residues would benefit from more rigorous statistical validation. While the matrices provide a useful baseline for abundance prediction, the paper could offer deeper biological insights by investigating cases where simple structure-based predictions differ from physics-based stability calculations.

      This work provides a valuable resource for the protein science community in the form of easily applicable substitution matrices. The finding that such simple features can match more complex calculations is significant for the field. However, the work's impact would be enhanced by a deeper investigation of the mechanistic implications of the observed patterns, particularly in cases where abundance changes appear decoupled from stability effects.

      We agree that disentangling stability and other effects on cellular abundance is one of the goals of this work. As discussed above, it has been difficult to find clear cases where amino acid substitutions affect abundance without stability beyond for example the (rare) effects of creating surface exposed degrons. Our new analysis, in which we compare substitution matrix-based predictions to stability predictions, does offer deeper insight into the relationship between the two predictor types and hence possibly between folding stability and abundance. 

      Reviewer #3 (Public review): 

"Effects of residue substitutions on the cellular abundance of proteins" by Schulze and Lindorff-Larsen revisits the classical concept of structure-aware protein substitution matrices through the scope of modern protein structure modelling approaches and comprehensive phenotypic readouts from multiplex assays of variant effects (MAVEs). The authors explore 6 unique protein MAVE datasets based on protein abundance (and thus stability) by utilizing structural information, specifically residue solvent accessibility and secondary structure type, to derive combinations of context-specific substitution matrices predicting variant abundance. They are clear to outline that the aim of the study is not to produce a new best abundance predictor but to showcase the degree of prediction afforded simply by utilizing information on residue accessibility. The performance of their matrices is robustly evaluated using a leave-one-out approach, where the abundance effects for a single protein are predicted using the remaining datasets. Using a simple classification of buried and solvent-exposed residues, and substitution matrices derived respectively for each residue group, the authors convincingly demonstrate that taking structural solvent accessibility contexts into account leads to more accurate performance than either a structure-unaware matrix, secondary structure-based matrix, or matrices combining both solvent accessibility or secondary structure. Interestingly, it is shown that the performance of the simple buried and exposed residue substitution matrices for predicting protein abundance is on par with Rosetta, an established and specialized protein variant stability predictor.
More importantly, the authors finish off the paper by demonstrating the utility of the two matrices to identify surface residues that have buried-like substitution profiles, that are shown to correspond to protein interface residues, posttranslational modification sites, functional residues, or putative degrons.

      Strengths:

      The paper makes a strong and well-supported main point, demonstrating the utility of the authors' approach through performance comparisons with alternative substitution matrices and specialized methods alike. The matrices are rigorously evaluated without introducing bias, exploring various combinations of protein datasets. Supplemental analyses are extremely comprehensive and detailed. The applicability of the substitution matrices is explored beyond abundance prediction and could have important implications in the future for identifying functionally relevant sites.

      We thank the reviewer for the supportive comments on our work. 

      Comments:

      (1) A wider discussion of the possible reasons why matrices for certain proteins seem to correlate better than others would be extremely interesting, touching upon possible points like differences or similarities in local environments, degradation pathways, posttranslation modifications, and regulation. While the initial data structure differences provide a possible explanation, Figure S17A, B correlations show a more complicated picture.

      We agree with the reviewer that biochemical and biophysical differences between the proteins might contribute to the fact that some matrices correlate better than others. We also agree that it would be very interesting to understand these differences better. While it might be possible to examine some of the suggested causes of the differences, like differences or similarities in local environments, we have generally found that noise and differences in score distributions make such analyses difficult (see also responses to reviewers 1 and 2). For now, we will defer additional analyses to future work.

      (2) The performance analysis in Figure 2D seems to show that for particular proteins "less is more" when it comes to which datasets are best to derive the matrix from (CYP2C9, ASPA, PRKN). Are there any features (direct or proxy), that would allow to group proteins to maximize accuracy? Do the authors think on top of the buried vs exposed paradigm, another grouping dimension at the protein/domain level could improve performance?

      We don’t currently know if any protein- or domain-level features could be used to further split residues into useful categories for constructing new substitution matrices, but it is an interesting suggestion. We note that every substitution matrix consists of 380 averages, and creating too many residue groupings will cause some matrix entries to be averaged over very few abundance scores, at least with the current number of scores in the pooled VAMP-seq dataset. For example, while previous work has shown different mutational effects e.g. in helices and sheets (as one would expect), we find that a model with six matrices ({buried,exposed}x{helix,sheet,other}) does not lead to improved predictions (Fig. 2C), presumably because of an unfavourable balance between parameters and data.

      (3) While the matrices and Rosetta seem to show similar degrees of correlation, do the methods both fail and succeed on the same variants? Or do they show a degree of orthogonality and could potentially be synergistic?

      These are good questions and are related to similar questions from reviewers 1 and 2. In the revised manuscript, we have added additional analyses of differences between predictions from our substitution matrix model and a stability model, and we indeed find that the two methods show a degree of orthogonality. However, since we identify only relatively few residues for which one method performs better than the other, we don’t expect a synergistic model to outperform the stability predictor across all variants in any of the six proteins.  

      Overall, this work presents a valuable contribution by creatively utilizing a simple concept through cutting-edge datasets, which could be useful in various applications.

      Reviewing Editor:

      As discussed in more detail below, to strengthen the assessment, the authors are encouraged to:

      (1) Include more thorough statistical analyses, such as confidence intervals or standard errors, to better validate key claims (e.g., RMSD comparisons).

      (2) Perform a deeper comparison between substitution response matrices and ΔΔG-based predictions to uncover areas of agreement or orthogonality.

      (3) Clarify the relationship between structural features, stability, and abundance to provide more mechanistic insights.

      As discussed above and below, we have added new analyses and clarifications to the revised manuscript.

      Reviewer #1 (Recommendations for the authors):

      Minor points:

      Why is a continuous version of the contact number used here, instead of a discrete count of neighbouring residues? WCN values of the residues in the core domain can be affected by residues far away (small contribution but not strictly zero; if there are many of them, it adds up).

      We have previously found WCN, which quantifies residue contact numbers in a continuous manner, to be a useful input feature for a classifier that determines whether individual residues are important for maintaining protein abundance or function (Cagiada et al, 2023). We have also found WCN and the cellular abundance of single substitution variants to correlate well in individual analyses of different proteins (Grønbæk-Thygesen et al., 2024; Gersing et al., 2024; Clausen et al., 2024).

      We have calculated the WCN as well as a contact number based on discrete counts of neighbouring residues for the six proteins in our dataset. When distances between residues are evaluated in the same way (i.e. using the shortest distance between any pair of heavy atoms in the side chains), and when the cutoff value used for the discrete count is equal to the r<sub>0</sub> of the WCN function, the continuous and discrete evaluations of residue contact numbers are highly and linearly correlated, and their rank correlations with the VAMP-seq data are very similar. We observe only minor contributions to the WCN from residues far away in the structure.
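
      The contrast between the two contact-number definitions can be sketched as follows. This is an illustrative toy example only: the sigmoidal switching function, its midpoint r0 = 7 Å, and the distance values are assumptions, not the functional form or parameters used in the manuscript (those are given in its methods section).

```python
import math

def wcn(distances, r0=7.0, steepness=1.0):
    # Continuous contact number: each neighbour contributes a weight in (0, 1)
    # from a smooth switching function centred at r0 (sigmoidal form assumed
    # here for illustration; the manuscript's own functional form may differ).
    return sum(1.0 / (1.0 + math.exp(steepness * (d - r0))) for d in distances)

def discrete_cn(distances, cutoff=7.0):
    # Discrete contact number: count neighbours strictly within the cutoff.
    return sum(1 for d in distances if d < cutoff)

# Hypothetical side-chain heavy-atom distances (in Angstrom) from one residue
# to all other residues in a toy structure.
dists = [3.2, 4.8, 6.5, 7.4, 9.0, 15.0, 30.0]
```

      With the same cutoff and midpoint, the two measures track each other closely: residues near the cutoff contribute fractionally to the WCN but 0 or 1 to the discrete count, while distant residues contribute essentially nothing, consistent with the minor long-range contributions noted above.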

      Typos in SI figure captions e.g. Figure S8-11 "All predictions were performed using using...."

      Thank you for pointing this out. We have corrected the typos in Figure S8-11 (Figure S7-S10 in the revised manuscript).

      Personally, I'd appreciate a definition of these new substitution matrices under the constraints of rASA/WCN values. It was unclear to me until I read the code, but I think the definition is averaging the substitution matrix based on the clusters the residues are assigned to. If so, this could be straightforwardly defined in the method section with a Heaviside step function.

      We have added a definition of the “buried” and “exposed” substitution matrices as a function of rASA in the methods section (“Definitions of buried and exposed residues” and “Definition of substitution matrices”) of the manuscript, as well as a definition of how we classified residues as either buried or exposed using both rASA and WCN as input. Our final substitution matrices, as shown in e.g. Fig. 2, do not depend on the WCN; only the substitution matrix results in Figure S6 (Figure S20 in the revised manuscript) depend on both WCN and rASA.
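
      As a concrete illustration of this kind of definition, the sketch below averages abundance scores into a "buried" and an "exposed" matrix via a step function on rASA. The cutoff of 0.25 and the record layout are assumptions chosen for the example; the thresholds actually used, and the WCN-based variant of the classification, are described in the methods section of the manuscript.

```python
from collections import defaultdict

RASA_CUTOFF = 0.25  # assumed burial threshold, for illustration only

def build_matrices(records, cutoff=RASA_CUTOFF):
    # Each record is (wt_aa, mut_aa, rASA_of_site, abundance_score).
    # A step function on rASA assigns every site to one class, and each matrix
    # entry is the mean abundance score over all variants in that class.
    sums = {"buried": defaultdict(float), "exposed": defaultdict(float)}
    counts = {"buried": defaultdict(int), "exposed": defaultdict(int)}
    for wt, mut, rasa, score in records:
        cls = "buried" if rasa < cutoff else "exposed"
        sums[cls][(wt, mut)] += score
        counts[cls][(wt, mut)] += 1
    return {cls: {sub: sums[cls][sub] / counts[cls][sub] for sub in sums[cls]}
            for cls in sums}
```

      Entries with few contributing variants are then simply averages over few scores, which is the parameter/data balance issue discussed in the response to reviewer 3 above.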

      Reviewer #2 (Recommendations for the authors):

      The following suggestions aim to strengthen the analysis and clarify the presentation of your findings:

      (1) Specific analyses to consider:

      (1.1) Analyze buried positions where the exposed matrix performs better. Understanding these cases might reveal properties of protein core regions that show unexpected mutational tolerance.

      We agree with the reviewer that a more detailed analysis of buried residues with exposed-like substitution profiles would be very interesting.

      We note that for proteins where the VAMP-seq score distribution is shifted towards high values (as is the case for PTEN, TPMT and CYP2C9), our identification of such residues may be a result of the score distribution differences between the six datasets. To confidently identify mutationally tolerant core regions, it would be best to (a) correct for the distribution differences prior to the analysis or (b) focus the analysis on residues that fall far below the diagonal in Figure S18.

      In additional data (which can be found at https://github.com/KULL-Centre/_2024_Schulze_abundance-analysis), we provide, for each of the proteins, a list of buried residues for which RMSD<sub>exposed</sub> < RMSD<sub>buried</sub> (for more than 95% of resampled substitution profiles, as described under 1.6). We have not analysed these residues further.

      (1.2) A systematic comparison of matrix-based vs. ΔΔG-based predictions could help understand both exposed sites that behave as buried (as analyzed in the paper) and buried sites that behave as exposed (1.1), potentially revealing mechanisms underlying abundance changes.

      In our revised manuscript, we have added additional analyses to compare matrix-based and ΔΔG-based predictions, focusing on exposed sites for which one prediction method captures variant effects on abundance considerably better than the other. We have not investigated buried sites with exposed-like behaviour any further in this work.

      (1.3) Explore different normalization approaches when pooling data across proteins. In particular, consider using log(abundance score): if the experimental error in abundance measurements is multiplicative (which can be checked from the reported standard errors), then log transformation would convert this into a constant additive error, making the analysis more statistically sound.

      As we answer below under point 2.2, the abundance scores are, within each dataset, min-max normalised to nonsense and synonymous variant scores, and the score scale is thus consistent across the six datasets. We have explained above and in the revised manuscript that abundance score distribution differences across datasets are likely partially a result of the FACS binning of assay-specific variant libraries. Using only the VAMP-seq scores (that is, without further information about the individual experiments), we cannot correct for the influence of the sorting strategy on the reported scores. A score normalisation across datasets that places all data points on a single scale would require inter-dataset reference variant scores, which we do not have. We note that in a subsequent manuscript (Schulze et al, bioRxiv, 2025) we have attempted to take system- and experiment-specific score distributions into account. We now refer to this work in the revised manuscript.

      (1.4) Consider using correlation coefficients between predicted and observed abundance profiles as an alternative to RMSD, which is sensitive to the absolute values of the scores.

      We agree with the reviewer that using correlation coefficients to compare substitution profiles might also be useful, in particular for datasets with relatively unique VAMP-seq score distributions, such as the ASPA dataset. To explore this idea, we have repeated the analysis presented in Fig. S18 using the Pearson correlation coefficient r rather than the RMSD.

      As in Fig. S18, we derive r<sub>buried</sub> and r<sub>exposed</sub> for every residue in the six proteins, specifically by calculating r between the abundance score substitution profile of every individual residue and the average abundance score substitution profiles of buried and exposed residues. VAMP-seq data for the protein for which r<sub>buried</sub> and r<sub>exposed</sub> are evaluated is omitted from the calculation of average abundance score substitution profiles, and we use only monomer structures to determine whether residues are buried or exposed. 

      We show the results of this analysis in Author response image 1 below. In each panel of the figure, r<sub>buried</sub> and r<sub>exposed</sub> are shown for individual residues of a single protein. Blue datapoints indicate residues that are solvent-exposed in the wild-type protein structures, and yellow datapoints indicate residues that are buried in the wild-type structures. Residues for which it is not the case that r<sub>buried</sub> < r<sub>exposed</sub> or r<sub>exposed</sub> < r<sub>buried</sub> in more than 95% of 1000 resampled residue substitution profiles (see explanation of resampling method above) are coloured grey. “Acc.” is the balanced classification accuracy, calculated using all non-grey datapoints, indicating how many buried residues have buried-like substitution profiles (r<sub>exposed</sub> < r<sub>buried</sub>) and how many solvent-exposed residues have exposed-like substitution profiles (r<sub>buried</sub> < r<sub>exposed</sub>). The classification accuracy per protein in this figure cannot be compared to the classification accuracy of the same protein in Fig. S18, since the number of datapoints used in the accuracy calculation differs between the r- and RMSD-based analyses.

      Author response image 1.

      Comparing the r-based approach to the RMSD-based approach (Fig. S18), it is clear that the r-based method is less robust than the RMSD-based method for noisy and incomplete datasets. For the noisiest and most mutationally incomplete VAMP-seq datasets (i.e., PTEN, TPMT and CYP2C9) (Fig. 1), there are relatively few residues for which we can determine with high confidence whether the substitution profile is more buried- or more exposed-like. When the VAMP-seq data is less noisy and has high mutational completeness, the r-based method becomes more robust and may thus be relevant in potential future work on new VAMP-seq data with small error bars.

      In conclusion, we find that the RMSD-based approach to comparing substitution profiles is more robust than an r-based approach for several of the VAMP-seq datasets included in our analysis. We do believe that an approach based on the correlation coefficient, or potentially several metrics, could be relevant to use, since abundance score distributions can differ significantly across VAMP-seq datasets. So as not to increase the length of the main text of our manuscript, we have not added this analysis to the revised manuscript.
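
      The complementary sensitivities of the two metrics can be illustrated with toy profiles (invented for this example, not taken from the datasets): a uniform shift of a substitution profile leaves the Pearson r unchanged but is fully penalised by the RMSD.

```python
import math

def rmsd(a, b):
    # Root-mean-square deviation between two equal-length profiles.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson_r(a, b):
    # Pearson correlation coefficient between two equal-length profiles.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

reference = [0.1, 0.3, 0.5, 0.7, 0.9]   # toy average substitution profile
shifted = [x + 0.3 for x in reference]  # same shape, offset score scale
```

      Here r(shifted, reference) is exactly 1 while the RMSD is 0.3, which is why an r-based comparison can be attractive for datasets with shifted score distributions, at the cost of robustness to noise in short or incomplete profiles.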

      (1.5) Consider treating missing abundance scores as zero values, as they might indicate variants with very low abundance, rather than omitting them from the analysis.

      This suggestion would be most relevant for the PTEN, TPMT and CYP2C9 datasets, which all have a relatively small average mutational depth and completeness, as shown in Fig. 1B and 1C. To assess whether setting missing abundance scores to zero would be reasonable, we have compared the distributions of predicted ΔΔG values (from RaSP and ThermoMPNN) and of predicted abundance scores (from our exposure-based substitution matrices) for variants with reported and missing VAMP-seq data. We show the result in Author response image 2, with data aggregated across the six protein systems:

      Author response image 2.

      We find that variants with and without VAMP-seq data have similar ΔΔG score distributions and similar predicted abundance score distributions, and there is thus no clear enrichment of predicted loss of abundance for variants with missing VAMP-seq scores. This suggests that missing abundance scores do not necessarily indicate very low abundance. One cause of missing data might instead be problems with library generation (Matreyek et al, 2018, 2021).

      We show in Fig. S9 (Fig. S8 of the revised manuscript) that predicted scores for variants with experimental abundance scores of 0 are often overestimated for NUDT15, ASPA and PRKN, but this is less of a problem for PTEN, TPMT and CYP2C9, the datasets with the most missing scores. The lack of an enrichment of low-abundance variants from the various predictors thus still supports that missing scores do not necessarily indicate low abundance.

      (1.6) Develop a proper statistical framework for comparing buried vs exposed predictions (whether using RMSD or correlations), including confidence intervals, rather than using arbitrary thresholds.

      As explained above and in the methods section of our revised manuscript, we have expanded our approach to compare the substitution profile of a residue to the average profiles of buried and exposed residues, and our method now accounts for the noise in the VAMP-seq data, making the analysis more statistically rigorous. In our expanded approach, we compare the substitution profiles of individual residues to the average profiles for buried and exposed residues 10,000 times per residue to get a residue-specific distribution of RMSD<sub>buried</sub> and RMSD<sub>exposed</sub> values. Individual RMSD<sub>buried</sub> and RMSD<sub>exposed</sub> values are calculated by resampling abundance scores from a Gaussian distribution defined by the experimentally reported abundance score and abundance score standard deviation per variant. We now only report a residue to have e.g. a buried-like substitution profile if RMSD<sub>buried</sub> < RMSD<sub>exposed</sub> in at least 95% of our samples. We do not recalculate average scores in substitution matrices for this analysis. We have updated the plots in our manuscript, e.g. in Fig. S18 and S19 of the revised version, to indicate which residues are confidently classified as buried- or exposed-like.
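
      A minimal sketch of this resampling criterion is given below. The profile lengths, reference profiles, and function names are illustrative assumptions (the actual analysis uses full substitution profiles and the matrix-derived buried/exposed averages described in the methods section):

```python
import math
import random

def rmsd(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def classify_residue(means, sds, buried_avg, exposed_avg,
                     n_samples=10_000, threshold=0.95, seed=0):
    # Resample the residue's substitution profile from per-variant Gaussians
    # (experimental mean and standard deviation), and call the residue
    # buried- or exposed-like only if the corresponding RMSD is smaller in at
    # least `threshold` of the samples; otherwise return None (unclassified).
    rng = random.Random(seed)
    wins_buried = 0
    for _ in range(n_samples):
        sample = [rng.gauss(m, s) for m, s in zip(means, sds)]
        if rmsd(sample, buried_avg) < rmsd(sample, exposed_avg):
            wins_buried += 1
    frac = wins_buried / n_samples
    if frac >= threshold:
        return "buried"
    if frac <= 1.0 - threshold:
        return "exposed"
    return None
```

      A residue whose profile sits near the midpoint of the two references, or whose scores carry large experimental errors, ends up unclassified rather than being forced into one category.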

      (2) Presentation improvements:

      (2.1) In Figure 4, consider removing the average abundance scores, which are not directly related to the RMSD comparison being shown.

      We have decided to keep the average abundance scores in Fig. 4 (now Fig. 5), as we find the average abundance scores useful for guiding interpretation of the RMSD values. For example, an unusually small average abundance score with a relatively small standard deviation may explain a case where RMSD<sub>buried</sub> and RMSD<sub>exposed</sub> are both large. This is for example the case for residue G185 in ASPA. 

      In our preprint, the error bars on the average abundance scores in Fig. 4 (now Fig. 5) indicated the standard deviation across the abundance scores that were used to calculate the average per position. We have removed these error bars in the revised manuscript, as we realised that these were not necessarily helpful to the reader.

      (2.2) I am assuming that abundance scores are defined as the ratio abundance_variant/abundance_wt throughout the analysis, but I don't think this has been explicitly defined. If this is correct, please state it explicitly. In such case, log(abundance_score) would have a simple interpretation as the difference in abundance between variant and wild-type.

      Abundance scores are defined throughout the manuscript as sequence-based scores that have been min-max normalised to the abundance of nonsense and synonymous variants, i.e. abundance_score = (abundance_variant – abundance_nonsense)/(abundance_wt – abundance_nonsense). We have described the normalisation of scores to wild-type and nonsense variant abundance in lines 164-166 of the original manuscript. We have now added additional information about the normalisation scheme in the methods section. We note that we did not ourselves apply this normalisation to the data; the scores were reported in this manner in the original publications that reported the VAMP-seq experiments for the six proteins.

      (2.3) Consider renaming "rASA" to the more commonly used "RSA" for relative solvent accessibility.

      We have decided to keep using “rASA” throughout the manuscript.

      (2.4) The weighted contact number function used differs from the established WCN measure (Σ1/rij²) introduced by Lin et al. (2008, Proteins). This should be acknowledged and the choice of alternative weighting scheme justified.

      As we have also responded to the first minor point of reviewer 1, we have previously found WCN, as it is defined in our manuscript, to be a useful input feature for a classifier that determines whether individual residues are important for maintaining protein abundance or function (Cagiada et al, 2023). We have also previously found this type of WCN to correlate well with variant abundance of individual proteins, as measured with VAMP-seq or protein fragment complementation assays (Grønbæk-Thygesen et al., 2024; Clausen et al., 2024; Gersing et al., 2024). We acknowledge that residue contact numbers or weighted contact numbers could also be expressed in other ways and that alternative contact number definitions would likely also produce values that correlate well with VAMP-seq data. Since the WCN, as defined in our manuscript, already correlates relatively well with abundance scores, we have not explored whether alternative definitions produce better correlations.  

      (2.5) Replace the phrase "in the above" with specific references to sections or simply "above" where appropriate. Also, consider replacing many instances of "moreover" with simpler alternatives such as "also" or "in addition" to improve readability.

      We have changed several sentences according to this suggestion and hope that we have improved the readability of our manuscript.

      Reviewer #3 (Recommendations for the authors):

      (1) It should be explicitly confirmed earlier that complex structures are used for NUDT15 and ASPA when assessing rASA/WCN. Additionally, it would be interesting to see the effect that deriving the matrices using NUDT15 and ASPA monomers would have.

      We have commented on the use of NUDT15 and ASPA homodimer structures earlier in the revised manuscript (specifically in the subsection “Abundance scores correlate with the degree of residue solvent-exposure”).

      When residues are classified using monomer rather than dimer structures of NUDT15 and ASPA, there is a small effect on the resulting “buried” and “exposed” substitution matrices. Entries in this set of substitution matrices calculated using either monomer or dimer structures typically differ by less than 0.05, and only a single entry differs by more than 0.1. As expected, the “exposed” matrix tends to contain slightly larger numbers when derived from dimer structures than when derived from monomer structures, meaning that when the interface residues are included in the exposed residue category, the average abundance scores of the “exposed” matrix are lowered. For buried residues, the picture is more mixed, although the overall tendency is that the interface residues make the “buried” matrix contain smaller average abundance scores for dimer compared to monomer structures. These results generally support the use of dimer structures for the residue classification.

      Here we show the differences between the substitution matrices calculated with dimer or monomer structures of NUDT15 and ASPA, using data for all six proteins in our combined VAMP-seq dataset (average_abundance_score_difference = average_abundance_score_dimers – average_abundance_score_monomers):

      Author response image 3.

      We have not explored these alternative matrices further.

      (2) While the supplemental analyses are rigorous, the abundance of various metrics being presented can be confusing, especially when they seem to differ in their results. For instance, the discussion of Figure S17 (paragraph starting 428) contains mentions of mean differences but then switches to correlations, while both are presented for all panels. The claim "The datasets thus mainly differ due to differences in substitution effects in buried environments." is well supported by the observed mean differences, but for Pearson's correlations the average panel A, B values of buried 0.421 vs exposed 0.427 are hardly different. Which of the metrics is more meaningful, and are both needed?

      We agree with the reviewer that the claim that “The datasets thus mainly differ due to differences in substitution effects in buried environments” is not well-supported by the r between the substitution matrices, and we have removed this claim from the text.

      Since some datasets share VAMP-seq score distribution features, while others do not, the absolute difference between scores or matrices may be relevant to check for some dataset pairs, while the r may be more relevant to check for other dataset pairs. Hence, we have included both metrics in Fig S17 (Fig S11 in the revised manuscript).

      (3) Lines 337-340 - does not feel like S7 is the topic, perhaps the authors meant Figure 2A, B? In general, the supplemental figure references are out of order and panel combinations are sometimes confusing.

      We have corrected the figure references and rearranged the supplemental figures so that they now occur in the correct order. We have also reviewed the panel combinations with clarity in mind, and hope that the current set of main and supplementary figures balances overview and detail.

      (4) Line 363 "are also are also".

      We have corrected this typo.

    1. TIPAR grew out of real needs, out of conversations with pet owners, sponsors, and animal welfare organisations. Every feature was tested in practice, improved, and only then released.

      TIPAR was not designed on a drawing board; it emerged from concrete questions and uncertainties. In many conversations with pet owners it quickly became clear where supposedly "well-organised" provisions fail in reality: the information is often there, but in an emergency others cannot find it. This is exactly why animal shelters are full of affected animals whose owners have had accidents, fallen seriously ill, or were suddenly unable to return.

      That is why every TIPAR feature was developed, tested, and further improved along real-life situations. Only what holds up in everyday life, and in emergencies, becomes part of the whole.

    1. Author response:

      Public Reviews:

      Reviewer #1 (Public review):

      The study analyzes the gastric fluid DNA content identified as a potential biomarker for human gastric cancer. However, the study lacks overall logicality, and several key issues require improvement and clarification. In the opinion of this reviewer, some major revisions are needed:

      (1) This manuscript lacks a comparison of gastric cancer patients' stages with PN and N+PD patients, especially T0-T2 patients.

      We are grateful for this astute remark. A comparison of gfDNA concentration among the diagnostic groups indicates a trend of increasing values as the diagnosis progresses toward malignancy. The observed values for the diagnostic groups are as follows:

      Author response table 1.

      The chart below presents the statistical analyses of the same diagnostic/tumor-stage groups (One-Way ANOVA followed by Tukey’s multiple comparison tests). It shows that gastric fluid gfDNA concentrations gradually increase with malignant progression. We observed that the initial tumor stages (T0 to T2) exhibit intermediate gfDNA levels, which in this group is significantly lower than in advanced disease (p = 0.0036), but not statistically different from non-neoplastic disease (p = 0.74).

      Author response image 1.

      (2) The comparison between gastric cancer stages seems only to reveal the difference between T3 patients and early-stage gastric cancer patients, which raises doubts about the authenticity of the previous differences between gastric cancer patients and normal patients, whether it is only due to the higher number of T3 patients.

      We appreciate the attention to detail regarding the numbers analyzed in the manuscript. Importantly, the results are meaningful because the number of subjects in each group is comparable (T0-T2, N = 65; T3, N = 91; T4, N = 63). The mean gastric fluid gfDNA values (ng/µL) increase with disease stage (T0-T2: 15.12; T3-T4: 30.75), and both are higher than the mean gfDNA values observed in non-neoplastic disease (10.81 ng/µL for N+PD and 10.10 ng/µL for PN). These subject numbers in each diagnostic group accurately reflect real-world data from a tertiary cancer center.

      (3) The prognosis evaluation is too simplistic, only considering staging factors, without taking into account other factors such as tumor pathology and the time from onset to tumor detection.

      Histopathological analyses were performed throughout the study not only for the initial diagnosis of tissue biopsies, but also for the classification of Lauren’s subtypes, tumor staging, and the assessment of the presence and extent of immune cell infiltrates. Regarding the time of disease onset, this variable is, by definition, unknown at the time of a diagnostic EGD. While the prognosis definition is indeed straightforward, we believe that a simple, cost-effective, and practical approach is advantageous for patients across diverse clinical settings and is more likely to be effectively integrated into routine EGD practice.

      (4) The comparison between gfDNA and conventional pathological examination methods should be mentioned, reflecting advantages such as accuracy and patient comfort.

      We wish to reinforce that EGD, along with conventional histopathology, remains the gold standard for gastric cancer evaluation. EGD under sedation is routinely performed for diagnosis, and the collection of gastric fluids for gfDNA evaluation does not affect patient comfort. Thus, while gfDNA analysis was evidently not intended as a diagnostic EGD and biopsy replacement, it may provide added prognostic value to this exam.

      (5) There are many questions in the figures and tables. Please match the Title, Figure legends, Footnote, Alphabetic order, etc.

      We are grateful for these comments and apologize for the clerical oversight. All figures, tables, titles and figure legends have now been double-checked.

      (6) The overall logicality of the manuscript is not rigorous enough, with few discussion factors, and cannot represent the conclusions drawn.

      We assume that the unusually worded remark regarding “overall logicality” pertains to the rationale and/or reasoning of this investigational study. Our working hypothesis was that during neoplastic disease progression, tumor cells continuously proliferate and, depending on various factors, attract immune cell infiltrates. Consequently, both tumor cells and immune cells (as well as tumor-derived DNA) are released into the fluids surrounding the tumor at its various locations, including blood, urine, saliva, gastric fluids, and others. Thus, increases in DNA levels within some of these fluids have been documented and are clinically meaningful. The concurrent observation of elevated gastric fluid gfDNA levels and immune cell infiltration supports the hypothesis that increased gfDNA—which may originate not only from tumor cells but also from immune cells—could be associated with better prognosis, as suggested by this study of a large real-world patient cohort.

      In summary, we thank Reviewer #1 for his time and effort in a constructive critique of our work.

      Reviewer #2 (Public review):

      Summary:

      The authors investigated whether the total DNA concentration in gastric fluid (gfDNA), collected via routine esophagogastroduodenoscopy (EGD), could serve as a diagnostic and prognostic biomarker for gastric cancer. In a large patient cohort (initial n=1,056; analyzed n=941), they found that gfDNA levels were significantly higher in gastric cancer patients compared to non-cancer, gastritis, and precancerous lesion groups. Unexpectedly, higher gfDNA concentrations were also significantly associated with better survival prognosis and positively correlated with immune cell infiltration. The authors proposed that gfDNA may reflect both tumor burden and immune activity, potentially serving as a cost-effective and convenient liquid biopsy tool to assist in gastric cancer diagnosis, staging, and follow-up.

      Strengths:

      This study is supported by a robust sample size (n=941) with clear patient classification, enabling reliable statistical analysis. It employs a simple, low-threshold method for measuring total gfDNA, making it suitable for large-scale clinical use. Clinical confounders, including age, sex, BMI, gastric fluid pH, and PPI use, were systematically controlled. The findings demonstrate both diagnostic and prognostic value of gfDNA, as its concentration can help distinguish gastric cancer patients and correlates with tumor progression and survival. Additionally, preliminary mechanistic data reveal a significant association between elevated gfDNA levels and increased immune cell infiltration in tumors (p=0.001).

      Reviewer #2 has conceptually grasped the overall rationale of the study quite well, and we are grateful for their assessment and comprehensive summary of our findings.

      Weaknesses:

      (1) The study has several notable weaknesses. The association between high gfDNA levels and better survival contradicts conventional expectations and raises concerns about the biological interpretation of the findings.

      We agree that this would be the case if the gfDNA were derived solely from tumor cells. However, the findings presented here suggest that a fraction of this DNA is indeed derived from infiltrating immune cells. The precise origin of this increased gfDNA remains to be determined in follow-up studies, which we plan to conduct soon by applying DNA- and RNA-sequencing methodologies and deconvolution analyses.

      (2) The diagnostic performance of gfDNA alone was only moderate, and the study did not explore potential improvements through combination with established biomarkers. Methodological limitations include a lack of control for pre-analytical variables, the absence of longitudinal data, and imbalanced group sizes, which may affect the robustness and generalizability of the results.

      Reviewer #2 is correct that this investigational study was not designed to assess the diagnostic potential of gfDNA. Instead, its primary contribution is to provide useful prognostic information. In this regard, we have not yet explored combining gfDNA with other clinically well-established diagnostic biomarkers. We do acknowledge this current limitation as a logical follow-up that must be investigated in the near future.

      Moreover, we collected a substantial number of pre-analytical variables within the limitations of a study involving over 1,000 subjects. Longitudinal samples and data were not analyzed here, as our aim was to evaluate prognostic value at diagnosis. Although the groups are imbalanced, this accurately reflects the real-world population of a large endoscopy center within a dedicated cancer facility. Subjects were invited to participate and enter the study before sedation for the diagnostic EGD procedure; thus, samples were collected prospectively from all consenting individuals.

      Finally, to maintain a large, unbiased cohort, we did not attempt to balance the groups, allowing analysis of samples and data from all patients with compatible diagnoses (please see Results: Patient groups and diagnoses).

      (3) Additionally, key methodological details were insufficiently reported, and the ROC analysis lacked comprehensive performance metrics, limiting the study's clinical applicability.

      We are grateful for this useful suggestion. In the current version, each ROC curve (Supplementary Figures 1A and 1B) now includes the top 10 gfDNA thresholds, along with their corresponding sensitivity and specificity values (please see Suppl. Table 1). The thresholds are ordered from best to worst based on the classic Youden’s J statistic, as follows:

      Youden Index = specificity + sensitivity – 1 [Youden WJ. Index for rating diagnostic tests. Cancer 3:32-35, 1950. PMID: 15405679]. We have made an effort to provide all the key methodological details requested, but we would be glad to add further information upon specific request.
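      For illustration, a minimal sketch of this ranking (with made-up score values, not the study's gfDNA measurements, and assuming higher values indicate disease) could read:

```python
# Hypothetical sketch of ranking diagnostic thresholds by Youden's J
# (J = sensitivity + specificity - 1); the values below are illustrative.

def youden_rank(scores_pos, scores_neg, thresholds):
    """Return (threshold, sensitivity, specificity, J) ordered best to worst."""
    ranked = []
    for t in thresholds:
        sens = sum(s >= t for s in scores_pos) / len(scores_pos)
        spec = sum(s < t for s in scores_neg) / len(scores_neg)
        ranked.append((t, sens, spec, sens + spec - 1))
    return sorted(ranked, key=lambda r: r[3], reverse=True)

cases    = [3.1, 2.8, 2.2, 1.9, 1.7]   # illustrative values, disease group
controls = [1.2, 1.0, 1.5, 0.9, 1.8]   # illustrative values, control group
for t, sens, spec, j in youden_rank(cases, controls, [1.0, 1.5, 2.0]):
    print(f"threshold={t:.1f}  sens={sens:.2f}  spec={spec:.2f}  J={j:.2f}")
```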

    1. Author response:

      The following is the authors’ response to the original reviews

      We again thank the reviewers for their comments and recommendations. In response to the reviewers’ suggestions, we have performed several additional experiments, added additional discussion, and updated our conclusions to reflect the additional work. Specifically, we have performed additional analyses in female WT and Marco-deficient animals, demonstrating that the Marco-associated phenotypes observed in male mice (reduced adrenal weight, increased lung Ace mRNA and protein expression, unchanged expression of adrenal corticosteroid biosynthetic enzymes) are not present in female mice. We also report new data on the physiological consequences of increased aldosterone levels observed in male mice, namely plasma sodium and potassium titres, and blood pressure alterations in WT vs Marco-deficient male mice. In an attempt to address the reviewer’s comments relating to our proposed mechanism on the regulation of lung Ace expression, we additionally performed a co-culture experiment using an alveolar macrophage cell line and an endothelial cell line. In light of the additional evidence presented herein, we have updated our conclusions from this study and changed the title of our work to acknowledge that the mechanism underlying the reported phenotype remains incompletely understood. Specific responses to reviewers can be seen below.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      The investigators sought to determine whether Marco regulates the levels of aldosterone by limiting uptake of its parent molecule cholesterol in the adrenal gland. Instead, they identify an unexpected role for Marco on alveolar macrophages in lowering the levels of angiotensin-converting enzyme in the lung. This suggests an unexpected role of alveolar macrophages and lung ACE in the production of aldosterone.

      Strengths:

      The investigators suggest an unexpected role for ACE in the lung in the regulation of systemic aldosterone levels.

      The investigators suggest important sex-related differences in the regulation of aldosterone by alveolar macrophages and ACE in the lung.

      Studies to exclude a role for Marco in the adrenal gland are strong, suggesting an extra-adrenal source for the excess Marco observed in male Marco knockout mice.

      Weaknesses:

      While the investigators have identified important sex differences in the regulation of extrapulmonary ACE in the regulation of aldosterone levels, the mechanisms underlying these differences are not explored.

      The physiologic impact of the increased aldosterone levels observed in Marco -/- male mice on blood pressure or response to injury is not clear.

      The intracellular signaling mechanism linking lung macrophage levels with the expression of ACE in the lung is not supported by direct evidence.

      Reviewer #2 (Public Review):

      Summary:

      Tissue-resident macrophages are more and more thought to exert key homeostatic functions and contribute to physiological responses. In the report of O'Brien and Colleagues, the idea that the macrophage-expressed scavenger receptor MARCO could regulate adrenal corticosteroid output at steady-state was explored. The authors found that male MARCO-deficient mice exhibited higher plasma aldosterone levels and higher lung ACE expression as compared to wild-type mice, while the availability of cholesterol and the machinery required to produce aldosterone in the adrenal gland were not affected by MARCO deficiency. The authors take these data to conclude that MARCO in alveolar macrophages can negatively regulate ACE expression and aldosterone production at steady-state and that MARCO-deficient mice suffer from secondary hyperaldosteronism.

      Strengths:

      If properly demonstrated and validated, the fact that tissue-resident macrophages can exert physiological functions and influence endocrine systems would be highly significant and could be amenable to novel therapies.

      Weaknesses:

      The data provided by the authors currently do not support the major claim of the authors that alveolar macrophages, via MARCO, are involved in the regulation of a hormonal output in vivo at steady-state. At this point, there are two interesting but descriptive observations in male, but not female, MARCO-deficient animals, and overall, the study lacks key controls and validation experiments, as detailed below.

      Major weaknesses:

      (1) According to the reviewer's own experience, comparing C57BL/6J wild-type mice with knockout mice for which precise information about the genetic background and the history of breeding and crossing is lacking can lead to misinterpretation of the results obtained. Hence, MARCO-deficient mice should be compared with true littermate controls.

      (2) The use of mice globally deficient for MARCO combined with the fact that alveolar macrophages produce high levels of MARCO is not sufficient to prove that the phenotype observed is linked to alveolar macrophage-expressed MARCO (see below for suggestions of experiments).

      (3) If the hypothesis of the authors is correct, then additional read-outs could be performed to reinforce their claims: levels of Angiotensin I would be lower in MARCO-deficient mice, levels of Angiotensin II would be higher, arterial blood pressure would be higher, natremia would be higher, and kaliemia would be lower. In addition, co-culture experiments between MARCO-sufficient or -deficient alveolar macrophages and lung endothelial cells, combined with the assessment of ACE expression, would allow the authors to evaluate whether AM-expressed MARCO can directly regulate ACE expression.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      (1) Corticosterone levels in male Marco -/- mice are not significantly different, but there is (by eye) substantially more variability in the knockout compared to the wild type. A power analysis should be performed to determine the number of mice needed to detect a similar % difference in corticosterone to the difference observed in aldosterone between male Marco knockout and wild-type mice. If necessary the experiments should be repeated with an adequately powered cohort.

      Using a power calculator (www.gigacalculator.com), it was determined that our sample size of 13 was one short of that needed to detect a % difference in corticosterone similar to the difference detected in aldosterone. We regret that we were unable to perform additional measurements, as the reviewer suggested, in the available timeframe.
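      For transparency, the kind of calculation such a calculator performs can be sketched with a normal approximation to the two-sample t-test (a rough sketch; the standardized effect size of 1.0 below is a placeholder for illustration, not a value derived from our corticosterone data):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(n_per_group, effect_size, z_alpha=1.959964):
    """Approximate two-sided power to detect a standardized mean
    difference (normal approximation to the two-sample t-test)."""
    ncp = effect_size * math.sqrt(n_per_group / 2.0)
    return 1.0 - norm_cdf(z_alpha - ncp) + norm_cdf(-z_alpha - ncp)

# With a placeholder standardized effect of 1.0, n = 13 per group gives
# roughly 72% power, so a somewhat larger cohort would be needed to
# reach the conventional 80% threshold.
for n in (10, 13, 14, 20):
    print(n, round(power_two_sample(n, 1.0), 3))
```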

      (2) All of the data throughout the MS (particularly data in the lung) should be presented in male and female mice. For example, the induction of ACE in the lungs of Marco-/- female mice should be absent. Similar concerns relate to the dexamethasone suppression studies. Also would be useful if the single cell data could be examined by sex--should be possible even post hoc using Xist etc.

      Given the limitations outlined in our previous response to reviewers, it was not possible to repeat every experiment from the original manuscript. We were able to measure the expression of lung Ace mRNA, ACE protein, adrenal weights, adrenal expression of steroid biosynthetic enzymes, presence of myeloid cells, and levels of serum electrolytes in female animals. These are presented in Figures 1G, 3B, 4A, 4E, 4F, 4I, and 4J. We have elected not to present single-cell seq data by sex, as it did not indicate substantial differences between males and females in Marco or Ace expression and so does not substantively change our approach.

      (3) IF is notoriously unreliable in the lung, which has high levels of autofluorescence. This is the only method used to show ACE levels are increased in the absence of Marco. Orthogonal methods (e.g. immunoblots of flow-sorted cells, or ideally CITE-seq that includes both male and female mice) should be used.

      We used negative controls to guide our settings during acquisition of immunofluorescence images. In addition, we used qPCR to show an increase in Ace mRNA expression in the lung, complementing the protein-level data. These data were presented in the original manuscript and are further bolstered by the expression data for Ace mRNA and protein in female animals added in this revised manuscript.

      (4) Given the central importance of ACE staining to the conclusions, validation of the antibody should be included in the supplement.

      We do not have ACE-deficient mice, so we cannot perform knockout validation of the antibody. We did perform secondary-antibody-only controls, which confirmed that the signal observed is primary antibody-derived. Moreover, we specifically chose an anti-ACE antibody (Invitrogen catalogue # MA5-32741) that has undergone advanced verification by the manufacturer. We additionally tested the antibody in the brain and liver and observed no significant staining.

      Author response image 1.

      (5) The link between alveolar macrophage Marco and ACE is poorly explored.

      We carried out a co-culture experiment of alveolar macrophages and endothelial cells and measured ACE/Ace expression. This is presented in Figure 5D and the discussion.

      (6) Mechanisms explaining the substantial sex difference in the primary outcome are not explored.

      This is outside the scope of this project, though we would consider exploring such experiments in future studies.

      (7) Are there physiologic consequences either in homeostasis or under stress to the increased aldosterone (or lung ACE levels) observed in Marco-/- male mice?

      We measured blood electrolytes and blood pressure in Marco-deficient and Marco-sufficient mice. The results from these experiments are presented in Figures 4G-4M.

      Reviewer #2 (Recommendations For The Authors):

      Below is a suggestion of important control or validation experiments to be performed in order to support the authors' claims.

      (1) It is imperative to validate that the phenotype observed in MARCO-deficient mice is indeed caused by the deficiency in MARCO. To this end, littermate mice issued from the crossing between heterozygous MARCO +/- mice should be compared to each other. C57BL/6J mice can first be crossed with MARCO-deficient mice in F0, and F1 heterozygous MARCO +/- mice should be crossed together to produce F2 MARCO +/+, MARCO +/- and MARCO -/- littermate mice that can be used for experiments.

      We thank the reviewer for their comments. We recognise the concern of the reviewer but due to limited experimenter availability we are unable to undertake such a breeding programme to address this particular concern.

      (2) The use of mice in which AM, but not other cells, lack MARCO expression would demonstrate that the effect is indeed linked to AM. To this end, AM-deficient Csf2rb-deficient mice could be adoptively transferred with MARCO-deficient AM. In addition, the phenotype of MARCO-deficient mice should be restored by the adoptive transfer of wild-type, MARCO-expressing AM. Alternatively, bone marrow chimeras in which only the hematopoietic compartment is deficient in MARCO would be another option, albeit less specific for AM.

      We recognise the reviewer's concern. We carried out a co-culture experiment of alveolar macrophages and endothelial cells and measured ACE/Ace expression. This is presented in Figure 5D, and its implications are explored in the discussion.

      (3) If the hypothesis of the authors is correct, then additional read-outs could be performed to reinforce their claims: levels of Angiotensin I would be lower in MARCO-deficient mice, levels of Angiotensin II would be higher, arterial blood pressure would be higher, natremia would be higher, and kaliemia would be lower. Similar read-outs could also be performed in the models proposed in point (2).

      We measured blood electrolytes and blood pressure in Marco-deficient and Marco-sufficient mice. The results from these experiments are presented in Figures 4G-4M.

      (4) Co-culture experiments between MARCO-sufficient or deficient alveolar macrophages and lung endothelial cells, combined with the assessment of ACE expression, would allow the authors to evaluate whether the AM-expressed MARCO can directly regulate ACE expression.

      To address this concern we carried out a co-culture experiment as described above.

    1. Software Development Life Cycle (SDLC)

      The text in the bottom figure is too small. Perhaps reduce the size of the timeline and enlarge the bottom figure?

    2. "Code source et Logiciels"

      The text in the bottom image is barely legible. I suggest rebalancing the two figures by reducing the size of the timeline and enlarging the image with the bubbles.

    3. Sources : Violaine Louvet

      Source: Violaine Louvet, Grégory Miura. Introduction sur le code source, les logiciels. Accompagner la préservation et la diffusion des logiciels dans les établissements, Média Normandie; ADBU; Software Heritage, May 2023, Visioconférence, France. ⟨hal-04102897⟩

    1. On the Indocility of Working-Class Youth: An Analysis of Initial Vocational Training

      Executive summary

      This document synthesizes the work of Prisca Kergoat, sociologist and director of the CERTOP laboratory, as presented in her book De l’indocilité des jeunesses populaires. Apprenti.es et élèves de lycées professionnels (2022).

      The study challenges the traditional view of a working-class youth that is passive in the face of social domination.

      The analysis shows that vocational-track students and apprentices display a manifest indocility, characterized by a sociological sagacity that allows them to deconstruct the conditions of their training.

      The study highlights a shared experience of institutional humiliation at the moment of school-track assignment, unequal access to apprenticeship based on social capital and discrimination, and an acute sense of injustice in the face of the contradictory injunctions of the education system and the world of work.

      --------------------------------------------------------------------------------

      1. Methodological framework and foundations of the research

      The research relies on a robust methodology combining quantitative and qualitative approaches to capture the reality of working-class youth.

      Quantitative data: roughly 3,000 questionnaires distributed to students and apprentices (CAP and vocational baccalaureate levels).

      Qualitative data: 43 semi-structured interviews with girls and boys, as well as with teachers.

      Sectors studied:

      ◦ Heavily feminized tracks (hairdressing, beauty care, personal care).
      ◦ Heavily masculinized tracks (construction, car mechanics).
      ◦ A mixed track (commerce and sales).

      Central objective: to substitute for the concept of "docility" that of indocility, in order to describe these young people's capacity to act, autonomy of thought, and symbolic resistance to the constraints placed on them.

      --------------------------------------------------------------------------------

      2. School-track assignment: a vector of institutional humiliation

      Assignment to the vocational track is analyzed as a process of relegation that has changed profoundly since the 1990s.

      The changing profile of the students

      Today's education system produces a population characterized by the inseparability of working-class origin and academic difficulty. The statistics reveal a striking social determinism:

      • At a comparable academic level, a student of working-class origin is 93 times more likely to be assigned to the vocational track (seconde professionnelle).

      • This probability rises to 169 times for assignment to a CAP.

      The rhetoric of self-entrepreneurship

      The 1989 and 2018 reforms introduced the "rhetoric of the project," turning the student into an "entrepreneur of the self."

      This approach, borrowed from management, makes individuals solely responsible for their successes and failures, masking social determinisms behind the veil of merit.

      The lived experience of humiliation

      Humiliation is defined as "class contempt and shame of oneself." It is experienced even by those with a vocational attachment to the trade.

      Institutional legitimacy: unlike classroom bullying, this humiliation is perceived as "regulation-based" because it comes from the class council and is grounded in grades.

      Class judgment: it sets the students "worthy of continuing" against the others, durably stigmatizing the reassigned young people through exclusion from legitimate school culture.

      --------------------------------------------------------------------------------

      3. Access to the company: selection and social eviction

      The search for a company placement (internship or apprenticeship) constitutes a second tier of social selection, in which apprenticeship has become more valued but also more exclusive than the vocational school (lycée professionnel).

      Typology of search practices

      The survey identifies three distinct classes of contract-seeking:

      | Class | Typical profile | Search characteristics | Success factors |
      | --- | --- | --- | --- |
      | 1\. Quick access (31%) | Boys; parents from the stable fraction of the working classes (artisans, shopkeepers). | A single company contacted; search wrapped up in one day. | Capital of local rootedness (capital d'autochtonie): family network and direct acquaintance with an apprenticeship master. |
      | 2\. Velleity (56% of vocational-school students) | Very young people from the pauperized fractions, foreign or of immigrant background. | Very little active searching despite an initial wish for an apprenticeship. | Social lucidity: anticipation of discrimination and choice of the vocational school as a protective space. |
      | 3\. High mobilization | Girls and young people from the pauperized classes. | Up to 100 companies contacted over a 3-month period. | Random success despite massive investment. |

      The biased performance of apprenticeship

      The study shows that apprenticeship's better employment rates compared with the vocational school are due not to any intrinsic superiority of this mode of training, but to the prior eviction of the most fragile populations (girls, young people of immigrant background, precarious milieus) during company recruitment.

      --------------------------------------------------------------------------------

      4. Manifestations of indocility and consciousness of injustice

      Indocility manifests itself in these young people's capacity to identify and criticize the relations of domination to which they are subjected.

      Critique of teachers' double discourse: the young people perceive the hypocrisy of discourses that praise the vocational track while systematically pushing the "best" students toward the general track.

      Gender injunctions: girls report strong pressure to adopt the femininity codes of the intermediate classes (make-up, dress, language) in order to obtain and keep a company placement.

      The "theft" of youth: a recurring argument concerns the impossibility of prolonging their youth. At 14 or 15, definitive life choices are demanded of them, denying them the "luxury" of being adolescents.

      Contradictory injunctions: Désiré, a student quoted in the study, underlines the paradox of their status: treated as children at school (parental absence notes required) but ordered to behave as responsible, autonomous adults in the company.

      Conclusion

      Far from being passive actors consenting to their own domination, working-class students and apprentices deploy genuine sagacity in flushing out the injustices of the system.

      Their indocility is a rational response to a training apparatus which, under the guise of democratization and individual choice, continues to function as a powerful engine of selection and social stigmatization.

    1. eLife Assessment

      This study presents a valuable advance by enabling functional mapping of Ca²⁺ responses in live human pancreatic tissue slices, providing new opportunities to study islet heterogeneity and diabetes-related dysfunction in an intact tissue context. The evidence supporting the main conclusions is solid, based on reproducible methodology and functional validation across multiple human donor samples. Key revisions needed include clearer quantification of transduction efficiency and tissue viability, and improved clarification of how CaMPARI2 signals should be interpreted.

    2. Reviewer #2 (Public review):

      (1) The photoconversion protocol requires a more detailed and quantitative discussion. The current description ("5 s pulses for 5 min, leading to 2.5 min of total light delivery") is too brief to evaluate whether the chosen illumination parameters maintain the CaMPARI2 signal within its linear dynamic range. Because CaMPARI2 photoconversion reflects the time integral of 405 nm photoconverting light exposure in the presence of intracellular [Ca²⁺], the red/green fluorescence ratio is directly proportional to cumulative illumination time until saturation occurs. Previous characterization (PMID: 30361563) shows that photoconversion is approximately linear over the first 0-80 s of 405 nm exposure, after which red fluorescence plateaus. The total exposure used here (≈150 s) may therefore exceed the linear regime, potentially obscuring differences between cells with moderate versus strong Ca²⁺ activity. The authors should (i) justify the selected illumination parameters, (ii) provide evidence that the chosen conditions remain within the linear response range for the specific optical setup, and (iii) discuss how overexposure might affect quantitative interpretation of red/green ratios and comparisons between experimental groups. Inclusion of calibration data would substantially strengthen the methodological rigor and reproducibility of the study.
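      The linearity concern can be made concrete with a toy saturating model (the time constant below is invented for illustration, not a measured CaMPARI2 parameter):

```python
import math

def red_fraction(t_seconds, tau=120.0):
    """Toy saturating photoconversion curve: R(t) = 1 - exp(-t/tau).
    tau is an illustration constant, not a measured CaMPARI2 value."""
    return 1.0 - math.exp(-t_seconds / tau)

# The second 75 s of cumulative 405 nm light adds less red signal than
# the first 75 s once the curve bends, compressing differences between
# moderately and strongly active cells.
first = red_fraction(75)
second = red_fraction(150) - red_fraction(75)
print(second < first)  # True: sub-linear regime
```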

      (2) For Figure 8a (middle panels), the data points for 16G and KCl overlap, raising the possibility that the signal at 16G may already be saturated. The authors should comment on the potential for CaMPARI2 saturation at 16G and clarify whether this affects the interpretation of the KCl results: "At maximal stimulation by KCl, there was no size-function correlation (R = 0.15, p = 0.14)."

      (3) The term "calcium activity" is used throughout the manuscript but remains vague. Pancreatic islets typically display a biphasic Ca²⁺ response to high glucose (an initial sustained peak followed by repetitive oscillations), and these phases differ in both kinetics and physiological meaning. Ca²⁺ responses are usually quantified using parameters such as rise time, amplitude, and duration for the initial peak, and amplitude, frequency, burst duration, and duty cycle for the oscillatory phase. The authors should clarify how "calcium activity" is defined in their analyses and discuss the appropriateness of directly comparing Ca²⁺ signals with distinct temporal patterns.

      (4) The CaMPARI2 red/green ratio reflects the time-integral of 405 nm photoconverting light exposure in the presence of Ca²⁺, so two Ca²⁺ responses with the same duty cycle but different amplitudes could, in principle, yield the same red/green ratio. This raises an important question regarding how well the CaMPARI2 signal distinguishes differences in Ca²⁺ amplitude versus time spent above threshold. The authors should directly relate single-cell Ca²⁺ traces to corresponding red/green ratios to demonstrate the extent to which CaMPARI2 photoconversion truly reflects "Ca²⁺ activity." Such validation would clarify whether the metric is sensitive to variations in oscillation amplitude, duty cycle, or both, and would strengthen the interpretation of CaMPARI2-based functional comparisons.

    1. Reviewer #2 (Public review):

      Summary:

      Liu et al. use whole genome sequencing data from several strains of chicken as well as a subspecies of the chicken wild ancestor to study the impact of domestication on the recombination landscape. They analyze these data using several machine-learning/AI based methods, using simulation to partially inform their analysis. The authors claim to find substantial deviations in the fine-scale recombination landscape between breeds, and surprising patterns between recombination and introgression/selection. However, there are substantial inconsistencies between the authors' findings and the current understanding in the field, supported at best by indirect evidence that is hard to interpret.

      Strengths:

      The data produced by the authors of this and a previous paper is well-suited to answer the questions that they pose. The authors use simulations to support some decisions made in analyzing this data, which partially alleviates some potential questions, and could be extended to address additional concerns. Should further analysis support the claims currently made regarding hotspot turnover and introgression frequency vs. recombination rate, these findings would indeed be striking observations at odds with current understanding in the field.

      Weaknesses:

      I have several major concerns regarding the ability of the analyses to support the claims in this paper, summarized below.

      Substantial deviations from field-standard benchmarks in the estimated recombination landscape appear to have been disregarded, particularly with regard to the WL breed.

      o For example, the number of detected hotspots per subspecies ranges from maybe 500 to over 100,000 based on figure 2A. While the mean is indeed comparable to estimates from other species (lines 315-317), this characterization masks that each recombination map has far too few or too many hotspots to be biologically accurate (at least without substantial corroboration from more direct analyses). As such, statements about hotspot overlap between breeds and hotspot conservation cannot be taken at face value. Authors might consider using alternative methods to detect hotspots, assessing their power to detect hotspots in each breed, and evaluating hotspot overlap between breeds with respect to random expectation.

      o Furthermore, the authors consider the recombination landscape at promoters (Figure S10) and H3K4me3 sites (Figure 2C) and find that levels are slightly elevated, but the magnitude of the elevation (negligible to ~1.5x) is substantially lower than that of any other species studied to date without PRDM9. The magnitude of elevation for both comparisons is especially small for WL, which suggests that the recombination estimates for this breed are particularly noisy, and yet this breed is the focus of the introgression analysis.

      Introgression and strong selection can both be thought of as changing the local Ne along the genome. Estimating recombination from patterns of LD most directly estimates rho (the population recombination rate, 4*Ne*r), and disentangling local changes in Ne from local changes in r is non-trivial. Furthermore, selective sweeps, particularly easy-to-detect hard sweeps, are often characterized by having very little genetic variation. Estimating recombination rate from patterns of LD in regions with very little variation seems particularly challenging, and could bias results such as in Figure S15. The authors do not discuss the implications of these challenges for their analyses, which seems particularly relevant for their analyses of introgression and selection with recombination, as well as comparisons between WL (which the authors report to have undergone more selection and introgression) with other breeds. Authors should quantify their ability/power to detect recombination rates and hotspots under these conditions using simulation - some of these simulations are already mentioned in the paper, but are not analyzed in this way. Also useful would be quantifying the impact of simulated bottlenecks on estimates of recombination rate.
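      The Ne/r confound can be stated in one line: LD-based methods estimate the product rho = 4·Ne·r, so (with made-up numbers for illustration) very different Ne/r combinations are indistinguishable:

```python
# Sketch of the Ne/r confound in LD-based estimation: the estimated
# quantity is rho = 4 * Ne * r, so a small population with a high local
# recombination rate looks the same as a large population with a low one.
def rho(ne, r):
    return 4.0 * ne * r

low_ne_high_r = rho(1_000, 1e-7)
high_ne_low_r = rho(10_000, 1e-8)
print(abs(low_ne_high_r - high_ne_low_r) < 1e-15)  # True: same rho
```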

      In many analyses (e.g. hotspot and coldspot overlap, histone mark analysis), authors appear to use 1000 randomly selected regions of the same length as a control. If this characterization is accurate, authors should match the number of control regions to the number of features that they're comparing to. A more careful analysis might also select random regions from the same chromosome, match for GC content where appropriate, etc.

      Authors provide very little detail about the number/locations of coldspots or selective sweeps: how many were detected in each subspecies? Does the fraction of hotspots and coldspots which overlap selective sweeps vary between species? It is unclear whether the numbers in the text (lines 356-364) represent a single breed or an analysis across breeds.

    1. It has all been said. Europe's future is to "draw closer" to Africa. Let us draw closer to Black and Arab people in order to count.

      For Guetta, we are as at the end of the Second World War, building Europe after a terrible war... Whereas the war is in full swing and the Nazis are still here...

    1. According to his fellow board members, Wing was secretly transferring Nexperia's European activities to China. He allegedly had technology moved and wanted to dismiss nearly half of the European staff. That is why three board members, who had been dismissed after voicing criticism, went to the Enterprise Chamber (Ondernemingskamer) on 1 October.

      Nexperia's fellow board members stated that Wing was transferring Nexperia's work to China. After voicing criticism they were dismissed. They brought a case before the Enterprise Chamber.

    1. Jean-François Bayart is a first-rate master.

      Gramsci: the passive revolution. The ruling elite co-opts potential revolutionary leaders into its own ranks in order to have peace.

      The conservative revolution is passive... Engineers and jurists rather than intellectuals.

      The ethics of the bat! Technicized romanticism.

      The question of turning back after the Revolution... Rather than turning back, "wrest the revolution from the hands of the revolutionaries."

      But Bayart remains true to himself and stays on the left: the conservative revolution does indeed lead to Hitler and seeks to do away with the Enlightenment, and with peace, identity-driven stupidity leading to war.

      For Bayart, the conservative revolution is AT ONCE reactionary and modernist, and that is what characterizes it.

      "Ab dem zweiten Jahr: ab 19,95 € / Jahr" (From the second year: from €19.95 / year)

      Suggested wording: "Ab dem zweiten Jahr 19,95 € / Jahr" (From the second year, €19.95 / year).

      The second "ab" ("from") has to go, since we do not have a more expensive membership. Everything else is add-ons/goodies…

    2. "Zweitpate hinzufügen? Für mehr Sicherheit kannst du optional einen Zweitpaten benennen. Das kostet +5 € pro Jahr ab dem zweiten Jahr." (Add a second sponsor? For extra security you can optionally name a second sponsor. This costs +€5 per year from the second year.)

      Please remove the entire blue box... We should offer this only AFTER registration. The user can read about it under the add-ons anyway...

    1. Web Accessibility Annotations

      Website Chosen - Shopify Accessibility. Shopify is a digital brand that prioritizes inclusive and accessible web design. Analyzing this site aligns with the goals of EID because it demonstrates how accessibility principles are applied in real-world digital spaces.

      Accessibility Annotations Annotation 1 - Clear Heading Hierarchy Feature: Proper use of headings This page demonstrates good accessibility practice through a clear and logical heading structure. Properly organized headings allow users to navigate pages efficiently and understand how the content is structured.

      Annotation 2 - Plain and Inclusive Language Feature: Simple, readable text. The language used throughout the page is clear, direct, and easy to understand. This supports overall accessibility by ensuring information is understandable to a wide audience.

      Annotation 3 - High Colour Contrast Feature: Text contrast against Background The strong contrast between the text and background improves readability for users with low vision or colour blindness.

      Annotation 4 - Descriptive Links Feature: Meaningful link text. The links on the page use descriptive language rather than generic phrases like "click here". This is good accessibility practice because screen reader users can understand the purpose of each link without relying on surrounding context.

    1. Reviewer #3 (Public review):

      Summary:

      In their manuscript, Koch et al. describe a novel strategy to synchronize cells of the budding yeast Saccharomyces cerevisiae in metaphase I and metaphase II, thereby facilitating comparative analyses between these meiotic stages. This approach, termed SynSAC, adapts a method previously developed in fission yeast and human cells that enables the ectopic induction of a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC components upon addition of the plant hormone abscisic acid (ABA). This is a valuable tool, which has the advantage that it induces SAC-dependent inhibition of the anaphase-promoting complex without perturbing kinetochores. Furthermore, since the same strategy and yeast strain can also be used to induce a metaphase arrest during mitosis, the methodology developed by Koch et al. enables comparative analyses between mitotic and meiotic cell divisions. To validate their strategy, the authors purified kinetochores from meiotic metaphase I and metaphase II, as well as from mitotic metaphase, and compared their protein composition and phosphorylation profiles. The results are presented clearly and in an organized manner. Despite the relevance of both the methodology and the comparative analyses, several main issues should be addressed:

      (1) In contrast to the strong metaphase arrest induced by ABA addition in mitosis (Supp. Fig. 2), the SynSAC strategy only promotes a delay in metaphase I and metaphase II as cells progress through meiosis. This delay extends the duration of both meiotic stages, but does not markedly increase the percentage of metaphase I or II cells in the population at a given timepoint of the meiotic time course (Fig. 1C). Therefore, although SynSAC broadens the time window for sample collection, it does not substantially improve differential analyses between stages compared with a standard NDT80 prophase block synchronization experiment. Could a higher ABA concentration or repeated hormone addition improve the tightness of the meiotic metaphase arrest?

      (2) Unlike the standard SynSAC strategy, introducing mutations that prevent PP1 binding to the SynSAC construct considerably extended the duration of the meiotic metaphase arrests. In particular, mutating the PP1 binding sites in both the RVxF (RASA) and the SILK (4A) motifs of the Spc105(1-455)-PYL construct caused a strong metaphase I arrest that persisted until the end of the meiotic time course (Fig. 3A). This stronger and more prolonged 4A-RASA SynSAC arrest would directly address the issue raised above. It is unclear why the authors did not emphasize this improved system more. Indeed, the 4A-RASA SynSAC approach could be presented as the optimal strategy to induce a conditional metaphase arrest in budding yeast meiosis, since it not only adapts but also improves the original methods designed for fission yeast and human cells. Along the same lines, it is surprising that the authors did not exploit the stronger arrest achieved with the 4A-RASA mutant to compare kinetochore composition at meiotic metaphase I and II.

      (3) The results shown in Supp. Fig. 4C are intriguing and merit further discussion. Mitotic growth in ABA suggests that the RASA mutation silences the SynSAC effect, yet this was not observed for the 4A or the double 4A-RASA mutants. Notably, in contrast to mitosis, the SynSAC 4A-RASA mutation leads to a more pronounced metaphase I meiotic delay (Fig. 3A). It is also noteworthy that the RVAF mutation partially restores mitotic growth in ABA. This observation supports, as previously demonstrated in human cells, that Aurora B-mediated phosphorylation of S77 within the RVSF motif is important to prevent PP1 binding to Spc105 in budding yeast as well.

      (4) To demonstrate the applicability of the SynSAC approach, the authors immunoprecipitated the kinetochore protein Dsn1 from cells arrested at different meiotic or mitotic stages, and compared kinetochore composition using data-independent acquisition (DIA) mass spectrometry. Quantification and comparative analyses of total and kinetochore protein levels were conducted in parallel for cells expressing either FLAG-tagged or untagged Dsn1 (Supp. Fig. 7A-B). To better detect potential changes, protein abundances were next scaled to Dsn1 levels in each sample (Supp. Fig. 7C-D). However, it is not clear why the authors did not normalize protein abundance in the immunoprecipitations from tagged samples at each stage to the corresponding untagged control, instead of performing a separate analysis. This would be particularly relevant given the high sensitivity of DIA mass spectrometry, which enabled quantification of thousands of proteins. Furthermore, the authors compared protein abundances in tagged samples from mitotic metaphase and meiotic prophase, metaphase I and metaphase II (Supp. Fig. 7E-F). If protein amounts in each case were not normalized to the untagged controls, as inferred from the text (lines 333 to 338), the observed differences could simply reflect global changes in protein expression at different stages rather than specific differences in protein association with kinetochores.

      (5) Despite the large amount of potentially valuable data generated, the manuscript focuses mainly on results that reinforce previously established observations (e.g., premature SAC silencing in meiosis I by PP1, changes in kinetochore composition, etc.). The discussion would benefit from a deeper analysis of novel findings that underscore the broader significance of this study.

      Significance:

      Koch et al. describe a novel methodology, SynSAC, to synchronize budding yeast cells in metaphase I or metaphase II during meiosis, as well as in mitotic metaphase, thereby enabling differential analyses among these cell division stages. Their approach builds on prior strategies originally developed in fission yeast and human cell models to induce a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC proteins upon addition of abscisic acid (ABA). The results from this manuscript are of special relevance for researchers studying meiosis and using Saccharomyces cerevisiae as a model. Moreover, the differential analysis of the composition and phosphorylation of kinetochores from meiotic metaphase I and metaphase II adds interest for the broader meiosis research community. Finally, regarding my expertise, I am a researcher specialized in the regulation of cell division.

    2. Author response:

      General Statements

      We are delighted that all reviewers found our manuscript to be a technical advance by providing a much sought-after method to arrest budding yeast cells in metaphase of mitosis or in both meiotic metaphases. The reviewers also valued our use of this system to make new discoveries in two areas. First, we provided evidence that the spindle checkpoint is intrinsically weaker in meiosis I and showed that this is due to PP1 phosphatase. Second, we determined how the composition and phosphorylation of the kinetochore change during meiosis, providing key insights into kinetochore function and a rich dataset for future studies.

      The reviewers also made some extremely helpful suggestions to improve our manuscript, which we will now implement:

      (1) Improvements to the discussion throughout the manuscript. The reviewers recommended that we focus our discussion on the novel findings of the manuscript and drew out some key points of interest that deserve more attention. We fully agree with this and we will address this in a revised version.

      (2) We will add a new supplemental figure to help interpret the mass spectrometry data, to address Reviewer #3, point 4.

      (3) We are currently performing an additional control experiment to address the minor point 1 from reviewer #3. Our experiment to confirm that SynSAC relies on endogenous checkpoint proteins was missing the cell cycle profile of cells where SynSAC was not induced for comparison. We will add this control to our full revision.

      (4) In our full revision we will also include representative images of spindle morphology, as requested by Reviewer #1, point 2.

      Description of the planned revisions

      Reviewer #1 (Evidence, reproducibility and clarity):

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve a MI or MII delay. ABA promotes dimerization of fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing and consistent with published data, indicating the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall, the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division. Overall, I have only a few minor suggestions.

      We appreciate the reviewer's support of our study.

      (1) In wild-type cells, Pds1 levels are high during MI and AI, but low in MII. Can the authors comment on this? In line 217, what is meant by "slightly attenuated"? Can the authors comment on how anaphase occurs in the presence of high Pds1? There is even a low but significant level in MII.

      The higher levels of Pds1 in meiosis I compared to meiosis II have been observed previously using immunofluorescence and live imaging[1–3]. Although the reasons are not completely clear, we speculate that there is insufficient time between the two divisions to re-accumulate Pds1 prior to separase re-activation.

      We agree "slightly attenuated" was confusing and we have re-worded this sentence to read "Addition of ABA at the time of prophase release resulted in Pds1 (securin) stabilisation throughout the time course, consistent with delays in both metaphase I and II".

      We do not believe that either anaphase I or II occurs in the presence of high Pds1. Western blotting represents the amount of Pds1 in the population of cells at a given time point. The time between meiosis I and II is very short even when treated with ABA. For example, in Figure 2B, spindle morphology counts show that the anaphase I peak is around 40% at its maximum (105 min), and around 40% of cells are in either metaphase I or metaphase II and will be Pds1 positive. In contrast, due to the better efficiency of meiosis II, anaphase II hardly occurs at all in these conditions, since anaphase II spindles (and the second nuclear division) are observed at very low frequency (maximum 10%) from 165 minutes onwards. Instead, metaphase II spindles partially or fully break down without undergoing anaphase extension. Taking Pds1 levels from the western blot and the spindle data together leads to the conclusion that at the end of the time course these cells are biochemically in metaphase II, but unable to maintain a robust spindle. Spindle collapse is also observed in other situations where meiotic exit fails, and potentially reflects an uncoupling of the cell cycle from the programme governing gamete differentiation[3–5]. We will explain this point in a revised version while referring to representative images that provide evidence for this, as also requested by the reviewer below.

      (2) The figures with data characterizing the system are mostly graphs showing time course of MI and MII. There is no cytology, which is a little surprising since the stage is determined by spindle morphology. It would help to see sample sizes (ie. In the Figure legends) and also representative images. It would also be nice to see images comparing the same stage in the SynSAC cells versus normal cells. Are there any differences in the morphology of the spindles or chromosomes when in the SynSAC system?

      This is an excellent suggestion and will also help clarify the point above. We will provide images of cells at the different stages. For each timepoint, 100 cells were scored; we have already included this information in the figure legends.

      (3) A possible criticism of this system could be that the SAC signal promoting arrest is not coming from the kinetochore. Are there any possible consequences of this? In vertebrate cells, the RZZ complex streams off the kinetochore. Yeast don't have RZZ but this is an example of something that is SAC dependent and happens at the kinetochore. Can the authors discuss possible limitations such as this? Does the inhibition of the APC effect the native kinetochores? This could be good or bad. A bad possibility is that the cell is behaving as if it is in MII, but the kinetochores have made their microtubule attachments and behave as if in anaphase.

      In our view, the fact that the SynSAC signal does not come from kinetochores is a major advantage, as this allows the study of the kinetochore in an unperturbed state. It is also important to note that the canonical checkpoint components are all still present in the SynSAC strains, and perturbations in kinetochore-microtubule interactions would be expected to mount a kinetochore-driven checkpoint response as normal. Indeed, it would be interesting in future work to understand how disrupting kinetochore-microtubule attachments alters kinetochore composition (presumably checkpoint proteins will be recruited) and phosphorylation, but this is beyond the scope of this work. In terms of the state at which we are arresting cells, this is a true metaphase because cohesion has not been lost but kinetochore-microtubule attachments have been established. This is evident from the enrichment of microtubule regulators, but not checkpoint proteins, in the kinetochore purifications from metaphase I and II. While this state is expected to occur only transiently in yeast, since the establishment of proper kinetochore-microtubule attachments triggers anaphase onset, the ability to capture this properly bioriented state will be extremely informative for future studies. We appreciate the reviewer's insight in highlighting these interesting discussion points, which we will include in a revised version.

      Reviewer #1 (Significance):

      These authors have developed a method to induce MI or MII arrest. While this was previously possible in MI, the advantage of the method presented here is that it works for MII and is chemically inducible, because it is based on a system that is sensitive to the addition of ABA. Depending on when the ABA is added, they achieve a MI or MII delay. ABA promotes dimerization of fragments of Mps1 and Spc105 that can't bind their chromosomal sites. The evidence that the MI arrest is weaker than the MII arrest is convincing and consistent with published data, indicating the SAC in MI is less robust than in MII or mitosis. The authors use this system to find evidence that the weak MI arrest is associated with PP1 binding to Spc105. This is a nice use of the system.

      The remainder of the paper uses the SynSAC system to isolate populations enriched for MI or MII stages and conduct proteomics. This shows a powerful use of the system but more work is needed to validate these results, particularly in normal cells.

      Overall, the most significant aspect of this paper is the technical achievement, which is validated by the other experiments. They have developed a system and generated some proteomics data that may be useful to others when analyzing kinetochore composition at each division.

      We appreciate the reviewer’s enthusiasm for our work.

      Reviewer #2 (Evidence, reproducibility and clarity):

      The manuscript submitted by Koch et al. describes a novel approach to collect budding yeast cells in metaphase I or metaphase II by synthetically activating the spindle assembly checkpoint (SAC). The arrest is transient and reversible. This synchronization strategy will be extremely useful for studying meiosis I and meiosis II, and for comparing the two divisions. The authors characterized this so-named SynSAC approach and could confirm previous observations that the SAC arrest is less efficient in meiosis I than in meiosis II. They found that downregulation of the SAC response through PP1 phosphatase is stronger in meiosis I than in meiosis II. The authors then went on to purify kinetochore-associated proteins from metaphase I and II extracts for proteome and phosphoproteome analysis. Their data will be of significant interest to the cell cycle community (they also compared their datasets to kinetochores purified from cells arrested in prophase I and, with SynSAC, in mitosis).

      I have only a couple of minor comments:

      (1) I would add Suppl. Figure 1A to main Figure 1A. What is really exciting here is the arrest in metaphase II, so I don't understand why the authors characterize metaphase I in the main figure, but not metaphase II. But this is only a suggestion.

      This is a good suggestion, we will do this in our full revision.

      (2) Line 197, the authors state: "...SynSAC induced a more pronounced delay in metaphase II than in metaphase I". However, in lines 229 and 240 the authors talk about a "longer delay in metaphase I compared to metaphase II"... this seems to be a mix-up.

      Thank you for pointing this out, this is indeed a typo and we have corrected it.

      (3) The authors describe striking differences for both protein abundance and phosphorylation for key kinetochore associated proteins. I found one very interesting protein that seems to be very abundant and phosphorylated in metaphase I but not metaphase II, namely Sgo1. Do the authors think that Sgo1 is not required in metaphase II anymore? (Top hit in suppl Fig 8D).

      This is indeed an interesting observation, which we plan to investigate as part of another study in the future. Indeed, data from mouse oocytes indicate that shugoshin-dependent cohesin protection is already absent in meiosis II[6], though whether this is also true in yeast is not known. Furthermore, this does not rule out other functions of Sgo1 in meiosis II (for example, promoting biorientation). We will include this point in the discussion.

      Reviewer #2 (Significance):

      The technique described here will be of great interest to the cell cycle community. Furthermore, the authors provide data sets on purified kinetochores of different meiotic stages and compare them to mitosis. This paper will thus be highly cited, for the technique, and also for the application of the technique.

      Reviewer #3 (Evidence, reproducibility and clarity):

      In their manuscript, Koch et al. describe a novel strategy to synchronize cells of the budding yeast Saccharomyces cerevisiae in metaphase I and metaphase II, thereby facilitating comparative analyses between these meiotic stages. This approach, termed SynSAC, adapts a method previously developed in fission yeast and human cells that enables the ectopic induction of a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC components upon addition of the plant hormone abscisic acid (ABA). This is a valuable tool, which has the advantage that it induces SAC-dependent inhibition of the anaphase-promoting complex without perturbing kinetochores. Furthermore, since the same strategy and yeast strain can also be used to induce a metaphase arrest during mitosis, the methodology developed by Koch et al. enables comparative analyses between mitotic and meiotic cell divisions. To validate their strategy, the authors purified kinetochores from meiotic metaphase I and metaphase II, as well as from mitotic metaphase, and compared their protein composition and phosphorylation profiles. The results are presented clearly and in an organized manner.

      We are grateful to the reviewer for their support.

      Despite the relevance of both the methodology and the comparative analyses, several main issues should be addressed:

      (1) In contrast to the strong metaphase arrest induced by ABA addition in mitosis (Supp. Fig. 2), the SynSAC strategy only promotes a delay in metaphase I and metaphase II as cells progress through meiosis. This delay extends the duration of both meiotic stages, but does not markedly increase the percentage of metaphase I or II cells in the population at a given timepoint of the meiotic time course (Fig. 1C). Therefore, although SynSAC broadens the time window for sample collection, it does not substantially improve differential analyses between stages compared with a standard NDT80 prophase block synchronization experiment. Could a higher ABA concentration or repeated hormone addition improve the tightness of the meiotic metaphase arrest?

      For many purposes the enrichment and extended time for sample collection is sufficient, as we demonstrate here. However, as pointed out by the reviewer below, the system can be improved by use of the 4A-RASA mutations to provide a stronger arrest (see our response below). We did not experiment with higher ABA concentrations or repeated addition, since the very robust arrest achieved with the 4A-RASA mutant made this unnecessary.

      (2) Unlike the standard SynSAC strategy, introducing mutations that prevent PP1 binding to the SynSAC construct considerably extended the duration of the meiotic metaphase arrests. In particular, mutating the PP1 binding sites in both the RVxF (RASA) and the SILK (4A) motifs of the Spc105(1-455)-PYL construct caused a strong metaphase I arrest that persisted until the end of the meiotic time course (Fig. 3A). This stronger and more prolonged 4A-RASA SynSAC arrest would directly address the issue raised above. It is unclear why the authors did not emphasize this improved system more. Indeed, the 4A-RASA SynSAC approach could be presented as the optimal strategy to induce a conditional metaphase arrest in budding yeast meiosis, since it not only adapts but also improves the original methods designed for fission yeast and human cells. Along the same lines, it is surprising that the authors did not exploit the stronger arrest achieved with the 4A-RASA mutant to compare kinetochore composition at meiotic metaphase I and II.

      We agree that the 4A-RASA mutant is the best tool to use for the arrest and going forward this will be our approach. We collected the proteomics data and the data on the SynSAC mutant variants concurrently, so we did not know about the improved arrest at the time the proteomics experiment was done. Because very good arrest was already achieved with the unmutated SynSAC construct, we could not justify repeating the proteomics experiment which is a large amount of work using significant resources. However, we will highlight the potential of the 4A-RASA mutant more prominently in our full revision.

      (3) The results shown in Supp. Fig. 4C are intriguing and merit further discussion. Mitotic growth in ABA suggests that the RASA mutation silences the SynSAC effect, yet this was not observed for the 4A or the double 4A-RASA mutants. Notably, in contrast to mitosis, the SynSAC 4A-RASA mutation leads to a more pronounced metaphase I meiotic delay (Fig. 3A). It is also noteworthy that the RVAF mutation partially restores mitotic growth in ABA. This observation supports, as previously demonstrated in human cells, that Aurora B-mediated phosphorylation of S77 within the RVSF motif is important to prevent PP1 binding to Spc105 in budding yeast as well.

      We agree these are intriguing findings that highlight key differences in the wiring of the spindle checkpoint between meiosis and mitosis and point to avenues for future studies; however, currently we can only speculate as to the underlying cause. The effect of the RASA mutation in mitosis is unexpected and unexplained. However, the fact that the 4A-RASA mutation causes a stronger delay in meiosis I compared to mitosis can be explained by a greater prominence of PP1 phosphatase in meiosis. Indeed, our data (Figure 4A) show that the PP1 phosphatase Glc7 and its regulatory subunit Fin1 are highly enriched on kinetochores at all meiotic stages compared to mitosis.

      We agree that the improved growth of the RVAF mutant is intriguing and points to a role for Aurora B-mediated phosphorylation, though previous work has not supported such a role[7].

      We will include a discussion of these important points in a revised version.

      (4) To demonstrate the applicability of the SynSAC approach, the authors immunoprecipitated the kinetochore protein Dsn1 from cells arrested at different meiotic or mitotic stages, and compared kinetochore composition using data-independent acquisition (DIA) mass spectrometry. Quantification and comparative analyses of total and kinetochore protein levels were conducted in parallel for cells expressing either FLAG-tagged or untagged Dsn1 (Supp. Fig. 7A-B). To better detect potential changes, protein abundances were next scaled to Dsn1 levels in each sample (Supp. Fig. 7C-D). However, it is not clear why the authors did not normalize protein abundance in the immunoprecipitations from tagged samples at each stage to the corresponding untagged control, instead of performing a separate analysis. This would be particularly relevant given the high sensitivity of DIA mass spectrometry, which enabled quantification of thousands of proteins. Furthermore, the authors compared protein abundances in tagged samples from mitotic metaphase and meiotic prophase, metaphase I and metaphase II (Supp. Fig. 7E-F). If protein amounts in each case were not normalized to the untagged controls, as inferred from the text (lines 333 to 338), the observed differences could simply reflect global changes in protein expression at different stages rather than specific differences in protein association with kinetochores.

      While we agree with the reviewer that, at first glance, normalising to the no-tag control appears to be the most appropriate normalisation, in practice there is very low background signal in the no-tag sample, which means that any random fluctuations have a large impact on the final fold change used for normalisation. This approach therefore introduces artefacts into the data rather than improving normalisation.

      To provide reassurance that our kinetochore immunoprecipitations are specific, and that the background (no-tag) signal is indeed very low, we will provide a new supplemental figure showing volcano plots comparing kinetochore purifications at each stage with their corresponding no-tag control.

      It is also important to note that our experiment looks at relative changes of the same protein over time, which we expect to be relatively small in the whole-cell lysate. We previously documented proteins that change in abundance in whole-cell lysates throughout meiosis[8]. In the present study, we found that relatively few proteins significantly change in abundance.

      Our aim in the current study was to understand how the relative composition of the kinetochore changes, and for this we believe that a direct comparison to Dsn1, the central kinetochore protein that we immunoprecipitated, is the most appropriate normalisation.

      (5) Despite the large amount of potentially valuable data generated, the manuscript focuses mainly on results that reinforce previously established observations (e.g., premature SAC silencing in meiosis I by PP1, changes in kinetochore composition, etc.). The discussion would benefit from a deeper analysis of novel findings that underscore the broader significance of this study.

      We strongly agree with this point and we will re-frame the discussion to focus on the novel findings, as also raised by the other reviewers.

      Finally, minor concerns are:

      (1) Meiotic progression in SynSAC strains lacking Mad1, Mad2 or Mad3 is severely affected (Fig. 1D and Supp. Fig. 1), making it difficult to assess whether, as the authors state, the metaphase delays depend on the canonical SAC cascade. In addition, as a general note, graphs displaying meiotic time courses could be improved for clarity (e.g., thinner data lines, addition of axis gridlines and external tick marks, etc.).

      We will generate the data to include a checkpoint mutant +/- ABA for direct comparison. We will take steps to improve the clarity of presentation of the meiotic time course graphs, though in our experience uncluttered graphs make it easier to compare trends.

      (2) Spore viability following SynSAC induction in meiosis was used as an indicator that this experimental approach does not disrupt kinetochore function and chromosome segregation. However, this is an indirect measure. Direct monitoring of genome distribution using GFP-tagged chromosomes would have provided more robust evidence. Notably, the SynSAC mad3Δ mutant shows a slight viability defect, which might reflect chromosome segregation defects that are more pronounced in the absence of a functional SAC.

      Spore viability is a much more sensitive way of analysing segregation defects than GFP-labelled chromosomes. This is because GFP labelling allows only a single chromosome to be followed. On the other hand, if any of the 16 chromosomes mis-segregates in a given meiosis, this would result in one or more aneuploid spores in the tetrad, which are typically inviable. The fact that spore viability is not significantly different from wild type in this analysis indicates that there are no major chromosome segregation defects in these strains, and we therefore do not plan to do this experiment.

      (3) It is surprising that, although SAC activity is proposed to be weaker in metaphase I, the levels of CPC/SAC proteins seem to be higher at this stage of meiosis than in metaphase II or mitotic metaphase (Fig. 4A-B).

      We agree, this is surprising and we will point it out in the revised discussion. We speculate that the challenge of biorienting homologs, which are held together by chiasmata rather than by back-to-back kinetochores, results in a greater requirement for error correction in meiosis I. Interestingly, the data with the RASA mutant also point to increased PP1 activity in meiosis I, and we additionally observed increased levels of PP1 (Glc7 and Fin1) on meiotic kinetochores, consistent with the idea that cycles of error correction and silencing are elevated in meiosis I.

      (4) Although a more detailed exploration of kinetochore composition or phosphorylation changes is beyond the scope of the manuscript, some key observations could have been validated experimentally (e.g., enrichment of proteins at kinetochores, phosphorylation events that were identified as specific or enriched at a certain meiotic stage, etc.).

      We agree that this is beyond the scope of the current study but will form the start of future projects from our group, and hopefully others.

      (5) Several typographical errors should be corrected (e.g., "Knetochores" in Fig. 4 legend, "250uM ABA" in Supp. Fig. 1 legend, etc.)

      Thank you for pointing these out, they have been corrected.

      Reviewer #3 (Significance):

      Koch et al. describe a novel methodology, SynSAC, to synchronize budding yeast cells in metaphase I or metaphase II during meiosis, as well as in mitotic metaphase, thereby enabling differential analyses among these cell division stages. Their approach builds on prior strategies, originally developed in fission yeast and human cell models, to induce a synthetic spindle assembly checkpoint (SAC) arrest by conditionally forcing the heterodimerization of two SAC proteins upon addition of abscisic acid (ABA). The results from this manuscript are of special relevance for researchers studying meiosis and using Saccharomyces cerevisiae as a model. Moreover, the differential analysis of the composition and phosphorylation of kinetochores from meiotic metaphase I and metaphase II adds interest for the broader meiosis research community. Finally, regarding my expertise, I am a researcher specialized in the regulation of cell division.

      Description of the revisions that have already been incorporated in the transferred manuscript

      We have only corrected minor typos as detailed above.

      Description of analyses that authors prefer not to carry out

      The revisions we plan are detailed above. There are just two revisions we believe are either unnecessary or beyond the scope, both minor concerns of Reviewer #3. For clarity we have reproduced them, along with our justification below. In the latter case, the reviewer also acknowledged that further work in this direction is beyond the scope of the current study.

      (2) Spore viability following SynSAC induction in meiosis was used as an indicator that this experimental approach does not disrupt kinetochore function and chromosome segregation. However, this is an indirect measure. Direct monitoring of genome distribution using GFP-tagged chromosomes would have provided more robust evidence. Notably, the SynSAC mad3Δ mutant shows a slight viability defect, which might reflect chromosome segregation defects that are more pronounced in the absence of a functional SAC.

      Spore viability is a much more sensitive way of analysing segregation defects than GFP-labelled chromosomes. This is because GFP labelling allows only a single chromosome to be followed, whereas if any of the 16 chromosomes mis-segregates in a given meiosis, the result is one or more aneuploid spores in the tetrad, which are typically inviable. The fact that spore viability is not significantly different from wild type in this analysis indicates that there are no major chromosome segregation defects in these strains, and we therefore do not plan to do this experiment.

      (4) Although a more detailed exploration of kinetochore composition or phosphorylation changes is beyond the scope of the manuscript, some key observations could have been validated experimentally (e.g., enrichment of proteins at kinetochores, phosphorylation events that were identified as specific or enriched at a certain meiotic stage, etc.).

      We agree that this is beyond the scope of the current study but will form the start of future projects from our group, and hopefully others.

      (1) Salah, S.M., and Nasmyth, K. (2000). Destruction of the securin Pds1p occurs at the onset of anaphase during both meiotic divisions in yeast. Chromosoma 109, 27–34.

      (2) Matos, J., Lipp, J.J., Bogdanova, A., Guillot, S., Okaz, E., Junqueira, M., Shevchenko, A., and Zachariae, W. (2008). Dbf4-dependent CDC7 kinase links DNA replication to the segregation of homologous chromosomes in meiosis I. Cell 135, 662–678.

      (3) Marston, A.L., Lee, B.H., and Amon, A. (2003). The Cdc14 phosphatase and the FEAR network control meiotic spindle disassembly and chromosome segregation. Developmental Cell 4, 711–726. https://doi.org/10.1016/S1534-5807(03)00130-8.

      (4) Attner, M.A., and Amon, A. (2012). Control of the mitotic exit network during meiosis. Molecular Biology of the Cell 23, 3122–3132. https://doi.org/10.1091/mbc.E12-03-0235.

      (5) Pablo-Hernando, M.E., Arnaiz-Pita, Y., Nakanishi, H., Dawson, D., del Rey, F., Neiman, A.M., and de Aldana, C.R.V. (2007). Cdc15 Is Required for Spore Morphogenesis Independently of Cdc14 in Saccharomyces cerevisiae. Genetics 177, 281–293. https://doi.org/10.1534/genetics.107.076133.

      (6) El Jailani, S., Cladière, D., Nikalayevich, E., Touati, S.A., Chesnokova, V., Melmed, S., Buffin, E., and Wassmann, K. (2025). Eliminating separase inhibition reveals absence of robust cohesin protection in oocyte metaphase II. EMBO J 44, 5187–5214. https://doi.org/10.1038/s44318-025-00522-0.

      (7) Rosenberg, J.S., Cross, F.R., and Funabiki, H. (2011). KNL1/Spc105 Recruits PP1 to Silence the Spindle Assembly Checkpoint. Current Biology 21, 942–947. https://doi.org/10.1016/j.cub.2011.04.011.

      (8) Koch, L.B., Spanos, C., Kelly, V., Ly, T., and Marston, A.L. (2024). Rewiring of the phosphoproteome executes two meiotic divisions in budding yeast. EMBO J 43, 1351–1383. https://doi.org/10.1038/s44318-024-00059-8.

    1. Learn more about Information and Communication Technology (ICT) accessibility standards, including EN 301 549 (which incorporates WCAG 2.1 levels A and AA), when purchasing goods or services or when designing projects, roles, and teams related to the Government of Canada.

      With this text, there is no text-to-speech option, which makes the perceivable aspect of this website poor for people with visual impairments. Additionally, the website does not allow text sizes to be adjusted.

    1. The Impact of Grade Repetition: An Analysis of Its Effectiveness, Its Costs, and Pedagogical Alternatives

      Summary

      Grade repetition, although traditionally rooted in French schooling as a way to consolidate learning, is being profoundly called into question by scientific research and public policy.

      Current data show that the pedagogical benefit of repetition is short-lived and comes with systematic harmful effects, such as school dropout, psychological stigmatization, and lasting economic harm to both the student and the State.

      In light of this, the use of repetition has fallen sharply over the past two decades.

      To address learning difficulties without delaying students' progress, new approaches are emerging, notably educational technology (EdTech), which allows instruction to be personalized to a far greater degree.

      Evolution and Practice of Grade Repetition

      Repetition has historically been used by teachers, particularly in CP (first grade) or CE1 (second grade), to reinforce fundamental notions judged fragile.

      It can also result from a parental request, notably at the end of troisième (ninth grade), in the hope of strengthening the student's record for admission to the general track in seconde (tenth grade).

      The statistics nonetheless show a marked decline in the practice:

      Repetition rate in troisième: down from 6.6% in 2000 to 2.2% in 2022.

      Drivers of the decline: this drop is the product of political will combined with the findings of scientific research.

      Analysis of Effectiveness and Consequences for the Student

      Scientific research questions the relevance of repetition as a tool for academic success.

      Pedagogical Impact

      Short-lived benefits: while positive effects can be observed one or two years after repeating, they generally fade within three to five years.

      Identical final level: in the long run, the academic level of students who repeated is essentially the same as that of students with a comparable starting level who did not.

      The only real difference is the loss of a year of schooling.

      Psychological and Social Impact

      Dropout: repetition is systematically associated with a higher risk of dropping out.

      Stigmatization: discouragement and the stigma attached to repeating lead some students to end their schooling prematurely.

      Harm on the Labor Market

      The impact extends well beyond schooling:

      Delayed entry: a student who repeats mechanically enters the labor market one year later than their peers.

      Wage gap: because pay rises with age and experience in the early career years, a former repeater will, at the same age, earn less than a non-repeater owing to the experience deficit.

      Economic Consequences for the State

      Maintaining repetition represents a substantial cost to public finances. According to the Institut des politiques publiques, the cost is estimated at roughly 2 billion euros per year.

      | Cost Factor | Description |
      | --- | --- |
      | Schooling costs | The State funds an additional year of education for each student who repeats. |
      | Labor force | The loss of a year of the student's active contribution to the national economy. |
      | Delayed savings | The savings from reducing repetition only materialize after about ten years, when the student would normally have finished terminale (twelfth grade). |

      Pedagogical Alternatives and Solutions

      Ending repetition means managing heterogeneous levels within the higher grades. Several levers have been identified for supporting struggling students.

      Educational Technology (EdTech)

      Digital learning platforms allow a student's pathway to be personalized autonomously.

      How it works: tools such as the app "Lalilo" use artificial intelligence to offer exercises adapted to each child's level.

      Benefits for the teacher: a dashboard tracks successes and failures in real time, enabling targeted interventions the next day.

      Differentiated pedagogy: all students aim for the same final objective, but by different routes and at different paces. If an exercise is completed with more than 80% success, the difficulty increases; otherwise, the tool offers alternative activities.
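The adaptive rule described above can be sketched as follows (a minimal illustration of the more-than-80% threshold logic; this is not Lalilo's actual algorithm, whose internals are not described in the document):

```python
def next_step(level, correct, total, threshold=0.8):
    """If an exercise is passed with a success rate above the threshold,
    raise the difficulty; otherwise keep the level and propose an
    alternative activity. (Hypothetical sketch of the summarized rule.)"""
    if total <= 0:
        raise ValueError("total must be positive")
    if correct / total > threshold:
        return level + 1, "increase difficulty"
    return level, "propose an alternative activity"

print(next_step(3, 9, 10))  # 90% success: move up a level
print(next_step(3, 7, 10))  # 70% success: alternative activity
```

Note that exactly 80% does not exceed the threshold, matching the "more than 80%" wording in the summary.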

      Other Support Levers

      The document stresses that digital tools are not a standalone solution and must sit within a range of methods:

      Tutoring between students or with adults.

      Small-group teaching for closer supervision.

      Keeping the same teacher over several consecutive years to ensure pedagogical continuity.

      In conclusion, while repetition proves to be a "long-term punishment" for both the individual and society, diversifying teaching methods offers more promising alternatives for narrowing achievement gaps without holding students back.

    1. Synthesis of the International Surveys: Challenges and Prospects for the French Education System

      Executive Summary

      Analysis of the international surveys (PISA, TIMSS, PIRLS) reveals a mixed picture for education in France.

      While the country remains close to the OECD average in some areas, major warning signs are emerging, notably a downward trend in mathematics over the past 30 years and an exceptionally strong correlation between social background and academic achievement.

      The critical points identified include:

      A marked decline in mathematics: barely 20% of sixième (sixth-grade) students master the concept of fractions on a number line.

      Persistent social inequalities: France is among the countries where socio-economic background best predicts results.

      A deficit in psychosocial skills: French students report high anxiety, low perseverance, and a weak sense of belonging at school.

      A degraded school climate: classroom disruption is well above the international average.

      There are nonetheless grounds for optimism, notably the resilience of primary-level reading scores despite the COVID-19 pandemic, and the success of targeted experiments (needs-based groups, structural reforms in Morocco and Estonia).

      Scientific research recommends moving from mere diagnosis to action, through rigorous experimentation and stronger teacher training.

      --------------------------------------------------------------------------------

      I. Overview of the International Assessments

      The Conseil Scientifique de l'Éducation Nationale (CSEN) stresses that these surveys should be used not as media league tables but as diagnostic tools and levers for pedagogical transformation.

      1. The three pillars of assessment

      | Survey | Organization | Target population | Domains assessed |
      | --- | --- | --- | --- |
      | PISA | OECD | 15-year-old students | Mathematical and scientific literacy and reading comprehension (literacy). |
      | TIMSS | IEA | CM1 (fourth grade) and quatrième (eighth grade) | Mathematics and science. |
      | PIRLS | IEA | CM1 (fourth grade) | Reading comprehension (reading processes). |

      2. Distinguishing PISA from TIMSS/PIRLS

      PISA takes a viewpoint "external" to school curricula, assessing young people's ability to apply their knowledge in real-life situations at the end of compulsory schooling.

      TIMSS and PIRLS are tied more closely to teaching programs (the curriculum) and are based on specific grade levels (Grade 4 and Grade 8).

      --------------------------------------------------------------------------------

      II. Analysis of the French System: Findings and Diagnoses

      1. Academic Performance: An uneven decline

      Mathematics: this is the French system's weak spot.

      Results in CM1 and quatrième show a clear drop relative to the European Union average.

      The gap widens especially in quatrième, where only 3% of students perform at the highest level, against 11% at the European level and 50% in Singapore.

      Reading: the picture is more encouraging at primary level.

      France is one of the few countries to have improved or stabilized its reading scores (PIRLS 2021) despite the health crisis.

      This resilience is attributed to comparatively limited school closures and, potentially, to the policy of halving class sizes in priority-education areas.

      Digital and Civic Skills: in the ICILS (digital skills) and ICCS (citizenship) surveys, France performs creditably, at or slightly above average, notably in computational thinking and in support for values of equality.

      2. The Weight of Social and Gender Inequalities

      France stands out for the "overdetermination" of performance by social origin.

      The variance explained by socio-economic background is 17-19% in France, against 13-14% in other OECD countries.

      Moreover, a "gender effect" emerges as early as CP (first grade): boys quickly take the lead over girls in mathematics, a gap that widens through CM1 (a 23-point gap in 2023).

      3. School Climate and Psychological Factors

      The surveys highlight behavioral fragilities specific to French students:

      Mathematics anxiety: although declining, it remains notable.

      Classroom climate: 29% of students report being unable to work properly in mathematics because of noise and disorder (OECD average: 23%).

      Growth mindset: fewer than one French student in two believes that their intelligence can be developed through effort.

      Cooperation: France has one of the lowest indices of cooperation between students in the OECD.

      --------------------------------------------------------------------------------

      III. International Lessons: Models of Success

      Analyzing countries with varied trajectories helps identify key success factors.

      1. Estonia: The Nordic efficiency model

      Estonian success rests on:

      School autonomy: schools manage their own program while respecting a national core curriculum.

      Highly qualified teachers: a master's degree is required for a permanent contract.

      Early education: a school program from preschool (ages 4-6) that includes reading and play.

      Data transparency: regular external evaluation whose results guide local improvement.

      2. Morocco: The "Pioneer Schools" reform

      Faced with historically weak results, Morocco launched a large-scale program including:

      The TARL (Teaching at the Right Level) approach: intensive remediation based on the student's actual level rather than their age.

      Explicit instruction: structured, scripted lessons to support teachers.

      Close supervision: inspectors move from a monitoring role to weekly coaching.

      Results: an impact gain of 0.9 standard deviations in a single year in the pilot schools.

      3. Portugal: The lesson of continuity

      The Portuguese experience shows that a "high expectations" policy (demanding national exams, content-based programs) drove a spectacular recovery between 2000 and 2015.

      Conversely, the relaxation of these requirements and the shift to "curricular flexibility" after 2016 coincided with a decline in results.

      --------------------------------------------------------------------------------

      IV. Levers of Transformation for France

      The CSEN and the assembled experts suggest several avenues for reversing the decline.

      1. Improving mastery of the fundamentals

      Teaching fractions: targeted interventions of 4 to 5 weeks, using digital number-line software with immediate feedback, have produced spectacular progress among CM2 (fifth-grade) and sixième students.

      Teaching comprehension: unlike English-speaking countries, France does little explicit teaching of comprehension strategies (inference, analysis of text structure).

      Integrating these practices from primary school onward is recommended.

      2. Strengthening training and attractiveness

      Investment: the share of GDP that France devotes to education has fallen by nearly one point since the 1990s (a shortfall of 25 billion euros).

      Continuing education: teachers need training in the findings of cognitive science so they can identify "cognitive obstacles" (errors of logic, over-reliance on personal knowledge at the expense of the text).

      3. Acting on climate and social skills

      Developing a growth mindset: encourage students to see mistakes as a step in learning.

      Fostering cooperation: reduce competition to improve well-being and motivation, particularly among the most fragile students.

      4. Using assessment as a diagnostic

      Assessment should not be experienced as a punishment.

      It should make it possible to form temporary, targeted "needs-based groups" that address specific gaps (such as automaticity in arithmetic) before they become insurmountable.

      --------------------------------------------------------------------------------

      Conclusion

      The international surveys confirm that decline is not inevitable.

      Countries in very different contexts (Estonia, Morocco, Portugal) have succeeded in transforming their systems by relying on coherent programs, training for practitioners, and a culture of diagnostic assessment.

      For France, the challenge lies in translating this scientific evidence into everyday classroom practice and stable public policy.

    1. Central Venous Catheter Insertion

      Central venous catheter insertion = placement of a catheter into a large central vein

      A catheter is a thin, flexible tube inserted into a vessel, cavity, or duct to deliver fluids into, or drain fluids from, the body.

    2. ETCO₂ >50 mmHg or a single increase in ETCO₂ >10 mmHg indicates hypoventilation.

      An ETCO₂ above 50 mmHg, or a single increase in ETCO₂ of more than 10 mmHg, indicates hypoventilation (inadequate breathing).
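The two quoted thresholds can be expressed as a simple check (a sketch of the stated criteria only, not a clinical decision tool; the function name and parameters are illustrative):

```python
def indicates_hypoventilation(etco2_mmHg, single_rise_mmHg=0.0):
    """Capnography rule as quoted: ETCO2 > 50 mmHg, or a single rise
    in ETCO2 of > 10 mmHg, indicates hypoventilation."""
    return etco2_mmHg > 50 or single_rise_mmHg > 10

print(indicates_hypoventilation(55))                       # ETCO2 above 50
print(indicates_hypoventilation(45, single_rise_mmHg=12))  # rise above 10
print(indicates_hypoventilation(45, single_rise_mmHg=5))   # neither criterion met
```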

    3. Children under 5 months old; • Should not have milk or solid food for 4 hours before sedation. • Children between 5 months and 36 months; • Should not have milk or solid food for 6 hours before sedation. • Children older than 36 months; • Should not have milk or solid food for 8 hours before sedation

      ① Children under 5 months old: should not have milk or solid food for 4 hours before sedation.

      ② Children between 5 months and 36 months: should not have milk or solid food for 6 hours before sedation.

      ③ Children older than 36 months: should not have milk or solid food for 8 hours before sedation.
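The fasting intervals above can be captured in a small lookup (a sketch of the quoted guideline; the handling of exactly 5 and 36 months follows the ranges as written, with the 5–36 month band mapping to 6 hours):

```python
def fasting_hours(age_months):
    """Hours without milk or solid food required before sedation,
    per the quoted age bands."""
    if age_months < 5:
        return 4
    if age_months <= 36:
        return 6
    return 8

print(fasting_hours(3))   # 4 hours
print(fasting_hours(12))  # 6 hours
print(fasting_hours(48))  # 8 hours
```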

    4. Moderate sedation is characterized by a depression in the level of consciousness, where the patient exhibits a slow but meaningful motor response to simple verbal or tactile stimuli.

      Moderate sedation is characterized by a reduced level of consciousness in which the patient gives a slow but meaningful motor response to simple verbal or tactile stimuli.

    5. It is a state of sedation that allows the maintenance of cardiorespiratory functions while preventing the discomfort of the procedure through the administration of sedatives or dissociative agents, with or without analgesics

      While the procedure is performed, the patient is given sedative or dissociative (consciousness-detaching) drugs. These may be used together with an analgesic or on their own.

    6. It is a state of sedation that allows the maintenance of cardiorespiratory functions while preventing the discomfort of the procedure through the administration of sedatives or dissociative agents, with or without analgesics

      It is a state of sedation in which administering sedative or dissociative agents, with or without analgesics, prevents the patient from feeling discomfort during the procedure while cardiorespiratory functions are maintained.


    1. La Salle

      René-Robert Cavelier, Sieur de La Salle (November 22, 1643 – March 19, 1687), was a French explorer and fur trader in North America. He explored the Great Lakes region of the United States and Canada, and the Mississippi River. He is best known for an early 1682 expedition in which he canoed the lower Mississippi River from the mouth of the Illinois River to the Gulf of Mexico.

    1. Reviewer #3 (Public review):

      Summary:

      This manuscript introduces a high-resolution, open-source light-sheet fluorescence microscope optimized for sub-cellular imaging.

      The system is designed for ease of assembly and use, incorporating a custom-machined baseplate and in silico optimized optical paths to ensure robust alignment and performance.

      The important feature of the microscope is the clever and elegant combination of simple Gaussian beams, smart beam shaping, galvo pivoting, and high-NA objectives to ensure a uniform, thin light-sheet of around 400 nm in thickness over a 266-micron-wide field of view, pushing the axial resolution of the system beyond the usual diffraction-limited trade-offs of light-sheet fluorescence microscopy.

      Compelling validation using fluorescent beads, multicolor cellular imaging, and dual-color live-cell imaging highlights the system's performance. Moreover, a very extensive and comprehensive manual of operation is provided as supplementary material. This offers a DIY blueprint for researchers who want to implement such a system, including estimated costs and a detailed description of the expertise needed.

      Strengths:

      - Strong and accessible technical innovation.

      With an elegant combination of beam shaping and optical modelling, the authors provide a high-resolution light-sheet system that overcomes the classical trade-off between a thin light-sheet and a small field of view. In addition, the integration of in silico modelling with a custom-machined baseplate is very practical and eases alignment. Combined with the solid and extensive guide provided in the supplementary information, this amounts to a protocol for replicating the microscope in any other lab.

      - Impeccable optical performance and ease of sample mounting

      The system takes advantage of the same sample-holding method already seen in other implementations, but reduces the optical complexity. At the same time, the authors claim to achieve lateral and axial resolution similar to lattice light-sheet microscopy (although without a direct comparison; see the "Weaknesses" section below). The optical characterization of the system is comprehensive and well detailed. Additionally, the authors validate the system by imaging sub-cellular structures in mammalian cells.

      - Transparency and comprehensiveness of documentation and resources.

      A very detailed protocol documents the setup, the optical modelling, and the total cost.

      Conclusion:

      Altair-LSFM represents a well-engineered and accessible light-sheet system that addresses a longstanding need for high-resolution, reproducible, and affordable sub-cellular light-sheet imaging. At this stage, I believe the manuscript makes a compelling case for Altair-LSFM as a valuable contribution to the open microscopy scientific community.

      Comments on revisions:

      I appreciate the details and the care expressed by the authors in answering all my concerns, both the bigger ones (lack of live cell imaging demonstration) and to the smaller ones (about data storage, costs, expertise needed, and so on). The manuscript has been greatly improved, and I have no other comments to make.

    2. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment

      This useful study presents Altair-LSFM, a solid and well-documented implementation of a light-sheet fluorescence microscope (LSFM) designed for accessibility and cost reduction. While the approach offers strengths such as the use of custom-machined baseplates and detailed assembly instructions, its overall impact is limited by the lack of live-cell imaging capabilities and the absence of a clear, quantitative comparison to existing LSFM platforms. As such, although technically competent, the broader utility and uptake of this system by the community may be limited.

      We thank the editors and reviewers for their thoughtful evaluation of our work and for recognizing the technical strengths of the Altair-LSFM platform, including the custom-machined baseplates and detailed documentation provided to promote accessibility and reproducibility. Below, we provide point-by-point responses to each referee comment. In the process, we have significantly revised the manuscript to include live-cell imaging data and a quantitative evaluation of imaging speed. We now more explicitly describe the different variants of lattice light-sheet microscopy—highlighting differences in their illumination flexibility and image acquisition modes—and clarify how Altair-LSFM compares to each. We further discuss challenges associated with the 5 mm coverslip and propose practical strategies to overcome them. Additionally, we outline cost-reduction opportunities, explain the rationale behind key equipment selections, and provide guidance for implementing environmental control. Altogether, we believe these additions have strengthened the manuscript and clarified both the capabilities and limitations of Altair-LSFM.

      Public Reviews:

      Reviewer #1 (Public review): 

      Summary: 

      The article presents the details of the high-resolution light-sheet microscopy system developed by the group. In addition to presenting the technical details of the system, its resolution has been characterized and its functionality demonstrated by visualizing subcellular structures in a biological sample.

      Strengths: 

      (1) The article includes extensive supplementary material that complements the information in the main article.

      (2) However, in some sections, the information provided is somewhat superficial.

      We thank the reviewer for their thoughtful assessment and for recognizing the strengths of our manuscript, including the extensive supplementary material. Our goal was to make the supplemental content as comprehensive and useful as possible. In addition to the materials provided with the manuscript, our intention is for the online documentation (available at thedeanlab.github.io/altair) to serve as a living resource that evolves in response to user feedback. We would therefore greatly appreciate the reviewer’s guidance on which sections were perceived as superficial so that we can expand them to better support readers and builders of the system.

      Weaknesses:

      (1) Although a comparison is made with other light-sheet microscopy systems, the presented system does not represent a significant advance over existing systems. It uses high numerical aperture objectives and Gaussian beams, achieving resolution close to theoretical after deconvolution. The main advantage of the presented system is its ease of construction, thanks to the design of a perforated base plate.

      We appreciate the reviewer’s assessment and the opportunity to clarify our intent. Our primary goal was not to introduce new optical functionality beyond that of existing high-performance light-sheet systems, but rather to substantially reduce the barrier to entry for non-specialist laboratories. Many open-source implementations, such as OpenSPIM, OpenSPIN, and Benchtop mesoSPIM, similarly focused on accessibility and reproducibility rather than introducing new optical modalities, yet have had a measurable impact on the field by enabling broader community participation. Altair-LSFM follows this tradition, providing sub-cellular resolution performance comparable to advanced systems like LLSM, while emphasizing reproducibility, ease of construction through a precision-machined baseplate, and comprehensive documentation to facilitate dissemination and adoption.

      (2) Using similar objectives (Nikon 25x and Thorlabs 20x), the results obtained are similar to those of the LLSM system (using a Gaussian beam without laser modulation). However, the article does not mention the difficulties of mounting the sample in the implemented configuration.

We appreciate the reviewer’s comment and agree that there are practical challenges associated with handling 5 mm diameter coverslips in this configuration. In the revised manuscript, we now explicitly describe these challenges and provide practical solutions. Specifically, we highlight the use of a custom-machined coverslip holder designed to simplify mounting and handling, and we direct readers to an alternative configuration using the Zeiss W Plan-Apochromat 20×/1.0 objective, which eliminates the need for small coverslips altogether.

      (3) The authors present a low-cost, open-source system. Although they provide open source code for the software (navigate), the use of proprietary electronics (ASI, NI, etc.) makes the system relatively expensive. Its low cost is not justified.

      We appreciate the reviewer’s perspective and understand the concern regarding the use of proprietary control hardware such as the ASI Tiger Controller and NI data acquisition cards. Our decision to use these components was intentional: relying on a unified, professionally supported and maintained platform minimizes complexity associated with sourcing, configuring, and integrating hardware from multiple vendors, thereby reducing non-financial barriers to entry for non-specialist users.

Importantly, these components are not the primary cost driver of Altair-LSFM (they represent roughly 18% of the total system cost). Nonetheless, for laboratories where the price is prohibitive, we also outline several viable cost-reduction options in the revised manuscript (e.g., substituting manual stages, omitting the filter wheel, or using industrial CMOS cameras), while discussing the trade-offs these substitutions introduce in performance and usability. These considerations are now summarized in Supplementary Note 1, which provides a transparent rationale for our design and cost decisions.

      Finally, we note that even with these professional-grade components, Altair-LSFM remains substantially less expensive than commercial systems offering comparable optical performance, such as LLSM implementations from Zeiss or 3i.

      (4) The fibroblast images provided are of exceptional quality. However, these are fixed samples. The system lacks the necessary elements for monitoring cells in vivo, such as temperature or pH control.

      We thank the reviewer for their positive comment regarding the quality of our data. As noted, the current manuscript focuses on validating the optical performance and resolution of the system using fixed specimens to ensure reproducibility and stability.

      We fully agree on the importance of environmental control for live-cell imaging. In the revised manuscript, we now describe in detail how temperature regulation can be achieved using a custom-designed heated sample chamber, accompanied by detailed assembly instructions on our GitHub repository and summarized in Supplementary Note 2. For pH stabilization in systems lacking a 5% CO₂ atmosphere, we recommend supplementing the imaging medium with 10–25 mM HEPES buffer. Additionally, we include new live-cell imaging data demonstrating that Altair-LSFM supports in vitro time-lapse imaging of dynamic cellular processes under controlled temperature conditions.

      Reviewer #2 (Public review): 

      Summary: 

The authors present Altair-LSFM (Light Sheet Fluorescence Microscope), a high-resolution, open-source microscope that is relatively easy to align and construct and achieves sub-cellular resolution. The authors developed this microscope to fill a perceived need that current open-source systems are primarily designed for large specimens and lack sub-cellular resolution or are difficult to construct and align, and are not stable. While commercial alternatives exist that offer sub-cellular resolution, they are expensive. The authors' manuscript centers around comparisons to the highly successful lattice light-sheet microscope, including the choice of detection and excitation objectives. The authors thus claim that there remains a critical need for high-resolution, economical, and easy-to-implement LSFM systems.

      We thank the reviewer for their thoughtful summary. We agree that existing open-source systems primarily emphasize imaging of large specimens, whereas commercial systems that achieve sub-cellular resolution remain costly and complex. Our aim with Altair-LSFM was to bridge this gap—providing LLSM-level performance in a substantially more accessible and reproducible format. By combining high-NA optics with a precision-machined baseplate and open-source documentation, Altair offers a practical, high-resolution solution that can be readily adopted by non-specialist laboratories.

      Strengths: 

The authors succeed in their goals of implementing a relatively low-cost (~ USD 150K) open-source microscope that is easy to align. The ease of alignment rests on using custom-designed baseplates with dowel pins for precise positioning of optics based on computer analysis of opto-mechanical tolerances, as well as the optical path design. They simplify the excitation optics over Lattice light-sheet microscopes by using a Gaussian beam for illumination while maintaining lateral and axial resolutions of 235 and 350 nm across a 260-um field of view after deconvolution. In doing so they rest on foundational principles of optical microscopy that what matters for lateral resolution is the numerical aperture of the detection objective and proper sampling of the image field onto the detector, and the axial resolution depends on the thickness of the light-sheet when it is thinner than the depth of field of the detection objective. This concept has unfortunately not been completely clear to users of high-resolution light-sheet microscopes and is thus a valuable demonstration. The microscope is controlled by an open-source software, Navigate, developed by the authors, and it is thus foreseeable that different versions of this system could be implemented depending on experimental needs while maintaining easy alignment and low cost. They demonstrate system performance successfully by characterizing their sheet, point-spread function, and visualization of sub-cellular structures in mammalian cells, including microtubules, actin filaments, nuclei, and the Golgi apparatus.

      We thank the reviewer for their thoughtful and generous assessment of our work. We are pleased that the manuscript’s emphasis on fundamental optical principles, design rationale, and practical implementation was clearly conveyed. We agree that Altair’s modular and accessible architecture provides a strong foundation for future variants tailored to specific experimental needs. To facilitate this, we have made all Zemax simulations, CAD files, and build documentation openly available on our GitHub repository, enabling users to adapt and extend the system for diverse imaging applications.
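The resolution principles the reviewer summarizes (lateral resolution set by the detection NA and Nyquist sampling; axial resolution set by light-sheet thickness) can be illustrated with a short numeric sketch. The wavelength and NA values below are illustrative assumptions, not Altair-LSFM specifications:

```python
# Back-of-envelope sketch of the optical principles discussed above:
# lateral resolution is governed by the detection objective's NA,
# and the camera must sample that resolution at Nyquist or better.
# Values are illustrative only.

def lateral_resolution_nm(wavelength_nm: float, na: float) -> float:
    """Rayleigh-criterion lateral resolution limit of the detection objective."""
    return 0.61 * wavelength_nm / na

def nyquist_pixel_nm(lateral_res_nm: float) -> float:
    """Maximum camera pixel size (in sample space) for Nyquist sampling."""
    return lateral_res_nm / 2

if __name__ == "__main__":
    # Assumed: 520 nm emission, a 1.1-NA detection objective
    res = lateral_resolution_nm(520, 1.1)
    print(f"lateral limit ~{res:.0f} nm; Nyquist pixel <= {nyquist_pixel_nm(res):.0f} nm")
```

With these assumed values the diffraction limit comes out near 290 nm, consistent in scale with the sub-300 nm lateral resolution reported after deconvolution.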

      Weaknesses:

      There is a fixation on comparison to the first-generation lattice light-sheet microscope, which has evolved significantly since then:

      (1) The authors claim that commercial lattice light-sheet microscopes (LLSM) are "complex, expensive, and alignment intensive", I believe this sentence applies to the open-source version of LLSM, which was made available for wide dissemination. Since then, a commercial solution has been provided by 3i, which is now being used in multiple cores and labs but does require routine alignments. However, Zeiss has also released a commercial turn-key system, which, while expensive, is stable, and the complexity does not interfere with the experience of the user. Though in general, statements on ease of use and stability might be considered anecdotal and may not belong in a scientific article, unreferenced or without data.

      We thank the reviewer for this thoughtful and constructive comment. We have revised the manuscript to more clearly distinguish between the original open-source implementation of LLSM and subsequent commercial versions by 3i and ZEISS. The revised Introduction and Discussion now explicitly note that while open-source and early implementations of LLSM can require expert alignment and maintenance, commercial systems—particularly the ZEISS Lattice Lightsheet 7—are designed for automated operation and stable, turn-key use, albeit at higher cost and with limited modifiability. We have also moderated earlier language regarding usability and stability to avoid anecdotal phrasing.

      We also now provide a more objective proxy for system complexity: the number of optical elements that require precise alignment during assembly and maintenance thereafter. The original open-source LLSM setup includes approximately 29 optical components that must each be carefully positioned laterally, angularly, and coaxially along the optical path. In contrast, the first-generation Altair-LSFM system contains only nine such elements. By this metric, Altair-LSFM is considerably simpler to assemble and align, supporting our overarching goal of making high-resolution light-sheet imaging more accessible to non-specialist laboratories.

(2) One of the major limitations of the first generation LLSM was the use of a 5 mm coverslip, which was a hindrance for many users. However, the Zeiss system elegantly solves this problem, and so does Oblique Plane Microscopy (OPM), while the Altair-LSFM retains this feature, which may dissuade widespread adoption. This limitation and how it may be overcome in future iterations is not discussed.

      We thank the reviewer for this helpful comment. We agree that the use of 5 mm diameter coverslips, while enabling high-NA imaging in the current Altair-LSFM configuration, may pose a practical limitation for some users. We now discuss this more explicitly in the revised manuscript. Specifically, we note that replacing the detection objective provides a straightforward solution to this constraint. For example, as demonstrated by Moore et al. (Lab Chip, 2021), pairing the Zeiss W Plan-Apochromat 20×/1.0 detection objective with the Thorlabs TL20X-MPL illumination objective allows imaging beyond the physical surfaces of both objectives, eliminating the need for small-format coverslips. In the revised text, we propose this modification as an accessible path toward greater compatibility with conventional sample mounting formats. We also note in the Discussion that Oblique Plane Microscopy (OPM) inherently avoids such nonstandard mounting requirements and, owing to its single-objective architecture, is fully compatible with standard environmental chambers.

      (3) Further, on the point of sample flexibility, all generations of the LLSM, and by the nature of its design, the OPM, can accommodate live-cell imaging with temperature, gas, and humidity control. It is unclear how this would be implemented with the current sample chamber. This limitation would severely limit use cases for cell biologists, for which this microscope is designed. There is no discussion on this limitation or how it may be overcome in future iterations.

We thank the reviewer for this important observation and agree that environmental control is critical for live-cell imaging applications. It is worth noting that the original open-source LLSM design, as well as the commercial version developed by 3i, provided temperature regulation but did not include integrated control of CO₂ or humidity. Despite this limitation, these systems have been widely adopted and have generated significant biological insights. We also acknowledge that both OPM and the ZEISS implementation of LLSM offer clear advantages in this respect, providing compatibility with standard commercial environmental chambers that support full regulation of temperature, CO₂, and humidity.

      In the revised manuscript, we expand our discussion of environmental control in Supplementary Note 2, where we describe the Altair-LSFM chamber design in more detail and discuss its current implementation of temperature regulation and HEPES-based pH stabilization. Additionally, the Discussion now explicitly notes that OPM avoids the challenges associated with non-standard sample mounting and is inherently compatible with conventional environmental enclosures.

(4) The authors' comparison to LLSM is constrained to the "square" lattice, which, as they point out, is the most used optical lattice (though this also might be considered anecdotal). The LLSM original design, however, goes far beyond the square lattice, including hexagonal lattices, the ability to do structured illumination, and greater flexibility in general in terms of light-sheet tuning for different experimental needs, as well as not being limited to just sample scanning. Thus, the Altair-LSFM cannot compare to the original LLSM in terms of versatility, even if comparisons to the resolution provided by the square lattice are fair.

We agree that the original LLSM design offers substantially greater flexibility than what is reflected in our initial comparison, including the ability to generate multiple lattice geometries (e.g., square and hexagonal), operate in structured illumination mode, and acquire volumes using both sample- and light-sheet scanning strategies. To address this, we now include Supplementary Note 3, which provides a detailed overview of the illumination modes and imaging flexibility afforded by the original LLSM implementation, and how these capabilities compare to both the commercial ZEISS Lattice Lightsheet 7 and our Altair-LSFM system. In addition, we have revised the discussion to explicitly acknowledge that the original LLSM could operate in alternative scan strategies beyond sample scanning, providing greater context for readers and ensuring a more balanced comparison.

      (5) There is no demonstration of the system's live-imaging capabilities or temporal resolution, which is the main advantage of existing light-sheet systems.

In the revised manuscript, we now include a demonstration of live-cell imaging to directly validate Altair-LSFM’s suitability for dynamic biological applications. We also explicitly discuss the temporal resolution of the system in the main text (see Optoelectronic Design of Altair-LSFM), where we detail both software- and hardware-related limitations. Specifically, we evaluate the maximum imaging speed achievable with Altair-LSFM in conjunction with our open-source control software, navigate.

      For simplicity and reduced optoelectronic complexity, the current implementation powers the piezo through the ASI Tiger Controller, which modestly reduces its bandwidth. Nonetheless, for a 100 µm stroke typical of light-sheet imaging, we achieved sufficient performance to support volumetric imaging at most biologically relevant timescales. These results, along with additional discussion of the design trade-offs and performance considerations, are now included in the revised manuscript and expanded upon in the supplementary material.

      While the microscope is well designed and completely open source, it will require experience with optics, electronics, and microscopy to implement and align properly. Experience with custom machining or soliciting a machine shop is also necessary. Thus, in my opinion, it is unlikely to be implemented by a lab that has zero prior experience with custom optics or can hire someone who does. Altair-LSFM may not be as easily adaptable or implementable as the authors describe or perceive in any lab that is interested, even if they can afford it. The authors indicate they will offer "workshops," but this does not necessarily remove the barrier to entry or lower it, perhaps as significantly as the authors describe.

      We appreciate the reviewer’s perspective and agree that building any high-performance custom microscope—Altair-LSFM included—requires a basic understanding of (or willingness to learn) optics, electronics, and instrumentation. Such a barrier exists for all open-source microscopes, and our goal is not to eliminate this requirement entirely but to substantially reduce the technical and logistical challenges that typically accompany the construction of custom light-sheet systems.

      Importantly, no machining experience or in-house fabrication capabilities are required. Users can simply submit the provided CAD design files and specifications directly to commercial vendors for fabrication. We have made this process as straightforward as possible by supplying detailed build instructions, recommended materials, and vendor-ready files through our GitHub repository. Our dissemination strategy draws inspiration from other successful open-source projects such as mesoSPIM, which has seen widespread adoption—over 30 implementations worldwide—through a similar model of exhaustive documentation, open-source software, and community support via user meetings and workshops.

      We also recognize that documentation alone cannot fully replace hands-on experience. To further lower barriers to adoption, we are actively working with commercial vendors to streamline procurement and assembly, and Altair-LSFM is supported by a Biomedical Technology Development and Dissemination (BTDD) grant that provides resources for hosting workshops, offering real-time community support, and developing supplementary training materials.

      In the revised manuscript, we now expand the Discussion to explicitly acknowledge these implementation considerations and to outline our ongoing efforts to support a broad and diverse user base, ensuring that laboratories with varying levels of technical expertise can successfully adopt and maintain the Altair-LSFM platform.

      There is a claim that this design is easily adaptable. However, the requirement of custom-machined baseplates and in silico optimization of the optical path basically means that each new instrument is a new design, even if the Navigate software can be used. It is unclear how Altair-LSFM demonstrates a modular design that reduces times from conception to optimization compared to previous implementations.

      We thank the reviewer for this insightful comment and agree that our original language regarding adaptability may have overstated the degree to which Altair-LSFM can be modified without prior experience. It was not our intention to imply that the system can be easily redesigned by users with limited technical background. Meaningful adaptations of the optical or mechanical design do require expertise in optical layout, optomechanical design, and alignment.

      That said, for laboratories with such expertise, we aim to facilitate modifications by providing comprehensive resources—including detailed Zemax simulations, complete CAD models, and alignment documentation. These materials are intended to reduce the development burden for expert users seeking to tailor the system to specific experimental requirements, without necessitating a complete re-optimization of the optical path from first principles.

      In the revised manuscript, we clarify this point and temper our language regarding adaptability to better reflect the realistic scope of customization. Specifically, we now state in the Discussion: “For expert users who wish to tailor the instrument, we also provide all Zemax illumination-path simulations and CAD files, along with step-by-step optimization protocols, enabling modification and re-optimization of the optical system as needed.” This revision ensures that readers clearly understand that Altair-LSFM is designed for reproducibility and straightforward assembly in its default configuration, while still offering the flexibility for modification by experienced users.

      Reviewer #3 (Public review):

      Summary: 

This manuscript introduces a high-resolution, open-source light-sheet fluorescence microscope optimized for sub-cellular imaging. The system is designed for ease of assembly and use, incorporating a custom-machined baseplate and in silico optimized optical paths to ensure robust alignment and performance. The authors demonstrate lateral and axial resolutions of ~235 nm and ~350 nm after deconvolution, enabling imaging of sub-diffraction structures in mammalian cells. The important feature of the microscope is the clever and elegant adaptation of simple Gaussian beams, smart beam shaping, galvo pivoting and high NA objectives to ensure a uniform thin light-sheet of around 400 nm in thickness, over a 266 micron wide field of view, pushing the axial resolution of the system beyond the regular diffraction limited-based tradeoffs of light-sheet fluorescence microscopy. Compelling validation using fluorescent beads and multicolor cellular imaging highlights the system's performance and accessibility. Moreover, a very extensive and comprehensive manual of operation is provided in the form of supplementary materials. This provides a DIY blueprint for researchers who want to implement such a system.

We thank the reviewer for their thoughtful and positive assessment of our work. We appreciate their recognition of Altair-LSFM’s design and performance, including its ability to achieve high-resolution imaging throughout a 266-micron field of view. While Altair-LSFM approaches the practical limits of diffraction-limited performance, it does not exceed the fundamental diffraction limit; rather, it achieves near-theoretical resolution through careful optical optimization, beam shaping, and alignment. We are grateful for the reviewer’s acknowledgment of the accessibility and comprehensive documentation that make this system broadly implementable.
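The classical tradeoff the reviewer refers to, between light-sheet thinness and the propagation distance over which a Gaussian beam stays thin, can be illustrated with a short Rayleigh-range sketch. The waist, wavelength, and refractive-index values below are assumptions for illustration:

```python
# Sketch of the classical Gaussian-beam tradeoff: a thinner waist stays
# collimated over a shorter distance (the Rayleigh range). All inputs are
# illustrative assumptions, not Altair-LSFM design values.
import math

def rayleigh_range_um(waist_um: float, wavelength_um: float,
                      n: float = 1.33) -> float:
    """Rayleigh range of a Gaussian beam: the distance over which the
    beam waist grows by a factor of sqrt(2)."""
    return math.pi * waist_um**2 * n / wavelength_um

# A ~0.34 um waist (roughly a 400 nm FWHM sheet) at 488 nm in water:
zr = rayleigh_range_um(0.34, 0.488)
print(f"Rayleigh range ~{zr:.1f} um; confocal parameter ~{2 * zr:.1f} um")
```

Under these assumptions the confocal parameter is only a couple of microns, two orders of magnitude short of a 266-micron field of view, which is why beam-shaping and galvo-pivoting strategies of the kind described above are needed to achieve a uniformly thin sheet across the full field.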

      Strengths:

      (1) Strong and accessible technical innovation: With an elegant combination of beam shaping and optical modelling, the authors provide a high-resolution light-sheet system that overcomes the classical light-sheet tradeoff limit of a thin light-sheet and a small field of view. In addition, the integration of in silico modelling with a custom-machined baseplate is very practical and allows for ease of alignment procedures. Combining these features with the solid and super-extensive guide provided in the supplementary information, this provides a protocol for replicating the microscope in any other lab.

      (2) Impeccable optical performance and ease of mounting of samples: The system takes advantage of the same sample-holding method seen already in other implementations, but reduces the optical complexity.

      At the same time, the authors claim to achieve similar lateral and axial resolution to Lattice-light-sheet microscopy (although without a direct comparison (see below in the "weaknesses" section). The optical characterization of the system is comprehensive and well-detailed. Additionally, the authors validate the system imaging sub-cellular structures in mammalian cells.

      (3) Transparency and comprehensiveness of documentation and resources: A very detailed protocol provides detailed documentation about the setup, the optical modeling, and the total cost.

We thank the reviewer for their thoughtful and encouraging comments. We are pleased that the technical innovation, optical performance, and accessibility of Altair-LSFM were recognized. Our goal from the outset was to develop a diffraction-limited, high-resolution light-sheet system that balances optical performance with reproducibility and ease of implementation. We are also pleased that the use of precision-machined baseplates was recognized as a practical and effective strategy for achieving performance while maintaining ease of assembly.

      Weaknesses: 

(1) Limited quantitative comparisons: Although some qualitative comparison with previously published systems (diSPIM, lattice light-sheet) is provided throughout the manuscript, some side-by-side comparison would be of great benefit for the manuscript, even in the form of a theoretical simulation. While having a direct imaging comparison would be ideal, it's understandable that this goes beyond the interest of the paper; however, a table referencing image quality parameters (taken from the literature), such as signal-to-noise ratio, light-sheet thickness, and resolutions, would really enhance the features of the setup presented. Moreover, based also on the necessity for optical simplification, an additional comment on the importance/difference of dual objective/single objective light-sheet systems could really benefit the discussion.

      In the revised manuscript, we have significantly expanded our discussion of different light-sheet systems to provide clearer quantitative and conceptual context for Altair-LSFM. These comparisons are based on values reported in the literature, as we do not have access to many of these instruments (e.g., DaXi, diSPIM, or commercial and open-source variants of LLSM), and a direct experimental comparison is beyond the scope of this work.

      We note that while quantitative parameters such as signal-to-noise ratio are important, they are highly sample-dependent and strongly influenced by imaging conditions, including fluorophore brightness, camera characteristics, and filter bandpass selection. For this reason, we limited our comparison to more general image-quality metrics—such as light-sheet thickness, resolution, and field of view—that can be reliably compared across systems.

      Finally, per the reviewer’s recommendation, we have added additional discussion clarifying the differences between dual-objective and single-objective light-sheet architectures, outlining their respective strengths, limitations, and suitability for different experimental contexts.

      (2) Limitation to a fixed sample: In the manuscript, there is no mention of incubation temperature, CO₂ regulation, Humidity control, or possible integration of commercial environmental control systems. This is a major limitation for an imaging technique that owes its popularity to fast, volumetric, live-cell imaging of biological samples.

      We fully agree that environmental control is critical for live-cell imaging applications. In the revised manuscript, we now describe the design and implementation of a temperature-regulated sample chamber in Supplementary Note 2, which maintains stable imaging conditions through the use of integrated heating elements and thermocouples. This approach enables precise temperature control while minimizing thermal gradients and optical drift. For pH stabilization, we recommend the use of 10–25 mM HEPES in place of CO₂ regulation, consistent with established practice for most light-sheet systems, including the initial variant of LLSM. Although full humidity and CO₂ control are not readily implemented in dual-objective configurations, we note that single-objective designs such as OPM are inherently compatible with commercial environmental chambers and avoid these constraints. Together, these additions clarify how environmental control can be achieved within Altair-LSFM and situate its capabilities within the broader LSFM design space.

(3) System cost and data storage cost: While the system presented has the advantage of being open-source, it remains relatively expensive (considering the 150k without laser source and optical table, for example). The manuscript could benefit from a more direct comparison of the performance/cost ratio of existing systems, considering academic settings with budgets that most of the time would not allow for expensive architectures. Moreover, it would also be beneficial to discuss the adaptability of the system, in case a 30k objective could not be feasible. Will this system work with different optics (with the obvious limitations coming with the lower NA objective)? This could be an interesting point of discussion. Adaptability of the system in case of lower budgets or more cost-effective choices, depending on the needs.

      We agree that cost considerations are critical for adoption in academic environments. We would also like to clarify that the quoted $150k includes the optical table and laser source. In the revised manuscript, Supplementary Note 1 now includes an expanded discussion of cost–performance trade-offs and potential paths for cost reduction.

      Last, not much is said about the need for data storage. Light-sheet microscopy's bottleneck is the creation of increasingly large datasets, and it could be beneficial to discuss more about the storage needs and the quantity of data generated.

In the revised manuscript, we now include Supplementary Note 4, which provides a high-level discussion of data storage needs, approximate costs, and practical strategies for managing large datasets generated by light-sheet microscopy. This section offers general guidance, including file-format recommendations and cost considerations, but we note that actual costs will vary by institution and contractual agreements.
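The scale of the storage problem follows from simple arithmetic on camera geometry and acquisition rate. The sensor size, bit depth, plane count, and volume rate below are assumed, typical values, not figures from the manuscript:

```python
# Rough sketch of raw data rates for a light-sheet acquisition, to convey
# the scale of storage needs. Camera and acquisition parameters here are
# generic assumptions, not Altair-LSFM specifications.

def bytes_per_volume(width_px: int, height_px: int, planes: int,
                     bytes_per_px: int = 2) -> int:
    """Uncompressed size of one image volume (16-bit pixels by default)."""
    return width_px * height_px * planes * bytes_per_px

def gb_per_hour(vol_bytes: int, volumes_per_minute: float) -> float:
    """Sustained raw data rate in GB/hour for a given volume cadence."""
    return vol_bytes * volumes_per_minute * 60 / 1e9

# e.g., a 2048x2048 16-bit sCMOS sensor, 100 planes per volume,
# acquiring 4 volumes per minute:
vol = bytes_per_volume(2048, 2048, 100)
print(f"~{vol / 1e9:.2f} GB/volume, ~{gb_per_hour(vol, 4):.0f} GB/hour")
```

Even at a modest four volumes per minute, an uncompressed full-frame acquisition approaches a quarter terabyte per hour, which is why compressed, chunked file formats and institutional storage planning are worth discussing alongside the instrument itself.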

      Conclusion:

Altair-LSFM represents a well-engineered and accessible light-sheet system that addresses a longstanding need for high-resolution, reproducible, and affordable sub-cellular light-sheet imaging. While some aspects, such as comparative benchmarking and validation and the current limitation to fixed samples, would benefit from further development, the manuscript makes a compelling case for Altair-LSFM as a valuable contribution to the open microscopy scientific community.

      Recommendations for the authors:

      Reviewer #2 (Recommendations for the authors):

      (1) A picture, or full CAD design of the complete instrument, should be included as a main figure.

      A complete CAD rendering of the microscope is now provided in Supplementary Figure 4.

      (2) There is no quantitative comparison of the effects of the tilting resonant galvo; only a cartoon, a figure should be included.

      The cartoon was intended purely as an educational illustration to conceptually explain the role of the tilting resonant galvo in shaping and homogenizing the light sheet. To clarify this intent, we have revised both the figure legend and corresponding text in the main manuscript. For readers seeking quantitative comparisons, we now reference the original study that provides a detailed analysis of this optical approach, as well as a review on the subject.

      (3) Description of L4 is missing in the Figure 1 caption.

      Thank you for catching this omission. We have corrected it.

      (4) The beam profiles in Figures 1c and 3a, please crop and make the image bigger so the profile can be appreciated. The PSFs in Figure 3c-e should similarly be enlarged and presented using a dynamic range/LUT such that any aberrations can be appreciated.

      In Figure 1c, our goal was to qualitatively illustrate the uniformity of the light-sheet across the full field of view, while Figure 1d provided the corresponding quantitative cross-section. To improve clarity, we have added an additional figure panel offering a higher-magnification, localized view of the light-sheet profile. For Figure 3c–e, we have enlarged the PSF images and adjusted the display range to better convey the underlying signal and allow subtle aberrations to be appreciated.

      (5) It is unclear why LLSM is being used as the gold standard, since in its current commercial form, available from Zeiss, it is a turn-key system designed for core facilities. The original LLSM is also a versatile instrument that provides much more than the square lattice for illumination, including structured illumination, hexagonal lattices, live-cell imaging, wide-field illumination, different scan modes, etc. These additional features are not even mentioned when compared to the Altair-LSFM. If a comparison is to be provided, it should be fair and balanced. Furthermore, as outlined in the public review, anecdotal statements on "most used", "difficult to align", or "unstable" should not be provided without data.

      In the revised manuscript, we have carefully removed anecdotal statements and, where appropriate, replaced them with quantitative or verifiable information. For instance, we now explicitly report that the square lattice was used in 16 of the 20 figure subpanels in the original LLSM publication, and we include a proxy for optical complexity based on the number of optical elements requiring alignment in each system.

      We also now clearly distinguish between the original LLSM design—which supports multiple illumination and scanning modes—and its subsequent commercial variants, including the ZEISS Lattice Lightsheet 7, which prioritizes stability and ease of use over configurational flexibility (see Supplementary Note 3).

      (6) The authors should recognize that implementing custom optics, no matter how well designed, is a big barrier to cross for most cell biology labs.

      We fully understand and now acknowledge in the main text that implementing custom optics can present a significant barrier, particularly for laboratories without prior experience in optical system assembly. However, similar challenges were encountered during the adoption of other open-source microscopy platforms, such as mesoSPIM and OpenSPIM, both of which have nonetheless achieved widespread implementation. Their success has largely been driven by exhaustive documentation, strong community support, and standardized design principles—approaches we have also prioritized in Altair-LSFM. We have therefore made all CAD files, alignment guides, and detailed build documentation publicly available and continue to develop instructional materials and community resources to further reduce the barrier to adoption.

(7) Statements on "hands-on workshops", though laudable, may not be appropriate to include in a scientific publication without some documentation on the influence they have had on implementing the microscope.

      We understand the concern. Our intention in mentioning hands-on workshops was to convey that the dissemination effort is supported by an NIH Biomedical Technology Development and Dissemination grant, which includes dedicated channels for outreach and community engagement. Nonetheless, we agree that such statements are not appropriate without formal documentation of their impact, and we have therefore removed this text from the revised manuscript.

      (8) It is claimed that the microscope is "reliable" in the discussion, but with no proof, long-term stability should be assessed and included.

While our experience has been that Altair-LSFM remains well-aligned over time, especially in comparison to other light-sheet systems we have worked on throughout the last 11 years, we acknowledge that this assessment is anecdotal. As such, we have omitted this claim from the revised manuscript.

      (9) Due to the reliance on anecdotal statements and comparisons without proof to other systems, this paper at times reads like a brochure rather than a scientific publication. The authors should consider editing their manuscript accordingly to focus on the technical and quantifiable aspects of their work.

      We agree with the reviewer’s assessment and have revised the manuscript to remove anecdotal comparisons and subjective language. Where possible, we now provide quantitative metrics or verifiable data to support our statements.

      Reviewer #3 (Recommendations for the authors):

      Other minor points that could improve the manuscript (although some of these points are explained in the huge supplementary manual): 

(1) The authors explain thoroughly their design, and they chose a sample-scanning method. I think that a brief discussion of the advantages and disadvantages of such a method over, for example, a laser-scanning system (with fixed sample) in the main text will be highly beneficial for the users.

      In the revised manuscript, we now include a brief discussion in the main text outlining the advantages and limitations of a sample-scanning approach relative to a light-sheet–scanning system. Specifically, we note that for thin, adherent specimens, sample scanning minimizes the optical path length through the sample, allowing the use of more tightly focused illumination beams that improve axial resolution. We also include a new supplementary figure illustrating how this configuration reduces the propagation length of the illumination light sheet, thereby enhancing axial resolution.

      (2) The authors justify selecting a 0.6 NA illumination objective over alternatives (e.g., Special Optics), but the manuscript would benefit from a more quantitative trade-off analysis (beam waist, working distance, sample compatibility) with other possibilities. Within the objective context, a comparison of the performances of this system with the new and upcoming single-objective light-sheet methods (and the ones based also on optical refocusing, e.g., DAXI) would be very interesting for the goodness of the manuscript.

      In the revised manuscript, we now provide a quantitative trade-off analysis of the illumination objectives in Supplementary Note 1, including comparisons of beam waist, working distance, and sample compatibility. This section also presents calculated point spread functions for both the 0.6 NA and 0.67 NA objectives, outlining the performance trade-offs that informed our design choice. In addition, Supplementary Note 3 now includes a broader comparison of Altair-LSFM with other light-sheet modalities, including diSPIM, ASLM, and OPM, to further contextualize the system’s capabilities within the evolving light-sheet microscopy landscape.

      (3) The modularity of the system is implied in the context of the manuscript, but not fully explained. The authors should specify more clearly, for example, if cameras could be easily changed, objectives could be easily swapped, light-sheet thickness could be tuned by changing cylindrical lens, how users might adapt the system for different samples (e.g., embryos, cleared tissue, live imaging), .etc, and discuss eventual constraints or compatibility issues to these implementations.

Altair-LSFM was explicitly designed and optimized for imaging live adherent cells, where sample scanning and short light-sheet propagation lengths provide optimal axial resolution (Supplementary Note 3). While the same platform could be used for superficial imaging in embryos, systems implementing multiview illumination and detection schemes are better suited for such specimens. Similarly, cleared tissue imaging typically requires specialized solvent-compatible objectives and approaches such as ASLM that maximize the field of view. We have now added text to the Design Principles section that explicitly states this.

      Altair-LSFM offers varying levels of modularity depending on the user’s level of expertise. For entry-level users, the illumination numerical aperture—and therefore the light-sheet thickness and propagation length—can be readily adjusted by tuning the rectangular aperture conjugate to the back pupil of the illumination objective, as described in the Design Principles section. For mid-level users, alternative configurations of Altair-LSFM, including different detection objectives, stages, filter wheels, or cameras, can be readily implemented (Supplementary Note 1). Importantly, navigate natively supports a broad range of hardware devices, and new components can be easily integrated through its modular interface. For expert users, all Zemax simulations, CAD models, and step-by-step optimization protocols are openly provided, enabling complete re-optimization of the optical design to meet specific experimental requirements.

      (4) Resolution measurements before and after deconvolution are central to the performance claim, but the deconvolution method (PetaKit5D) is only briefly mentioned in the main text, it's not referenced, and has to be clarified in more detail, coherently with the precision of the supplementary information. More specifically, PetaKit5D should be referenced in the main text, the details of the deconvolution parameters discussed in the Methods section, and the computational requirements should also be mentioned. 

      In the revised manuscript, we now provide a dedicated description of the deconvolution process in the Methods section, including the specific parameters and algorithms used. We have also explicitly referenced PetaKit5D in the main text to ensure proper attribution and clarity. Additionally, we note the computational requirements associated with this analysis in the same section for completeness.

      (5)  Image post-processing is not fully explained in the main text. Since the system is sample-scanning based, no word in the main text is spent on deskewing, which is an integral part of the post-processing to obtain a "straight" 3D stack. Since other systems implement such a post-processing algorithm (for example, single-objective architectures), it would be beneficial to have some discussion about this, and also a brief comparison to other systems in the main text in the methods section. 

In the revised manuscript, we now explicitly describe both deskewing (shearing) and deconvolution procedures in the Alignment and Characterization section of the main text and direct readers to the Methods section. We also briefly explain why the data must be sheared to correct for the angled sample-scanning geometry in LLSM and Altair-LSFM, as well as in both sample-scanning and laser-scanning variants of OPM.

      (6) A brief discussion on comparative costs with other systems (LLSM, dispim, etc.) could be helpful for non-imaging expert researchers who could try to implement such an optical architecture in their lab.

      Unfortunately, the exact costs of commercial systems such as LLSM or diSPIM are typically not publicly available, as they depend on institutional agreements and vendor-specific quotations. Nonetheless, we now provide approximate cost estimates in Supplementary Note 1 to help readers and prospective users gauge the expected scale of investment relative to other advanced light-sheet microscopy systems.

      (7) The "navigate" control software is provided, but a brief discussion on its advantages compared to an already open-access system, such as Micromanager, could be useful for the users.

In the revised manuscript, we now include Supplementary Note 5, which discusses the advantages and disadvantages of different open-source microscope control platforms, including navigate and Micro-Manager. In brief, navigate was designed to provide turnkey support for multiple light-sheet architectures, with pre-configured acquisition routines optimized for Altair-LSFM, integrated data management with support for multiple file formats (TIFF, HDF5, N5, and Zarr), and full interoperability with OME-compliant workflows. By contrast, while Micro-Manager offers a broader library of hardware drivers, it typically requires manual configuration and custom scripting for advanced light-sheet imaging workflows.

(8) The cost and parts are well documented, but the time and expertise required are not crystal clear. Adding a simple time estimate (perhaps in the Supplement Section) of assembly/alignment/installation/validation and first imaging will be very beneficial for users. Also, what level of expertise is assumed (prior optics experience, for example) to be needed to install a system like this? This can help non-optics-expert users to better understand what kind of adventure they are putting themselves through.

We thank the reviewer for this helpful suggestion. To address this, we have added Supplementary Table S5, which provides approximate time estimates for assembly, alignment, validation, and first imaging based on the user’s prior experience with optical systems. The table distinguishes between novice (no prior experience), moderate (some experience using but not assembling optical systems), and expert (experienced in building and aligning optical systems) users. This addition is intended to give prospective builders a realistic sense of the time commitment and level of expertise required to assemble and validate Altair-LSFM.

      Minor things in the main text:

      (1) Line 109: The cost is considered "excluding the laser source". But then in the table of costs, you mention L4cc as a "multicolor laser source", for 25 K. Can you explain this better? Are the costs correct with or without the laser source? 

      We acknowledge that the statement in line 109 was incorrect—the quoted ~$150k system cost does include the laser source (L4cc, listed at $25k in the cost table). We have corrected this in the revised manuscript.

(2) Line 113: You say "lateral resolution", but then you state a 3D resolution (230 nm x 230 nm x 370 nm). This needs to be fixed.

      Thank you, we have corrected this.

      (3) Line 138: Is the light-sheet uniformity proven also with a fluorescent dye? This could be beneficial for the main text, showing the performance of the instrument in a fluorescent environment.

      The light-sheet profiles shown in the manuscript were acquired using fluorescein to visualize the beam. We have revised the main text and figure legends to clearly state this.

      (4) Line 149: This is one of the most important features of the system, defying the usual tradeoff between light-sheet thickness and field of view, with a regular Gaussian beam. I would clarify more specifically how you achieve this because this really is the most powerful takeaway of the paper.

      We thank the reviewer for this key observation. The ability of Altair-LSFM to maintain a thin light sheet across a large field of view arises from diffraction effects inherent to high NA illumination. Specifically, diffraction elongates the PSF along the beam’s propagation direction, effectively extending the region over which the light sheet remains sufficiently thin for high-resolution imaging. This phenomenon, which has been the subject of active discussion within the light-sheet microscopy community, allows Altair-LSFM to partially overcome the conventional trade-off between light-sheet thickness and propagation length. We now clarify this point in the main text and provide a more detailed discussion in Supplementary Note 3, which is explicitly referenced in the discussion of the revised manuscript.
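For background, the conventional trade-off discussed above can be summarized with the standard Gaussian-beam propagation relations from textbook optics. This is included only as context for the reviewer, not as a description of the simulations in the manuscript:

```latex
% Standard Gaussian-beam propagation (background only).
% The 1/e^2 beam radius w(z) grows away from the waist w_0 as
w(z) = w_0 \sqrt{1 + \left(\frac{z}{z_R}\right)^2},
\qquad
z_R = \frac{\pi w_0^2 n}{\lambda},
% so a thinner sheet (smaller w_0) quadratically shortens the
% Rayleigh range z_R, i.e., the usable propagation length.
% This is the trade-off that diffraction at high illumination NA
% partially relaxes, as described in Supplementary Note 3.
```

In this picture, halving the sheet waist reduces the usable propagation length fourfold, which is why maintaining a thin sheet across a large field of view is ordinarily so difficult.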

      (5) Line 171: You talk about repeatable assembly...have you tried many different baseplates? Otherwise, this is a complicated statement, since this is a proof-of-concept paper. 

      We thank the reviewer for this comment. We have not yet validated the design across multiple independently assembled baseplates and therefore agree that our previous statement regarding repeatable assembly was premature. To avoid overstating the current level of validation, we have removed this statement from the revised manuscript.

      (6) Line 187: same as above. You mention "long-term stability". For how long did you try this? This should be specified in numbers (days, weeks, months, years?) Otherwise, it is a complicated statement to make, since this is a proof-of-concept paper.

      We also agree that referencing long-term stability without quantitative backing is inappropriate, and have removed this statement from the revised manuscript.

(7) Line 198: "rapid z-stack acquisition". How rapid? Also, what is the limitation of the galvo-scanning in terms of the imaging speed of the system? This should be noted in the methods section.

      In the revised manuscript, we now clarify these points in the Optoelectronic Design section. Specifically, we explicitly note that the resonant galvo used for shadow reduction operates at 4 kHz, ensuring that it is not rate-limiting for any imaging mode. In the same section, we also evaluate the maximum acquisition speeds achievable using navigate and report the theoretical bandwidth of the sample-scanning piezo, which together define the practical limits of volumetric acquisition speed for Altair-LSFM.

(8) Line 234: PetaKit5D is discussed in the additional documentation, but should be referenced here, as well.

      We now reference and cite PetaKit5D.

      (9) Line 256: "values are on par with LLSM", but no values are provided. Some details should also be provided in the main text.

In the revised manuscript, we now provide the lateral and axial resolution values originally reported for LLSM in the main text to facilitate direct comparison with Altair-LSFM. Additionally, Supplementary Note 3 now includes an expanded discussion on the nuances of resolution measurement and reporting in light-sheet microscopy.

      Figures:

      (1) Figure 1 could be implemented with Figure 3. They're both discussing the validation of the system (theoretically and with simulations), and they could be together in different panels of the same figure. The experimental light-sheet seems to be shown in a transmission mode. Showing a pattern in a fluorescent dye could also be beneficial for the paper.

      In Figure 1, our goal was to guide readers through the design process—illustrating how the detection objective’s NA sets the system’s resolution, which defines the required pixel size for Nyquist sampling and, in turn, the field of view. We then use Figure 1b–c to show how the illumination beam was designed and simulated to achieve that field of view. In contrast, Figure 3 presents the experimental validation of the illumination system. To avoid confusion, we now clarify in the text that the light sheet shown in Figure 3 was visualized in a fluorescein solution and imaged in transmission mode. While we agree that Figures 1 and 3 both serve to validate the system, we prefer to keep them as separate figures to maintain focus within each panel. We believe this organization better supports the narrative structure and allows readers to digest the theoretical and experimental validations independently.

      (2) Figure 3: Panels d and e show the same thing. Why would you expect that xz and yz profiles should be different? Is this due to the orientation of the objectives towards the sample?

      In Figure 3, we present the PSF from all three orthogonal views, as this provides the most transparent assessment of PSF quality—certain aberration modes can be obscured when only select perspectives are shown. In principle, the XZ and YZ projections should be equivalent in a well-aligned system. However, as seen in the XZ projection, a small degree of coma is present that is not evident in the YZ view. We now explicitly note this observation in the revised figure caption to clarify the difference between these panels.

      (3) Figure 4's single boxes lack a scale bar, and some of the Supplementary Figures (e.g. Figure 5) lack detailed axis labels or scale bars. Also, in the detailed documentation, some figures are referred to as Figure 5. Figure 7 or, for example, figure 6. Figure 8, and this makes the cross-references very complicated to follow

      In the revised manuscript, we have corrected these issues. All figures and supplementary figures now include appropriate scale bars, axis labels, and consistent formatting. We have also carefully reviewed and standardized all cross-references throughout the main text and supplementary documentation to ensure that figure numbering is accurate and easy to follow.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:  

ZMAT3 is a p53 target gene that the Lal group and others have shown is important for p53-mediated tumor suppression, and which plays a role in the control of RNA splicing. In this manuscript, Lal and colleagues perform quantitative proteomics of cells with ZMAT3 knockout and show that the enzyme hexokinase HKDC1 is the most upregulated protein. Mechanistically, the authors show that ZMAT3 does not appear to directly regulate the expression of HKDC1; rather, they show that the transcription factor c-JUN was strongly enriched in ZMAT3 pull-downs in IP-mass spec experiments, and they perform IP-western to demonstrate an interaction between c-JUN and ZMAT3. Importantly, the authors demonstrate, using ChIP-qPCR, that JUN is present at the HKDC1 gene (intron 1) in ZMAT3 WT cells and shows markedly enhanced binding in ZMAT3 KO cells. The data best fit a model whereby p53 transactivates ZMAT3, leading to decreased JUN binding to the HKDC1 promoter, and altered mitochondrial respiration.

      Strengths:

      The authors use multiple orthogonal approaches to test the majority of their findings.  The authors offer a potentially new activity of ZMAT3 in tumor suppression by p53: the control of mitochondrial respiration.  

      Weaknesses:

      Some indication as to whether other c-JUN target genes are also regulated by ZMAT3 would improve the broad relevance of the authors' findings.  

We thank the reviewer for the kind words and the thoughtful suggestion. As recommended, to identify additional c-JUN targets potentially regulated by ZMAT3, we intersected the genes upregulated upon ZMAT3 knockout (from our RNA-seq data) with the ChIP-Atlas dataset for human c-JUN and cross-referenced these with c-JUN peaks from three ENCODE cell lines. From this analysis, we selected the top four candidate genes for further analysis: LAMA2, VSNL1, SAMD3, and IL6R (Figure 5-figure supplement 2A-D). Like HKDC1, these genes were upregulated in ZMAT3-KO cells, and this upregulation was abolished upon siRNA-mediated JUN knockdown in ZMAT3-KO cells (Figure 5-figure supplement 2E). Moreover, by ChIP-qPCR we observed increased JUN binding to the JUN peak for these genes in ZMAT3-KO cells as compared to ZMAT3-WT cells (Figure 5-figure supplement 2F). As described on page 11 of the revised manuscript, these results suggest that the ZMAT3/JUN axis negatively regulates HKDC1 expression and additional c-JUN target genes.

      Reviewer #2 (Public review):

      Summary:

      The study elucidates the role of the recently discovered mediator of p53 tumor suppressive activity, ZMAT3. Specifically, the authors find that ZMAT3 negatively regulates HKDC1, a gene involved in the control of mitochondrial respiration and cell proliferation.  

      Strengths:

      Mechanistically, ZMAT3 suppresses HKDC1 transcription by sequestering JUN and preventing its binding to the HKDC1 promoter, resulting in reduced HKDC1 expression. Conversely, p53 mutation leads to ZMAT3 downregulation and HKDC1 overexpression, thereby promoting increased mitochondrial respiration and proliferation. This mechanism is novel; however, the authors should address several points.  

      Weaknesses:

      The authors conduct mechanistic experiments (e.g., transcript and protein quantification, luciferase assays) to demonstrate regulatory interactions between p53, ZMAT3, JUN, and HKDC1. These findings should be supported with functional assays, such as proliferation, apoptosis, or mitochondrial respiration analyses.  

We thank the reviewer for appreciating our work and for this valuable suggestion. The reviewer rightly pointed out that supporting the regulatory interactions between p53, ZMAT3, JUN, and HKDC1 with functional assays such as proliferation, apoptosis, and mitochondrial respiration analyses would strengthen our mechanistic data. During the revision of our manuscript, we attempted to address this point by performing simultaneous knockdown of these proteins; however, we observed substantial toxicity under these conditions, making the functional assays technically unfeasible. This outcome was not unexpected, as knockdown of JUN or HKDC1 individually results in growth defects. We therefore focused our efforts on addressing the recommendations for the authors.

      Reviewer #3 (Public review):

      Summary:  

In their manuscript, Kumar et al. investigate the mechanisms underlying the tumor suppressive function of the RNA binding protein ZMAT3, a previously described tumor suppressor in the p53 pathway. To this end, they use RNA-sequencing and proteomics to characterize changes in ZMAT3-deficient cells, leading them to identify the hexokinase HKDC1 as upregulated with ZMAT3 deficiency first in colorectal cancer cells, then in other cell types of both mouse and human origin. This increase in HKDC1 is associated with increased mitochondrial respiration. As ZMAT3 has been reported as an RNA-binding and DNA-binding protein, the authors investigated this via PAR-CLIP and ChIP-seq but did not observe ZMAT3 binding to HKDC1 pre-mRNA or DNA. Thus, to better understand how ZMAT3 regulates HKDC1, the authors used quantitative proteomics to identify ZMAT3-interacting proteins. They identified the transcription factor JUN as a ZMAT3-interacting protein and showed that JUN promotes the increased HKDC1 RNA expression seen with ZMAT3 inactivation. They propose that ZMAT3 inhibits JUN-mediated transcriptional induction of HKDC1 as a mechanism of tumor suppression. This work uncovers novel aspects of the p53 tumor suppressor pathway.

      Strengths:

      This novel work sheds light on one of the most well-established yet understudied p53 target genes, ZMAT3, and how it contributes to p53's tumor suppressive functions. Overall, this story establishes a p53-ZMAT3-HKDC1 tumor suppressive axis, which has been strongly substantiated using a variety of orthogonal approaches, in different cell lines and with different data sets.  

      Weaknesses:

      While the role of p53 and ZMAT3 in repressing HKDC1 is well substantiated, there is a gap in understanding how ZMAT3 acts to repress JUN-driven activation of the HKDC1 locus. How does ZMAT3 inhibit JUN binding to HKDC1? Can targeted ChIP experiments or RIP experiments be used to make a more definitive model? Can ZMAT3 mutants help to understand the mechanisms? Future work can further establish the mechanisms underlying how ZMAT3 represses JUN activity.  

We thank the reviewer for the kind words and the invaluable suggestion. The reviewer has an excellent point regarding how ZMAT3 inhibits JUN binding to the HKDC1 locus. Our new data included in the revised manuscript show that the ZMAT3-JUN interaction is lost in the presence of DNase or RNase, indicating that the interaction requires both DNA and RNA. This result suggests that ZMAT3 and JUN form an RNA-dependent, chromatin-associated complex. Although not directly investigated in our study, this finding is consistent with emerging evidence that RBPs can function as chromatin-associated cofactors in transcription. For example, functional interplay between the transcription factor YY1 and the RNA-binding protein RBM25 co-regulates a broad set of genes, where RBM25 appears to engage promoters first and then recruit YY1, with RNA proposed to guide target recognition. We have discussed this possibility in the discussion section of the revised manuscript (page 13). We agree that future work using ZMAT3 mutants and targeted ChIP or RIP assays will be valuable to delineate the precise mechanism by which ZMAT3 inhibits JUN binding to its target genes.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

ZMAT3 is a p53 target gene that the Lal group and others have shown is important for p53-mediated tumor suppression, and which plays a role in the control of RNA splicing. In this manuscript, Lal and colleagues perform quantitative proteomics of cells with ZMAT3 knockout and show that the enzyme hexokinase HKDC1 is the most upregulated protein. HKDC1 is emerging as an important player in human cancer. Importantly, the authors show both acute (gene silencing) and chronic (CRISPR KO) approaches to silence ZMAT3, and they do this in several cell lines. Notably, they show that ZMAT3 silencing leads to impaired mitochondrial respiration, in a manner that is rescued by silencing of HKDC1. Mechanistically, the authors show that ZMAT3 does not appear to directly regulate the expression of HKDC1; rather, they show that the transcription factor c-JUN was strongly enriched in ZMAT3 pull-downs in IP-mass spec experiments, and they perform IP-western to demonstrate an interaction between c-JUN and ZMAT3. Importantly, the authors demonstrate, using ChIP-qPCR, that JUN is present at the HKDC1 gene (intron 1) in ZMAT3 WT cells, and shows markedly enhanced binding in ZMAT3 KO cells. The data best fit a model whereby p53 transactivates ZMAT3, leading to decreased JUN binding to the HKDC1 promoter (intron 1), and altered mitochondrial respiration. The findings are compelling, and the authors use multiple orthogonal approaches to test most findings. And the authors offer a potentially new activity of ZMAT3 in tumor suppression by p53: the control of mitochondrial respiration. As such, enthusiasm is high for this manuscript.

      Addressing the following question would improve the manuscript. 

      It is not clear how many (other) c-JUN target genes might be impacted by ZMAT3; other important c-JUN targets in cancer include GLS1, WEE1, SREBP1, GLUT1, and CD36, so there could be a global impact on metabolism in ZMAT3 KO cells. Can the authors perform qPCR on these targets in ZMAT3 WT and KO cells and see if these target genes are differentially expressed? 

We thank the reviewer for this thoughtful suggestion. As recommended, we examined the expression of key c-JUN target genes GLS1 (also known as GLS), WEE1, SREBP1, GLUT1, and CD36 in ZMAT3-WT and ZMAT3-KO cells. We first analyzed publicly available JUN ChIP-Seq data from three ENCODE cell lines, which revealed JUN binding peaks near or upstream of exon 1 for GLS1/GLS, SREBP1, and SLC2A1/GLUT1, but not for WEE1 or CD36 (Appendix 1, panels A-E). Based on these results, we performed RT-qPCR for GLS1/GLS, SREBP1, and SLC2A1 in ZMAT3-WT and ZMAT3-KO cells, with or without JUN knockdown. GLS mRNA was significantly reduced upon JUN knockdown in both ZMAT3-WT and ZMAT3-KO cells, but it was not upregulated upon loss of ZMAT3, indicating that GLS is a JUN target gene but is not regulated by ZMAT3. In contrast, SREBF1 and SLC2A1 expression remained unchanged upon ZMAT3 loss or JUN knockdown (Appendix 1, panels F-H). These data suggest that the ZMAT3/JUN axis does not regulate the expression of these genes.

To identify additional c-JUN targets potentially regulated by ZMAT3, we intersected the genes upregulated upon ZMAT3 knockout (from our RNA-seq data) with the ChIP-Atlas dataset for human c-JUN and cross-referenced these with c-JUN peaks from three ENCODE cell lines. From this analysis, we selected the top four candidate genes for further analysis: LAMA2, VSNL1, SAMD3, and IL6R (Figure 5-figure supplement 2A-D). Like HKDC1, these genes were upregulated in ZMAT3-KO cells, and this upregulation was abolished upon siRNA-mediated JUN knockdown in ZMAT3-KO cells (Figure 5-figure supplement 2E). Moreover, by ChIP-qPCR we observed increased JUN binding to the JUN peak for these genes in ZMAT3-KO cells as compared to ZMAT3-WT cells (Figure 5-figure supplement 2F). As described on page 11 of the revised manuscript, these results suggest that the ZMAT3/JUN axis negatively regulates HKDC1 expression and additional c-JUN target genes.

      Minor concerns: 

      (1) Line 150: observed a modest. 

      (2) Line 159: Figure 2G appears to be inaccurately cited. 

      (3) Line 191: assays to measure. 

      We thank the reviewer for pointing these out. These minor concerns have been addressed in the text.  

      Reviewer #2 (Recommendations for the authors): 

      (1) Figure 1E: Can the authors clarify what the numbers on the left side of the chart represent? Do they refer to the scale?

The numbers on the Y-axis represent the -log10(p-value), where higher values correspond to more significant changes. For visualization purposes, the significant changes are shown in red.

      (2) Page 5, line 123: The sentence "As expected, ZMAT3 mRNA levels were decreased in the ZMAT3-KO cells" is redundant, as this information was already mentioned on page 4, line 103.  

      We thank the reviewer for noticing this redundancy. The repeated sentence has been removed in the revised manuscript.  

      (3) Page 5: The authors state: "Transcriptome-wide, upon loss of ZMAT3, 606 genes were significantly up-regulated (adj. p < 0.05 and 1.5-fold change) and 552 were down-regulated, with a median fold change of 1.76 and 0.55 for the up- and down-regulated genes, respectively." Later, on page 6, they write: "Comparison of the RNA-seq data from ZMAT3WT vs. ZMAT3-KO and CTRL siRNA vs. ZMAT3 siRNA-transfected HCT116 cells indicated that 1023 genes were commonly up-regulated, and 1042 were commonly down-regulated upon ZMAT3 loss (Figure S2C and D)." Why is the number of deregulated transcripts higher in the ZMAT3-WT vs. ZMAT3-KO comparison than in the CTRL siRNA vs. ZMAT3 siRNA comparison? Are the authors using less stringent criteria in the second analysis? This point should be clarified. 

      We thank the reviewer for highlighting this point. The reviewer is correct that less stringent criteria were used in the second analysis. On page 5, we applied stringent thresholds (adjusted p-value < 0.05 and 1.5-fold change) to identify high-confidence transcriptome-wide changes upon ZMAT3 loss. In contrast, for the comparison of both RNA-seq datasets (ZMAT3-WT vs. KO and siCTRL vs. siZMAT3), we included genes that were consistently up- or downregulated, without applying a fold change threshold, focusing instead on significantly altered genes (adjusted p < 0.05) in both datasets. This allowed us to capture broader and more reproducible transcriptomic changes that occur upon ZMAT3 depletion, including modest but significant changes upon transient ZMAT3 knockdown with siRNAs. We have now clarified this distinction on page 6 of the revised manuscript.
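      The two filtering schemes can be sketched as follows (the per-gene values below are invented for illustration; they are not our RNA-seq results):

```python
from math import log2

# Illustrative per-gene results: (gene, log2 fold change, adjusted p-value).
results = [
    ("A", 1.2, 0.01),
    ("B", 0.3, 0.04),
    ("C", -1.5, 0.001),
    ("D", 0.45, 0.20),
]

FC_CUTOFF = log2(1.5)  # a 1.5-fold change on the log2 scale (~0.585)

# Stringent criteria (page 5): adjusted p < 0.05 AND at least 1.5-fold change.
stringent = [g for g, lfc, padj in results if padj < 0.05 and abs(lfc) >= FC_CUTOFF]

# Lenient criteria (page 6 dataset comparison): adjusted p < 0.05 only,
# so modest but significant changes are retained.
lenient = [g for g, lfc, padj in results if padj < 0.05]

print(stringent)  # ['A', 'C']
print(lenient)    # ['A', 'B', 'C']
```

Gene B illustrates the difference: it is significantly but modestly changed, so it passes the lenient filter only.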

      (4) Figures 2B and 2E: The authors should provide quantification of HKDC1 protein levels normalized to a loading control. In addition, they should assess HKDC1 protein abundance upon ZMAT3 interference in SWI1222 and HCEC1CT cells, not just in HepG2 and HCT116 cells. 

      We thank the reviewer for this suggestion. We have now quantified all immunoblots presented throughout the manuscript, including those shown in Figures 2B and 2E, and all other figures containing protein analyses. Band intensities were quantified using ImageJ densitometry and normalized to GAPDH as the loading control. In addition, as suggested, we examined HKDC1 protein levels following ZMAT3 knockdown in two additional cell lines, SW1222 and HCEC-1CT. Consistent with our observations in HepG2 and HCT116 cells, ZMAT3 depletion led to increased HKDC1 protein levels in both SW1222 and HCEC-1CT cells. These new data are now included in Figure 2-figure supplement 1F and G. We have updated the Results section, figure legends, and figures to reflect these additions.
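      The normalization procedure can be sketched as follows (band intensities are hypothetical, for illustration only):

```python
# Sketch of densitometry normalization to a loading control (GAPDH).
# Intensities are arbitrary illustrative values, not measured data.
hkdc1_raw = {"WT": 1000.0, "KO": 2600.0}
gapdh_raw = {"WT": 2000.0, "KO": 2000.0}

# Normalize each HKDC1 band to its lane's GAPDH band,
# then express the result relative to the wild-type lane.
normalized = {k: hkdc1_raw[k] / gapdh_raw[k] for k in hkdc1_raw}
relative = {k: normalized[k] / normalized["WT"] for k in normalized}
print(relative)  # {'WT': 1.0, 'KO': 2.6}
```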

      (5) Figure 3A: It is unclear which gene was knocked out in the "KO cells." The authors should clearly specify this.

      We thank the reviewer for pointing this out. We have now updated Figure 3A.

      (6) Figure 3D: The result appears counterintuitive in comparison to Figure 3E. Why does HKDC1 knockdown reduce cell confluency more in ZMAT3 KO cells than in control (ZMAT3 wild-type) cells? The authors should explain this discrepancy more clearly.

      We thank the reviewer for this insightful comment. As shown in Figure 3D and 3E, knockdown of HKDC1 resulted in a greater decrease in proliferation in ZMAT3-KO cells than in ZMAT3-WT cells. This observation was indeed unexpected, given that HKDC1 acts downstream of ZMAT3. One possible explanation is that elevated HKDC1 expression in ZMAT3-KO cells increases their reliance on HKDC1 for sustaining proliferation, and that HKDC1 may also participate in additional pathways in these cells. Consequently, transient knockdown of HKDC1 in ZMAT3-KO cells would have a more pronounced effect on proliferation due to their increased dependency on HKDC1 activity. In contrast, ZMAT3-WT cells, which express lower levels of HKDC1, are less dependent on its function and therefore less sensitive to its depletion. We have now clarified this point on page 8 of the revised manuscript.

      Reviewer #3 (Recommendations for the authors):  

      (1) Why do the authors start their analysis by knocking out the p53 response element in Zmat3? That should be clarified. In addition, since clones were picked after CRISPR KO of Zmat3, were experiments done to confirm that p53 signaling was not disrupted?

      We thank the reviewer for this thoughtful question. We began our study by targeting the p53 response element (p53RE) in the ZMAT3 locus because the basal expression of ZMAT3 is regulated by p53 (Muys et al., Genes & Development, 2021). Deleting the p53RE therefore allowed us to markedly reduce ZMAT3 expression without disrupting the entire ZMAT3 locus. We have clarified this rationale on page 4 of the revised manuscript. To ensure that p53 signaling was not affected by this modification, we verified that canonical p53 targets such as p21 were equivalently induced in both ZMAT3-WT and ZMAT3-KO cells following Nutlin treatment and that p53 induction was unchanged (Figure 4F and Figure 1-figure supplement 1A).

      (2) Throughout the text, many immunoblots are used to validate the knockouts and knockdowns used, but some clarification is needed. In Figure S1A, the Zmat3-WT sample seems to have significantly more p53 than the Zmat3 KO sample. Does Zmat3 KO compromise p53 levels in other experiments? It would be good to understand if Zmat3 affects p53 function by affecting its levels. Also, the p21 blot is overloaded.

      We thank the reviewer for this helpful observation. To determine whether ZMAT3 knockout affects p53 function by affecting its levels, we repeated the experiment three independent times. Western blots from these biological replicates, together with protein quantification, are now included in Appendix-2 and Figure 1-figure supplement 1A. These data show no significant differences in p53 or p21 induction between ZMAT3-WT and ZMAT3-KO cells following Nutlin treatment. In the revised manuscript, we have replaced the blot in Figure 1-figure supplement 1A with a more representative image from one of these replicate experiments.

      In Figure 2E, HKDC1 protein levels are not shown for the SW1222 and HCEC-1CT cell lines, 

      We thank the reviewer for this suggestion. HKDC1 protein levels in SW1222 and HCEC-1CT cells following ZMAT3 knockdown are now included as Figure 2-figure supplement 1F and 1G, together with the corresponding quantification.

      and Zmat3 does not appear as its characteristic two bands on the blot. What does this signify?

      We thank the reviewer for this observation. Endogenous ZMAT3 typically appears as two closely migrating bands on immunoblots. As shown in Figure 4D and Appendix 2A and 2B, these two bands are observed at the expected molecular weight following Nutlin treatment and are specific to ZMAT3, as they are markedly reduced in ZMAT3-KO cells. In contrast, only a single ZMAT3 band is visible in Figure 2E. This likely reflects limited resolution of the two bands in some blots rather than a biological difference.   

      (3) Why does HKDC1 knockdown only have an effect on metabolic phenotypes when ZMAT3 is gone? In Figure 3A, there does not seem to be a decrease in hexokinase activity in the siCTRL + siHKDC1 condition compared to siCTRL alone. Also, in Figure 3A, does phosphorylation activity of HKDC1 necessarily reflect glucose uptake, as stated? Additionally, in Figure 3C, there is no effect on mitochondrial respiration with siHKDC1, even though recent studies have shown a significant effect of HKDC1 on this.

      We thank the reviewer for raising these important questions. As noted, HKDC1 knockdown alone in wild-type cells (siCTRL + siHKDC1) does not significantly reduce hexokinase activity (Figure 3A). This likely reflects the low basal expression of HKDC1 in these cells. Thus, the metabolic phenotype may only become apparent when HKDC1 expression exceeds a functional threshold, as observed in ZMAT3-KO cells where HKDC1 is upregulated.

      Regarding the glucose uptake assay, HKDC1 itself is not phosphorylated; rather, it phosphorylates a non-catabolizable glucose analog, 2-deoxyglucose (2-DG) upon cellular uptake. According to the manufacturer’s protocol, intracellular 2-DG is phosphorylated by hexokinases to 2-deoxyglucose-6-phosphate (2-DG6P), which cannot be further metabolized and therefore accumulates. The accumulated 2-DG6P is quantified using a luminescence-based readout. This assay is widely used as a surrogate for glucose uptake because it reflects both glucose import and phosphorylation — the first step of glycolytic flux. As for the lack of change in mitochondrial respiration (Figure 3C), we acknowledge that some studies have reported mitochondrial roles for HKDC1 under basal conditions; however, such effects may be cell type-specific.

      Regarding the glucose uptake assay, HKDC1 itself is not phosphorylated; rather, it phosphorylates a non-catabolizable glucose analog, 2-deoxyglucose (2-DG), upon its cellular uptake. According to the manufacturer’s protocol, intracellular 2-DG is phosphorylated by hexokinases to 2-deoxyglucose-6-phosphate (2-DG6P), which cannot be further metabolized and therefore accumulates. The accumulated 2-DG6P is quantified using a luminescence-based readout. This assay is widely used as a surrogate for glucose uptake because it reflects both glucose import and phosphorylation, the first step of glycolytic flux. As for the lack of change in mitochondrial respiration (Figure 3C), we acknowledge that some studies have reported mitochondrial roles for HKDC1 under basal conditions; however, such effects may be cell type-specific.

      We thank the reviewer for pointing out the inconsistency in the colors used in Figure 3, which we have now corrected. Our data indicate that ZMAT3 regulates mitochondrial respiration without significantly affecting glycolysis. It is possible that mitochondria in ZMAT3-KO cells oxidize more substrates that are not produced by glycolysis. Additional work will be required to fully determine these mechanisms. We have clarified this on page 8 of the revised manuscript.

      (5) The lack of ZMAT3 binding to RNAs in PAR-CLIP is not proof that it does not do so. A more targeted approach should be used, using individual RIP assays. The authors should also analyze the splicing of HKDC1, which could be affected by ZMAT3.

      As suggested, we performed ZMAT3 RNA immunoprecipitation (RIP) experiments using doxycycline-inducible HCT116-ZMAT3-FLAG cells. However, we did not observe significant enrichment of HKDC1 mRNA in the ZMAT3 IPs (Figure 5-figure supplement 1A), consistent with previously published ZMAT3 RIP-seq data (Bersani et al., Oncotarget, 2016). These findings further support the notion that ZMAT3 does not directly bind HKDC1 mRNA in these cells. Accordingly, we have modified the text on page 10 of the revised manuscript.

      In addition, as suggested by the reviewer, we analyzed changes in splicing of HKDC1 pre-mRNA using rMATS by comparing our previously published RNA-seq data from siCTRL- and siZMAT3-transfected HCT116 cells (Muys et al., Genes Dev, 2021). We focused on splicing events with an FDR < 0.05 and |ΔPSI| > 0.1 (representing at least a 10% change in splicing). The splicing analysis (data not shown) did not reveal any significant alterations in HKDC1 pre-mRNA splicing upon ZMAT3 knockdown. The corresponding text has been updated on page 10 of the revised manuscript.
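      The event-filtering step can be sketched as follows (the event records are invented placeholders, and the field names are our own shorthand, not actual rMATS column names):

```python
# Sketch of filtering splicing events by FDR and delta PSI.
# Events and values are illustrative, not actual rMATS output.
events = [
    {"gene": "GENE1", "fdr": 0.01, "delta_psi": 0.25},
    {"gene": "HKDC1", "fdr": 0.40, "delta_psi": 0.02},
    {"gene": "GENE2", "fdr": 0.03, "delta_psi": 0.05},
]

# Keep events that are significant (FDR < 0.05) AND show at least a
# 10% change in inclusion (|delta PSI| > 0.1).
significant = [e for e in events if e["fdr"] < 0.05 and abs(e["delta_psi"]) > 0.1]
print([e["gene"] for e in significant])  # ['GENE1']; no HKDC1 event passes
```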

      (6) The authors say that they examine JUN binding at the HKDC1 promoter several times, but they focus on intron 1 in Figure 5. They should revise the text accordingly, and they should also show JUN ChIP data traces for the whole HKDC1 locus in Figure 5C.

      We thank the reviewer for this helpful suggestion. As recommended, we have revised the text throughout the manuscript, replacing “HKDC1 promoter” with “HKDC1 intron 1” to accurately reflect our analysis, and Figure 5 now shows the JUN ChIP-seq signal across the entire HKDC1 locus.

      (7) In the ZMAT3 and JUN interaction assays, were these tested in the presence of DNAse or RNAse to determine if nucleic acids mediate the interaction?

      We thank the reviewer for this valuable suggestion. To test whether nucleic acids mediate the ZMAT3-JUN interaction, we performed ZMAT3 immunoprecipitations (IPs) in the presence or absence of DNase and RNase using doxycycline-inducible ZMAT3-FLAG-expressing HCT116 cells. The ZMAT3-JUN interaction was lost upon treatment with either DNase or RNase, indicating that the interaction is mediated by nucleic acids. These data have been added to the revised manuscript (Figure 5-figure supplement 1D; page 11).

    1. DTF printing in selected areas of the material Polyester fabrics and PVC-coated polyesters can be printed with DTF only in designated areas of the material, allowing precise, colorful graphic elements to be applied exactly where they are needed. The print is flexible, abrasion-resistant, and adheres well to both coated and uncoated fabrics.   DTF technology delivers vivid, saturated colors and is ideal for logos, lettering, and elements requiring a high level of detail.

      DTF printing in selected areas Polyester fabrics (including PVC-coated ones) can be printed with DTF technology only in designated areas of the material. The print is flexible, abrasion-resistant, and adheres well to the base fabric.

      DTF technology delivers vivid, saturated colors and is ideal for smaller logos, lettering, and graphic elements.

    2. Durable UV printing We print PVC-coated polyesters on one side using UV technology, which provides exceptional durability and weather resistance. Instant UV curing of the ink guarantees intense colors and sharp details.   The print is highly resistant to abrasion and sunlight, rated 7-8 on the wool scale, ensuring the material keeps its aesthetic appearance over the long term.

      Durable UV printing We print PVC-coated polyester on one side using UV technology, which provides exceptional durability and weather resistance. Instant UV curing of the ink guarantees intense colors and sharp details.

      The print is highly resistant to abrasion and sunlight, rated 7-8 on the Blue Wool Scale, which ensures the material keeps its aesthetic appearance over the long term.

    3. Flawless sublimation printing The polyester fabrics used in our advertising tents are printed on one side by sublimation, which guarantees an aesthetic and exceptionally durable result. The print is resistant to abrasion and UV radiation at level 5-6 on the wool scale, so colors stay intense even with long-term outdoor use.   Sublimation ensures high color saturation, sharp graphic edges, and very good detail quality, regardless of the project's complexity.

      Flawless sublimation printing The polyester fabrics used in advertising tents are printed on one side by sublimation, which guarantees an aesthetic and durable result. The print is resistant to abrasion and UV radiation, rated 5-6 on the Blue Wool Scale, so colors stay intense even with long-term outdoor use.

      Sublimation ensures faithful, saturated color reproduction, sharp edges, and high quality in the printed graphic details.

    4. In-house digital print shop We operate a modern, fully equipped digital print shop where we produce prints using sublimation, UV, and DTF technologies. This gives us full control over the quality, colors, and deadline of every order.   Sublimation provides durable, deeply saturated colors on polyester fabrics; UV printing creates exceptionally durable, sharp, and intense graphics, including on PVC-coated polyester; and DTF technology allows precise graphic elements to be applied to selected areas of the material, on both coated and uncoated fabrics.   Combining these three technologies gives us maximum production flexibility and lets us handle both simple and highly demanding projects, from small details to large advertising surfaces.

      In-house digital printers We operate modern digital printers on which we produce prints using sublimation, UV, and DTF technologies. This gives us full control over the quality, colors, and deadline of every order.

      Sublimation provides durable, saturated colors on polyester fabrics. UV printing creates exceptionally durable and intense graphics even on PVC-coated polyester. DTF technology allows partial printing on any fabric.

      Combining these three technologies gives us maximum production flexibility and lets us handle both simple and highly demanding projects.

    1. Author response:

      The following is the authors’ response to the previous reviews

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      In the manuscript submission by Zhao et al. entitled, "Cardiac neurons expressing a glucagon-like receptor mediate cardiac arrhythmia induced by high-fat diet in Drosophila" the authors assert that cardiac arrhythmias in Drosophila on a high fat diet is due in part to adipokinetic hormone (Akh) signaling activation. High fat diet induces Akh secretion from activated endocrine neurons, which activate AkhR in posterior cardiac neurons. Silencing or deletion of Akh or AkhR blocks arrhythmia in Drosophila on high fat diet. Elimination of one of two AkhR expressing cardiac neurons results in arrhythmia similar to high fat diet.

      Strengths:

      The authors propose a novel mechanism for high fat diet induced arrhythmia utilizing the Akh signaling pathway that signals to cardiac neurons.

      Comments on revisions:

      The authors have addressed my other concerns. The only outstanding issue is in regard to the following comment:

      The authors state that "HFD led to increased heartbeat and an irregular rhythm." In representative examples shown, HFD resulted in pauses, slower heart rate, and increased irregularity in rhythm but not consistently increased heart rate (Figures 1B, 3A, and 4C). Based on the cited work by Ocorr et al (https://doi.org/10.1073/pnas.0609278104), Drosophila heart rate is highly variable with periods of fast and slow rates, which the authors attributed to neuronal and hormonal inputs. Ocorr et al then describe the use of "semi-intact" flies to remove autonomic input to normalize heart rate. Were semi-intact flies used? If not, how was heart rate variability controlled? And how was heart rate "increase" quantified in high fat diet compared to normal fat diet? Lastly, how does one measure "arrhythmia" when there is so much heart rate variability in normal intact flies?

      The authors state that 8 sec time windows were selected at the discretion of the imager for analysis. I don't know how to avoid bias unless the person acquiring the imaging is blinded to the condition and the analysis is also done blind. Can you comment whether data acquisition and analysis was done in a blinded fashion? If not, this should be stated as a limitation of the study.

      Drosophila heart rate is highly variable. During recording, we were biased toward choosing a time window in which the heartbeat was fairly stable. This is a limitation of the study, which we mention in the revised version. We chose to use intact rather than “semi-intact” flies to avoid damaging the cardiac neurons.

      Reviewer #3 (Public review):

      Zhao et al. provide new insights into the mechanism by which a high-fat diet (HFD) induces cardiac arrhythmia employing Drosophila as a model. HFD induces cardiac arrhythmia in both mammals and Drosophila. Both glucagon and its functional equivalent in Drosophila Akh are known to induce arrhythmia. The study demonstrates that Akh mRNA levels are increased by HFD and both Akh and its receptor are necessary for high-fat diet-induced cardiac arrhythmia, elucidating a novel link. Notably, Zhao et al. identify a pair of AKH receptor-expressing neurons located at the posterior of the heart tube. Interestingly, these neurons innervate the heart muscle and form synaptic connections, implying their roles in controlling the heart muscle. The study presented by Zhao et al. is intriguing, and the rigorous characterization of the AKH receptor-expressing neurons would significantly enhance our understanding of the molecular mechanism underlying HFD-induced cardiac arrhythmia.

      Many experiments presented in the manuscript are appropriate for supporting the conclusions while additional controls and precise quantifications should help strengthen the authors' arguments. The key results obtained by loss of Akh (or AkhR) and genetic elimination of the identified AkhR-expressing cardiac neurons do not reconcile, complicating the overall interpretation.

      We thank the reviewer for the positive comments. We believe that additional signaling pathways are active in the AkhR neurons and regulate rhythmic heartbeat, and we are currently searching for the molecules and pathways that act on the AkhR cardiac neurons to regulate the heartbeat. Ablation of the AkhR neurons would therefore have a more profound effect than loss of AkhR alone: loss of AkhR is not equivalent to AkhR neuron ablation.

      The most exciting result is the identification of AkhR-expressing neurons located at the posterior part of the heart tube (ACNs). The authors attempted to determine the function of ACNs by expressing rpr with AkhR-GAL4, which would induce cell death in all AkhRexpressing cells, including ACNs. The experiments presented in Figure 6 are not straightforward to interpret. Moreover, the conclusion contradicts the main hypothesis that elevated Akh is the basis of HFD-induced arrhythmia. The results suggest the importance of AkhR-expressing cells for normal heartbeat. However, elimination of Akh or AkhR restores normal rhythm in HFD-fed animals, suggesting that Akh and AkhR are not important for maintaining normal rhythms. If Akh signaling in ACNs is key for HFD-induced arrhythmia, genetic elimination of ACNs should unalter rhythm and rescue the HFD-induced arrhythmia. An important caveat is that the experiments do not test the specific role of ACNs. ACNs should be just a small part of the cells expressing AkhR. Specific manipulation of ACNs will significantly improve the study. Moreover, the main hypothesis suggests that HFD may alter the activity of ACNs in a manner dependent on Akh and AkhR. Testing how HFD changes calcium, possibly by CaLexA (Figure 2) and/or GCaMP, in wild-type and AkhR mutant could be a way to connect ACNs to HFD-induced arrhythmia. Moreover, optogenetic manipulation of ACNs may allow for specific manipulation of ACNs.

      We thank the reviewer for suggesting these detailed experiments, and we believe that addressing these points will consolidate the results. As AkhR-Gal4 is also expressed in the fat body, we set out to build a more specific driver. We planned to use the split-Gal4 system (Luan et al. 2006. PMID: 17088209): the combination of pan-neuronal Elav-Gal4.DBD and AkhR-p65.AD should yield an AkhR neuron-specific driver. We selected 2580 bp of AkhR upstream DNA and cloned it into the pBPp65ADZpUw plasmid (Addgene plasmid: #26234). After two rounds of injection, however, we were not able to recover a transgenic line.

      We used GCaMP to record the calcium signal in the AkhR neurons. AkhR-Gal4>GCaMP flies show extremely high levels of fluorescence in the cardiac neurons under normal conditions.

      We are screening Gal4 drivers, trying to find a line that is specific to the cardiac neurons and has a lower level of driver activity.

      Interestingly, expressing rpr with AkhR-GAL4 was insufficient to eliminate both ACNs. It is not clear why it didn't eliminate both ACNs. Given the incomplete penetrance, appropriate quantifications should be helpful. Additionally, the impact on other AhkR-expressing cells should be assessed. Adding more copies of UAS-rpr, AkhR-GAL4, or both may eliminate all ACNs and other AkhR-expressing cells. The authors could also try UAS-hid instead of UASrpr.

      We quantified the AkhR neuron ablation and found that about 69% (n=28) of AkhR-Gal4>rpr flies showed a single ACN. It is more challenging to quantify other AkhR-expressing cells, as they are widely distributed. We tried adding more copies of UAS-rpr or AkhR-Gal4, which caused developmental defects (pupal lethality). Thus, as mentioned above, we are trying to find a more specific driver for targeting the cardiac neurons.

      Recommendations for the authors:

      Reviewer #3 (Recommendations for the authors):

      The authors refer 'crop' as the functional equivalent of the human stomach. Considering the difference in their primary functions, this cannot be justified.

      In Drosophila, the crop functions analogously to the stomach in vertebrates. It is a foregut storage and preliminary processing organ that regulates food passage into the midgut. It is more than a simple reservoir: the crop engages in enzymatic mixing, is under neural control, and is actively motile.

      Line 163 and 166, APCs are not neurons.

      Akh-producing cells (APCs) in Drosophila are neuroendocrine cells residing in the corpora cardiaca (CC). While they produce and secrete the hormone AKH (akin to glucagon), they are not brain interneurons per se. APCs share many neuronal features (vesicular release, axon-like projections) and receive neural inputs, effectively functioning as a peripheral endocrine center.

    1. Reviewer #4 (Public review):

      This is an important paper that can do much to set an example for thoughtful and rigorous evaluation of a discipline-wide body of literature. The compiled website of publications in Drosophila immunity is by itself a valuable contribution to the field. There is much to praise in this work, especially including the extensive and careful evaluation of the published literature. However, there are also cautions.

      One notable concern is that the validation experiments are generally done at low sample sizes and low replication rates, and often lack statistical analysis. This is slippery ground for declaring a published study to be untrue. Since the conclusions reported here are nearly all negative, it is essential that the experiments be performed with adequate power to detect the originally described effects. At a minimum, they should be performed with the same sample size and replication structure as the originally reported studies.

      The first section of Results should be an overview of the general accuracy of the literature. Of all claims made in the 400 evaluated papers, what proportion fell into each category of "verified", "unchallenged", "challenged", "mixed", or "partially verified"? This summary overview would provide a valuable assessment of the field as a whole. A detailed dispute of individual highlighted claims could follow the summary overview.

      Section headings are phrased as declarative statements, "Gene X is not involved in process Y", which is more definitive phrasing than we typically use in scientific research. It implies proving a negative, which is difficult and rare, and the evidence provided in the present manuscript generally does not reach that threshold. A more common phrasing would be "We find no evidence that gene X contributes to process Y". A good model for this more qualified phrasing is the "We conclude that while Caspar might affect the Imd pathway in certain tissue-specific contexts, it is unlikely to act as a generic negative regulator of the Imd pathway," concluding the section on the role of Caspar. I am sure the authors feel that the softer, more qualified phrasing would undermine their article's goal of cleansing the literature of inaccuracies, but the hard declarative 'never' statements are difficult to justify unless every validation experiment is done with a high degree of rigor under a variety of experimental conditions. This caveat is acknowledged in the 3rd paragraph of the Discussion, but it is not reflected in the writing of the Results. The caveat should also appear in the Introduction.

      The article is clear that "Claims were assessed as verified, unchallenged, challenged, mixed, or partially verified," but the project is called "reproducibility project" in the 7th line of the abstract, and the website is "ReproSci". The fourth line of the abstract and the introduction call some published research "irreproducible". Most of the present manuscript does not describe reproduction or replication. It describes validation, or independent experimental tests for consistency. Published work is considered validated if subsequent studies using distinct approaches yielded consistent results. For work that the authors consider suspicious, or that has not been subsequently tested, the new experiments provided here do not necessarily recreate the published experiment. Instead, the published result is evaluated with experiments that use different tools or methods, again testing for consistency of results. This is an important form of validation, but it is not reproduction, and it should not be referred to as such. I strongly suggest that variations of the words "reproducible" or "replication" be removed from the manuscript and replaced with "validation". This will be more scientifically accurate and will have the additional benefit of reducing the emotional charge that can be associated with declaring published research to be irreproducible.

      The manuscript includes an explanatory passage in the Results section, "Our project focuses on assessing the strength of the claims themselves (inferential/indirect reproducibility) rather than testing whether the original methods produce repeatable results (results/direct reproducibility). Thus, our conclusions do not directly challenge the initial results leading to a claim, but rather the general applicability of the claim itself." Rather than first appearing in Results, this statement should appear prominently in the abstract and introduction because it is a core element of the premise of the study. This can be combined with the content of the present Disclaimer section into a single paragraph in the Introduction instead of appearing in two redundant passages. I would again encourage the authors to substitute the word validation for reproduction, which would eliminate the need for the invented distinction between indirect versus direct reproduction. It is notable that the authors have chosen to title the relevant Methods section "Experimental Validation" and not "Replication".

      Experimental data "from various laboratories" in the last paragraph of the Introduction and the first paragraph of the Results are ambiguous. Since these new experiments are part of the central core of the manuscript, the specific laboratories contributing them should be named in the two paragraphs. If experiments are being contributed by all authors on the manuscript, it would suffice to say "the authors' laboratories". The attribution to "various labs" appears to be contradicted by the Discussion paragraph 2, which states "the host laboratory has expertise in" antibacterial and antifungal defense, implying a single lab. The claim of expertise by the lead author's laboratory is unnecessary and can be deleted if the Lemaitre lab is the ultimate source of all validation experiments.

      The passage on the controversial role of Duox in the gut is balanced and scholarly, and stands out for its discussion of multiple alternative lines of evidence in the published literature and supplement. This passage may benefit from research by multiple groups following up on the original claims that are not available for other claims, but the tone of the Duox section can be a model for the other sections.

      Comments on other sections and supplements:

      I understand the desire to explain how original results may have been obtained when they are not substantiated by subsequent experiments. However, statements such as "The initial results may have been obtained due to residual impurities in preparations of recombinant GNBP1" and "Non-replicable results on the roles of Spirit, Sphinx and Spheroide in Toll pathway activation may be due to off-target effects common to first-generation RNAi tools" are speculation. No experimental data are presented to support these assertions, so these statements and others like them (currently at the end of most "insights" sections) should not appear in Results. I recognize that the authors are trying to soften their criticism of prior studies by providing explanations for how errors may have occurred innocently. If they wish to do so, the speculative hypotheses should appear in the Discussion.

      The statement in Results that "The initial claim concerning wntD may be explained by a genetic background effect independent of wntD" similarly appears to be a speculation based on the reading of the main text Results. However, the Discussion clarifies that "Here, we obtained the same results as the authors of the claim when using the same mutant lines, but the result does not stand when using an independent mutant of the same gene, indicating the result was likely due to genetic background." That additional explanation in the Discussion greatly increases reader confidence in the Result and should be explained with reference to S5 in the Results. Such complete explanations should be provided everywhere possible without requiring the reader to check the Supplement in each instance.

      In some cases, such as "The results of the initial papers are likely due to the use of ubiquitous overexpression of PGRP-LE, resulting in melanization due to overactivation of the Imd pathway and resulting tissue damage", the claim to explain the original finding would be easy to test. The authors should perform those tests where they can, if they wish to retain the statements in the manuscript. Similarly, the claim "The published data are most consistent with a scenario in which RNAi generated off-target knockdown of a protein related to retinophilin/undertaker, while Undertaker itself is unlikely to have a role in phagocytosis" would be stronger if the authors searched the Drosophila genome for a plausible homolog that might have been impacted by the RNAi construct, and then put forth an argument as to why the off-target gene is more likely to have generated the original phenotype than the nominally targeted gene. There is a brief mention in S19 that junctophilin is the authors' preferred off-target candidate, but no evidence or rationale is presented to support that assertion. If the original RNAi line is still available, it would be easy enough to test whether junctophilin is knocked down as an off-target, and ideally then to use an independent knockdown of junctophilin to recapitulate the original phenotype. Otherwise, the off-target knockdown hypothesis is idle speculation.

      A good model is the passage on extracellular DNA, which states, "experiments performed for ReproSci using the original DNAse IIlo hypomorph show that elevated Diptericin expression in the hypomorph is eliminated by outcrossing of chromosome II, and does not occur in an independent DNAse II null mutant, indicating that this effect is due to genetic background (Supplementary S11)." In this case, the authors have performed a clear experiment that explains the original finding, and inclusion of that explanation is warranted. Similar background replacement experiments in other validations are equally compelling.

      The statement "Analysis of several fly stocks expected to carry the PGRP-SDdS3 mutation used in the initial study revealed the presence of a wild-type copy PGRP-SD, suggesting that either the stock used in this study did not carry the expected mutation, or that the mutation was lost by contamination prior to sharing the stock with other labs" provides a documentable explanation of a potential error in the original two manuscripts, but the subsequent "analysis of several fly stocks" needs citations to published literature or explanation in the supplement. It is unclear from this passage how the wildtype allele in the purportedly mutant stocks could have led to the misattribution of function to PGRP-SD, so that should be explained more clearly in the manuscript.

      The originally claimed anorexia of the Gr28b mutation is explained as having been "likely obtained due to comparison to a wild-type line with unusually high feeding rates". This claim would be stronger if the wildtype line in question were named and data showing a high rate of feeding were presented in the supplement or cited from published literature. Otherwise, this appears to be speculation.

      In the section "The Toll immune pathway is not negatively regulated by wntD", FlyAtlas is cited as evidence that wntD is not expressed in adult flies. However, the FlyAtlas data is not adequately sensitive to make this claim conclusively. If the present authors wish to state that wntD is not expressed in adults, they should do a thorough test themselves and report it in the Supplement.

      Alternatively, the statement "data from FlyAtlas show that wntD is only expressed at the embryonic stage and not at the adult stage at which the experiments were performed by (Gordon et al., 2005a)" could be rephrased to something like "data from FlyAtlas show strong expression of wntD in the embryo but not the adult" and it should be followed by a direct statement that adult expression was also found to be near-undetectable by qPCR in supplement S5. That data is currently "not shown" in the supplement, but it should be shown because this is a central result that is being used to refute the original claim. This manuscript passage should also describe the expression data described in Gordon et al. (2005), for contrast, which was an experimental demonstration of expression in the embryo and a claim "RT-PCR was used to confirm expression of endogenous wntD RNA in adults (data not shown)."

      Inclusion of the section on croquemort is curious because it seems to be focused exclusively on clearance of apoptotic cells in the embryo, not on anything related to immunity. The subsection is titled "Croquemort is not a phagocytic engulfment receptor for apoptotic cells or bacteria", but the text passage contains no mention of phagocytosis of bacteria, and phagocytosis of bacteria is not tested in the S17 supplement. I would suggest deleting this passage entirely if there is not going to be any discussion of the immune-related phenotypes.

      The claim "Toll is not activated by overexpression of GNBP3 or Grass: Experiments performed for ReproSci find that contrary to previous reports, overexpression of GNBP3 (Gottar et al., 2006) or Grass (El Chamy et al., 2008) in the absence of immune challenge does not effectively activate Toll signaling (Supplementaries S6, S7)" is overly strongly stated unless the authors can directly repeat the original published studies with identical experimental conditions. In the absence of that, the claim in the present manuscript needs to be softened to "we find no evidence that..." or something similar. The definitive claim "does not" presumes that the current experiments are more accurate or correct than the published ones, but no explanation is provided as to why that should be the case. In the absence of a clear and compelling argument as to why the current experiment is more accurate, it appears that there is one study (the original) that obtained a certain result and a second study (the present one) that did not. This can be reported as an inconsistency, but the second experiment does not prove that the first was an error. The same comment applies to the refutation of the roles for Edin and IRC. Even though the current experiments are done in the context of a broader validation study, this does not automatically make them more correct. The present work should adhere to the same standards of reporting that we expect in any other piece of science.

      The statement "Furthermore, evidence from multiple papers suggests that this result, and other instances where mutations have been found to specifically eliminate Defensin expression, is likely due to segregating polymorphisms within Defensin that disrupt primer binding in some genetic backgrounds and lead to a false negative result (Supplementary S20)" should include citations to the multiple papers being referenced. This passage would benefit from a brief summary of the logic presented in S20 regarding the various means of quantifying Defensin expression.

      In S22 Results, the statement "For general characterization of the IrcMB11278 mutant, including developmental and motor defects and survival to septic injury, see additional information on the ReproSci website" is not acceptable. All necessary information associated with the paper needs to be included in the Supplement. There cannot be supporting data relegated to an independent website with no guaranteed stability or version control. The same comment applies to "Our results show that eiger flies do not have reduced feeding compared to appropriate controls (See ReproSci website)" in S25.

      Supplement S21 appears to show a difference between the wildtype and hemese mutants in parasitoid encapsulation, which would support the original finding. However, the validation experiment is performed at a small sample size and is not replicated, so there can be no statistical analysis. There is no reported quantification of lamellocytes or total hemocytes. The validation experiment does not support the conclusion that the original study should be refuted. The S21 evaluation of hemese must either be performed rigorously or removed from the Supplement and the main text.

      In S22, the second sentence of the passage "Due to the fact that IrcMB11278 flies always survived at least 24h prior to death after becoming stuck to the substrate by their wings, we do not attribute the increased mortality in Ecc15-fed IrcMB11278 flies primarily to pathogen ingestion, but rather to locomotor defects. The difference in survival between sucrose-fed and Ecc15-fed IrcMB11278 flies may be explained by the increased viscosity of the Ecc15-containing substrate compared to the sucrose-containing substrate" is quite strange. The first sentence is plausible and a reasonable interpretation of the observations. But to then conclude that the difference between the bacterial treatment versus the control is more plausibly due to substrate viscosity than direct action of the bacteria on the fly is surprising. If the authors wish to put forward that interpretation, they need to test substrate viscosity and demonstrate that fly mortality correlates with viscosity. Otherwise, they must conclude that the validation experiment is consistent with the original study.

      In S27, the visualization of eiger expression using a GFP reporter is very non-standard as a quantitative assay. The correct assay is qPCR, as is performed in other validation experiments, and which can easily be done on dissected fat body for a tissue-specific analysis. S27 Figure 1 should be replaced with a proper experiment and quantitative analysis. In S27 Figure 2, the authors should add a panel showing that eiger is successfully knocked down with each driver>construct combination. This is important because the data being reported show no effect of knockdown; it is therefore imperative to show that the knockdown is actually occurring. The same comment applies everywhere there is an RNAi to demonstrate a lack of effect.

      The Drosomycin expression data in S3 Figure 2A look extremely noisy and are presented without error bars or statistical analysis. The S4 claim that sphinx and spheroid are not regulators of the Toll pathway because quantitative expression levels of these genes do not correlate with Toll target expression levels is an extremely weak inference. The RNAi did not work in S4, so no conclusion should be inferred from those experiments. Although the original claims in dispute may be errors in both cases, the validation data used to refute the original claims must be rigorous and of an acceptable scientific standard.

      In S6 Figure 1, it is inappropriate to plot n=2 data points as a histogram with mean and standard errors. If there are fewer than four independent points, all points should be plotted as a dot plot. This comment applies to many qPCR figures throughout the supplement. In S7 Figure 1, "one representative experiment" out of two performed is shown. This strongly suggests that the two replicates are noisy, and a cynical reader might suspect that the authors are trying to hide the variance. This also applies to S5 Fig 3. Particularly in the context of a validation study, it is imperative to present all data clearly and objectively, especially when these are the specific data that are being used to refute the claim.

      Other comments:

      In S26, the authors suggest that much of the observed melanization arises from excessive tissue damage associated with abdominal injection contrasted to the lesser damage associated with thoracic injection. I believe there may be a methodological difference here. The Methods of S27 are not entirely clear, but it appears that the validation experiment was done with a pinprick, whereas the original Mabary and Schneider study was done with injection via a pulled capillary. My lab group (and I personally) have extensive experience with both techniques. In our hands, pinpricks to the abdomen do indeed cause substantial injury, and the physically less pliable thorax is more robust to pinpricks. However, capillary injections to the abdomen do virtually no tissue damage - very probably less than thoracic injections - and result in substantially higher survivals of infection even than thoracic injections. Thus, the present manuscript may infer substantial tissue damage in the original study because they are employing a different technique.

    1. Reviewer #2 (Public review):

      In this manuscript, Zhang et al describe a method for cryo-EM reconstruction of small (sub-50kDa) complexes using 2D template matching. This presents an alternative, complementary path for high-resolution structure determination when there is a prior atomic model for alignment. Importantly, regions of the atomic model can be deleted to avoid bias in reconstructing the structure of these regions, serving as an important mechanism of validation.

      The manuscript focuses its analysis on a recently published dataset of the 40kDa kinase complex deposited to EMPIAR. The original processing workflow produced a medium resolution structure of the kinase (GSFSC ~4.3A, though features of the map indicate ~6-7A resolution); at this resolution, the binding pocket and ligand were not resolved in the original published map. With 2DTM, the authors produce a much higher resolution structure, showing clear density for the ATP binding pocket and the bound ATP molecule. With careful curation of the particle images using statistically derived 2DTM p-values, a high-resolution 2DTM structure was reconstructed from just 8k particles (2.6A non-gold standard FSC; ligand Q-score of 0.6), in contrast to the 74k particles from the original publication. This aligns with recent trends that fewer, higher-quality particles can produce a higher-quality structure. The authors perform a detailed analysis of some of the design choices of the method (e.g., p-value cutoff for particle filtering; how large a region of the template to delete).
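      The p-value-based particle curation described above can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: it assumes each particle's 2DTM match is summarized by an SNR-like score, that the noise-only null distribution of scores is standard normal, and that a Bonferroni-style correction over the number of search locations is applied; the cutoff `alpha` and the search-space size are placeholder values.

      ```python
      import math

      def p_value_from_snr(snr, n_search_locations):
          """Tail probability of observing a 2DTM score >= snr under a
          noise-only null, corrected for the size of the template search.
          Assumes scores are standard normal under the null (illustrative)."""
          tail = 0.5 * math.erfc(snr / math.sqrt(2.0))
          # Bonferroni-style correction for the many positions/orientations searched.
          return min(1.0, tail * n_search_locations)

      def curate_particles(snr_scores, n_search_locations, alpha=0.01):
          """Return indices of particles whose match survives the p-value cutoff."""
          return [i for i, s in enumerate(snr_scores)
                  if p_value_from_snr(s, n_search_locations) < alpha]

      # Example: three particles, a search over ~1e7 locations (placeholder size).
      kept = curate_particles([3.0, 8.0, 9.5], 1e7)
      ```

      The point of the sketch is the direction of the filter: weak matches that are plausible under pure noise are discarded before reconstruction, which is how a few thousand high-confidence particles can outperform a much larger, noisier set.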

      Overall, the workflow is a conceptually elegant alternative to the traditional bottom-up reconstruction pipeline. The authors demonstrate that the p-values from 2DTM correlations provide a principled way to filter/curate which particle images to extract, and the results are impressive. There are only a few minor recommendations that I could make for improvement.

    1. No scaremongering, please. Anything concrete? There isn't any. By the way, at Contrails.org you can track the contrails; they are cirrus clouds and sometimes generate cirrus, whose net effect is warming, because they block less of the incoming solar radiation than of the Earth's IR emission. They can largely be prevented by adjusting flight altitude; this has already been tried from the Netherlands. But why, actually? They don't cause global warming but 'regional warming', namely where the air traffic takes place: Western Europe, North America, and East Asia. And because they cool when the sun shines and warm when it doesn't, they warm more in winter and cool more in summer. There's really nothing to object to in that, is there?! Yes, JP-8 contains additives, but meteorites deposit far more metals on Earth.
    1. Pérez told me stories of scientists who sacrificed their academic careers to build software, because building software counted for so little in their field: The creator of matplotlib, probably the most widely used tool for generating plots in scientific papers, was a postdoc in neuroscience but had to leave academia for industry. The same thing happened to the creator of NumPy, a now-ubiquitous tool for numerical computing. Pérez himself said, “I did get straight-out blunt comments from many, many colleagues, and from senior people and mentors who said: Stop doing this, you’re wasting your career, you’re wasting your talent.” Unabashedly, he said, they’d tell him to “go back to physics and mathematics and writing papers.”

      I have also experienced the undervaluation associated with publishing and maintaining free software in and from community contexts, in contrast with publication in classic academic circuits. And while local universities are starting to think about this in order to make innovation visible, they do so very slowly, as usual, while the incentives remain aligned with conventional metrics.

    2. As science becomes more about computation, the skills required to be a good scientist become increasingly attractive in industry. Universities lose their best people to start-ups, to Google and Microsoft. “I have seen many talented colleagues leave academia in frustration over the last decade,” he wrote, “and I can’t think of a single one who wasn’t happier years later.”

      I have heard that siren call in the past, and I was even offered work with one of the big pharma companies and with an insurer, basically because of my programming knowledge (Pharo Smalltalk, specifically). However, I "relapsed into academia, after being an academic in rehab," as I like to say, and now I have returned full time. I think one alternative between the two paths is to be an academic/consultant, producing commons that one brings into academia from the sector of MSMEs (micro, small, and medium enterprises) doing local innovation, in what one would hope becomes a virtuous cycle.

      That is what I have been attempting with mutabiT in a sustained way for a couple of decades, thanks to the economies of affection (including my mother, my sister, Adriana, and other friends). While this has made long-term research possible from the latitudes of the Global Majority without major financial losses, it has not been a lucrative endeavor either. I believe that, if I could no longer produce commons that link the academic and productive worlds at that sustainable and autonomous scale in local contexts, I would have to choose to leave one of the two, as these academics from other latitudes reportedly have done.

    3. Stephen Wolfram who titled a book about his own work on cellular automata A New Kind of Science. In his blog post about computational essays, he writes, “At the core of computational essays is the idea of expressing computational thoughts using the Wolfram Language.”

      This idea of tying his products to his discourse makes all of Stephen Wolfram's talks look like marketing talks for his products rather than for his ideas, with the consequent need to present his products as the only worthwhile alternatives for exploring ideas that in fact occur in many places and in many forms.

    4. it might be too much to ask publishers to abandon PDFs, an open format, for a proprietary product. “Right now if you make a Mathematica notebook and you try to send that to a journal,” Gray says, “they’re gonna complain: Well, we don’t have Mathematica, this is an expensive product—give us something that’s more of a standard.”

      Today they could send a free/libre computational notebook, even with a container that reproduces the entire environment and the data that make the article possible. I experimented with something like this in 2016 during my doctoral internship, for my prototype titled "Panama Papers: a case for reproducible research, data activism and frictionless data"; I even created a web version and a PDF version, with their corresponding code repository. Since it was an original approach at a time when I did not yet know about the resonant efforts in the Global North, I used a lighter environment with Grafoscopio and the Pharo image instead of containers.

      Today, places like NextJournal or Marimo are thinking about other ways of publishing for the web using interactive computational notebooks, continuing traditions of the Global North while ignoring what we have done from the Global Majority, as usual. Still, it is good to see those resonant perspectives and even the advances we have here in single-source, multi-format publishing (Perro Tuerto, of the MIAU, also talked about this).

    5. The Mathematica notebook is the more coherently designed, more polished product—in large part because every decision that went into building it emanated from the mind of a single, opinionated genius. “I see these Jupyter guys,” Wolfram said to me, “they are about on a par with what we had in the early 1990s.” They’ve taken shortcuts, he said. “We actually want to try and do it right.”

      I have not used Mathematica since the mid/late nineties, and even then it was a great system, highly integrated and coherent. However, as I committed to free software, I soon began looking for alternatives. I started with TeXmacs, most of whose documentation I translated into Spanish as one of my first contributions to a free software project (I believe that translation is still the one in use; back then we used SVN to coordinate changes and even sent compressed files around, since version control was not very popular). Another example was the pretty, minimalist Yacas, with which I did many of my undergraduate assignments, and with which I later gave some workshops and graded exams when I became a lecturer in the Mathematics department.

      Unlike monolithic systems such as Mathematica, TeXmacs already connected back then with a wide variety of Computer Algebra Systems (CAS), exposing us to a diversity of CAS approaches and paradigms, with their particular syntaxes and idiosyncrasies, in a richness that Mathematica will never have.

      TeXmacs also exposed me to powerful ideas, such as being able to change the software easily through small scripts (in Scheme), which made it the first free software I ever modified, and the powerful S-expressions that allowed defining a document and its interaction with external CAS, even though TeXmacs offered its own more readable language and allowed moving from Scheme to it and back.

      In general, that is the difference between proprietary and free systems: a monoculture versus a polyculture, with the conveniences of the former's unifying approach against the diversity of the latter. If we look at what has happened with Python and open computational notebooks such as Marimo and Jupyter, they have won in the popular consciousness relative to Mathematica and have progressively incorporated functionality Mathematica already had, while other functionality is still present only in the proprietary systems and not in the free ones, and vice versa. I would not say that free computational notebooks are where Mathematica was in the 90s, but rather that they have followed different historical routes, each with its own values and riches.

    1. If you wake up in the middle of the night, don’t get up (unless you really have to pee). Instead, lie on your back and do 10 rounds of 4-7-8 breathing (inhaling for four seconds, holding it for seven and exhaling for eight). Then count backward from 300 by threes. The breaths slow your heart rate, while the math keeps your mind from racing.

      I'm going to try that out. I have occasionally done a word game, but that was mentally too intensive.

    1. Association Accreditation (Agrément): A Strategic and Operational Guide

      Executive Summary

      Accreditation (agrément) of an association is an official validation by the State, distinct from mere declaration at the prefecture. Although it is not systematically mandatory, it acts as a label of seriousness, transparency, and internal democracy.

      For certain structures, notably in the sport, youth, or environment sectors, accreditation is a sine qua non condition for carrying out certain activities or accessing public funding.

      This document analyzes the legal distinctions, strategic advantages, and practical procedures for obtaining this "official stamp".

      --------------------------------------------------------------------------------

      I. Definitions and Fundamental Distinctions

      It is crucial not to confuse accreditation with other statuses or stages of an association's life.

      The following table clarifies these distinctions:

      | Status / Stage | Definition and Scope |
      | --- | --- |
      | Declaration at the Prefecture | The basic step that officially brings the association into existence (receipt of the registration acknowledgment). |
      | Accreditation (Agrément) | Validation by the State or a local authority attesting to the association's seriousness, transparent management, and democratic functioning. |
      | General Interest Status | A status allowing the association to issue tax receipts, but not automatically conferring accreditation. |
      | Recognition of Public Utility | A higher level reserved for large associations, validated by decree of the Conseil d'État through a very demanding procedure. |

      --------------------------------------------------------------------------------

      II. The Usefulness of Accreditation: Why Apply for It?

      Accreditation is not a mere honorific distinction; it unlocks major operational and financial levers for a structure's development.

      1. Access to Funding and Partnerships

      Public subsidies: Accreditation is often a mandatory condition for applying for certain State financial aid.

      Agreements: It makes it possible to sign official agreements with the State or local authorities.

      2. Credibility and a Strong Signal

      • **Mark of seriousness:** It reassures partners, volunteers, and funders.

      • **Transparency:** It attests that the association meets high standards of management and governance.

      --------------------------------------------------------------------------------

      III. When Accreditation Is Mandatory, by Sector of Activity

      Not all associations need accreditation to exist or operate. However, it becomes unavoidable in the following cases:

      Sport sector: Required to take part in official competitions (via accreditation by the Ministry of Sports or affiliation with an accredited federation).

      Youth and Popular Education: Indispensable for certain activities and specific subsidies.

      Environmental protection: Required to access certain types of funding or to carry out specific actions in this field.

      --------------------------------------------------------------------------------

      IV. Typology of the Main Accreditations

      The criteria vary according to the sector of activity and the supervising authority:

      Youth and Popular Education accreditation ("Jeunesse & Sport"): Covers educational, cultural, or civic activities aimed at young people.

      It requires qualified supervision and respect for the values of popular education.

      National Education accreditation: Intended for associations working in schools (primary, middle, and high schools). It validates the project's coherence with the school's missions.

      Specific accreditations: Include the Sport accreditation, the Environment accreditation, and the ESUS accreditation (Entreprise Solidaire d'Utilité Sociale: socially useful solidarity enterprise).

      --------------------------------------------------------------------------------

      V. Application Procedure and Validation Criteria

      Obtaining accreditation is a rigorous administrative process that requires careful preparation.

      1. Substantive Conditions

      To be eligible, the association must demonstrate:

      • Genuinely democratic functioning.

      • Transparent financial management (clear accounts).

      • A social purpose serving the general interest.

      • Up-to-date statutes.

      2. Assembling the Application File

      The file must generally be submitted to the prefecture, a ministry, or a decentralized State service. It includes:

      • The administrative form (Cerfa type).

      • The association's statutes.

      • The annual accounts.

      • The minutes (PV) of the most recent general assembly.

      3. Timelines and Vigilance

      It is strongly inadvisable to wait until the eve of a subsidy application to apply for accreditation.

      Administrative processing times can reach several months.

      The quality of the presentation and the completeness of the supporting documents are decisive for the success of the application.

      --------------------------------------------------------------------------------

      Conclusion: A Strategic Decision

      While accreditation is a legal requirement for the sport and youth sectors, it remains a strategic choice for others.

      It transforms a declared association into a partner recognized by the public authorities, thereby facilitating its long-term development by strengthening its legitimacy and funding capacity.

    1. This can become even more challenging when osseointegrated, implant-supported, fixed prostheses are present in both jaws

      ① This can become even more challenging when osseointegrated, implant-supported fixed prostheses are present in both the upper and lower jaws.

    2. Implants and the rigidly attached implant restorations do not move. ➢ Thus any occlusal disharmony will have repercussions at either the restoration-to-implant connection, the bone-to-implant interface, or both.

      Implants and the rigidly attached implant restorations do not move. ① Implants and rigidly attached implant restorations do not move.

      ➢ Thus any occlusal disharmony will have repercussions at either the restoration-to-implant connection, the bone-to-implant interface, or both. ② Therefore, any occlusal disharmony will cause problems either at the restoration-to-implant connection, at the bone-to-implant interface, or at both.

    3. (one‐stage procedure

      1️⃣ One-stage procedure

      In implant surgery, this is the method in which the implant and the abutment or healing cap are placed at the same time.

      That is, the top of the implant is not left buried; after the soft tissue is closed, the abutment remains visible.

      Advantage: Since both the implant and the abutment are placed in a single session, the patient's need for a second surgery is reduced.

      2️⃣ Two-stage procedure

      The implant is placed in the jawbone and completely covered with soft tissue.

      At the end of the healing period, the implant is uncovered in a second surgery and the abutment is placed.

      Advantage: The implant remains fully protected within the bone throughout healing; this approach is preferred especially when bone and soft tissue are insufficient.


    1. Respecting these dimensions not only prevents damage to the adjacent root structure but also aids in the preservation of interproximal peri-implant bone and soft tissue volume.

      ① Respecting these dimensions not only prevents damage to the adjacent tooth's root structure, but also helps preserve interproximal peri-implant bone and soft tissue volume.

    Annotators

    1. Crowns and other prostheses are cemented or screwed to the abutment.

      ① Crowns and other prostheses are either cemented (bonded) or screwed onto the abutment.

    Annotators

    1. Unfortunately, this means that the desired axial loading of the implants is impossible in this region, which is a less favorable biomechanical condition compared to other craniofacial implant sites.

      Unfortunately, this means that the desired axial loading of the implants is not possible in this region, which is a less favorable biomechanical situation compared with other craniofacial implant sites.

    2. An implant should be placed at 9 o’clock and 11 o’clock positions for the right ear and at 1 o’clock and 3 o’clock positions for the left ear

      For the right ear, an implant should be placed at the 9 and 11 o’clock positions; for the left ear, at the 1 and 3 o’clock positions.

    3. a

      a-bone regions:

      "In these regions, which allow the use of zygomatic implants in addition to dental implants, the bone volume is 6 mm or more. The anterior part of the upper jaw (maxilla), the zygomatic arch, and the zygoma (cheekbone) are examples. These bone regions of the facial skeleton include the anterior maxilla, the zygoma, and/or the zygomatic arch*."

    Annotators

    1. begrippen de terminologie, het vocabulaire die voor MIM 2.0 wordt gehanteerd. [Dutch: "the concepts, the terminology, the vocabulary used for MIM 2.0."]

      Textual: it should be either 'begrippen de terminologie, het vocabulaire, die voor MIM 2.0 wordt gehanteerd.' or 'begrippen de terminologie, het vocabulaire dat voor MIM 2.0 wordt gehanteerd.' Note the ',' after 'vocabulaire'.

      Substantive: a vocabulary is a word stock, a collection of words. We do not follow that exactly: the vocabulary is a collection of terms. The concepts do not form the vocabulary; the (preferred and alternative) terms by which those concepts are designated do.

    1. Loss of consciousness • Loss of sensation • Amnesia • Analgesia • Immobility • Suppression of reflexes in response to surgical stimuli

      Loss of consciousness 👉 The patient is unaware of the surroundings and is not awake.

      Loss of sensation 👉 Sensations such as touch and pressure are not perceived.

      Amnesia 👉 The patient does not remember what happened during or after the procedure.

      Analgesia 👉 Elimination of the sensation of pain; surgical pain is not perceived.

      Immobility 👉 The patient makes no voluntary or involuntary movements.

      Suppression of reflexes in response to surgical stimuli 👉 Reflex responses to surgical stimuli are suppressed.

    Annotators

    1. Phase 0: Inspiratory phase. Phase I: Dead space and minimal or absent CO2. Phase II: Alveolar and dead space mixture. Phase III: Alveolar plateau and end-expiratory CO2 peak (PETCO2)

      ① Phase 0 – Inspiratory phase → the breathing-in phase → fresh air enters the lungs → CO₂ is absent or near zero

      ② Phase I – Dead space phase → air from the anatomical dead space → comes from areas with no gas exchange → CO₂ is minimal or absent

      ③ Phase II – Mixture of alveolar and dead-space gas → CO₂-rich air from the alveoli mixes with dead-space air → CO₂ rises rapidly

      ④ Phase III – Alveolar plateau → purely alveolar gas is exhaled → CO₂ forms a plateau → the value measured at the end of this phase is PETCO₂ (end-expiratory CO₂) → it provides information on ventilation, perfusion, and metabolism

    2. Distal ischemia • pseudoaneurysm • AV fistula • hemorrhage • artery embolism • infection • peripheral neuropathy • equipment misuse.

      ① Distal ischemia: Lack of oxygen caused by reduced or interrupted blood flow distal to the vessel.

      ② Pseudoaneurysm: The vessel wall tears and blood forms a false sac in the surrounding tissue.

      ③ AV fistula (arteriovenous fistula): An abnormal connection between an artery and a vein.

      ④ Hemorrhage: Uncontrolled bleeding.

      ⑤ Artery embolism: A clot or other foreign material blocks the vessel and obstructs blood flow.

      ⑥ Infection: Microbial infection at the intervention site.

      ⑦ Peripheral neuropathy: Numbness, tingling, or loss of strength resulting from damage to peripheral nerves.

      ⑧ Equipment misuse: Incorrect or faulty use of medical equipment.

    Annotators

    1. Summary Report: Forum for World Education (FWE) Seminar on Early Childhood Education

      Executive Summary

      This document synthesizes the presentations from the Forum for World Education (FWE) seminar devoted to early childhood education.

      The proceedings emphasize that learning begins well before formal schooling, building on the development of attention, curiosity, and empathy.

      The seminar highlights a critical shift in educational thinking: the need to educate parents as much as children, since the family environment and secure attachment form the foundation of all future success.

      Key points include the neurobiological importance of the first three years (1 million new neural connections per second), the predictive role of curiosity and self-determination in long-term academic success, and the alarming disparities between advantaged and disadvantaged children.

      Finally, serious warnings are raised about the passive use of technology and artificial intelligence by very young children, which threatens their cognitive development.

      --------------------------------------------------------------------------------

      I. Neurobiological and Psychological Foundations

      The Learner's ABC

      Early education rests on what the expert John Altman calls "the learner's ABC":

      Attention and Attachment (Bonding)

      Curiosity

      Discovery

      Empathy

      Early Brain Plasticity

      During the first three years of life, more than one million new neural connections (synapses) form every second.

      This pace will never occur again at any point in life. These connections shape the distinctive contours of each child's consciousness.

      The Crucial Importance of Attachment

      Attachment, the emotional bond between parent and child, is the foundation of future flourishing:

      Neurological benefits: Secure attachment is linked to greater gray-matter volume in the brain regions responsible for social perception and emotional processing.

      Stress regulation: Securely attached children show lower cortisol levels and a better-regulated amygdala, avoiding the "toxic stress" that hinders learning.

      Executive functions: These children outperform their peers on tasks involving planning, cognitive flexibility, and memory.

      --------------------------------------------------------------------------------

      II. Dynamics of Learning and Curiosity

      Exploration vs. Control

      The parent's role is not to design the child's personality or to control their destination, but to provide "sustenance for the journey." Unconditional love creates a secure base from which the child can venture into the unknown.

      The Two Types of Curiosity

      1. Discovery curiosity: Fueled by novelty, it is the main driver during early childhood.

      2. Epistemic (mastery) curiosity: Emerges around age 6 or 7. It is the desire to understand in depth, requiring sustained cognitive effort and perseverance in the face of difficulty.

      The Virtuous Cycle of Mastery

      Practice leads to competence, which generates confidence and a sense of self-efficacy, which in turn motivates further practice. This process also fosters moral behavior by strengthening the sense of belonging to a group.

      --------------------------------------------------------------------------------

      III. Parent Perspectives and Educational Trade-offs

      During the parent panel, several themes related to educational choices emerged:

      | Theme | Insights and Trade-offs |
      | --- | --- |
      | Core values | Priority on forming a human being rather than creating "human calculators." Importance of resilience and tolerance for failure. |
      | Multilingualism | Some parents choose to prioritize the dominant language (e.g., English) to build the child's social confidence before reintroducing heritage languages. |
      | Soft skills | Emphasis on public speaking, critical thinking, and social skills as levers of long-term success. |
      | Socialization | Preference sometimes given to social and emotional development over academic acceleration (e.g., declining to skip a grade in order to preserve friendships). |

      --------------------------------------------------------------------------------

      IV. Global Issues, Equity, and Public Policy

      The OECD Analysis (Andreas Schleicher)

      The achievement gap: By age 5, children from disadvantaged backgrounds are already 20 months behind in pro-social behavior and one year behind in emergent literacy.

      The investment paradox: Public spending is often high at birth, drops sharply around age one, and resumes only at age 3 or 4. This deficit in early investment is damaging.

      Growth mindset: The belief that effort leads to success is one of the strongest predictors of success on the PISA tests at age 15.

      The Social Divide and Language

      Disadvantaged children hear roughly 30 million fewer words than their advantaged peers before the age of three.

      If the family environment does not provide the necessary stimulation, early childhood care settings become the only safety net guaranteeing equal opportunity.

      --------------------------------------------------------------------------------

      V. Technological Risks and Artificial Intelligence

      A major concern is the use of technology as a "babysitter":

      Impact on development: Exposure to AI before age three can interfere with deep cognitive development and the capacity for critical thinking.

      Passive use: Using tablets to keep children occupied prevents them from learning to manage boredom and to interact socially.

      Recommendation: Never let a child under three use an AI-enabled toy on their own. The interaction must be mediated by a parent.

      --------------------------------------------------------------------------------

      VI. Key Quotations

      "Education is not only about educating students; we should focus on educating parents." — Dr. Chang Davis

      "Having children makes life far more meaningful, even if it decreases happiness." — John Altman (quoting Ray Baumeister)

      "The purpose of love is not to change the people we love, but to give them what they need to flourish." — John Altman (quoting Alison Gopnik)

      "The students who did best were the 'connected' students [...] Connectedness is the feeling of being part of something larger than oneself." — John Altman (quoting Ned Hallowell)

      "If you see children sitting at desks all doing the same thing at the same time, run, because that is not good for children." — Dr. Suzanne Sulfani

    1. não definitivamente julgado ("not definitively adjudicated")
      • For the more beneficial law to apply, one must verify that there has been no definitive judgment on the matter.

      • Moreover, it is important to note that retroactivity is intended to benefit the defendant in the case of an infraction or penalty, but does not extend to the tax base or the tax rate.

      • A tax law favorable to the taxpayer will have retroactive effect only in the case of an act not definitively adjudicated.

      • That is, as long as the administrative proceeding continues without a definitive judgment of the infraction, the more favorable rule may be applied to past unlawful conduct.

    1. autosomal dominant inheritance • It is an acute hypermetabolic state that occurs in muscle tissue, rare • Recent studies mention mutations in the skeletal muscle Ca channel and Ryanodine receptor genes on chromosome 19 in humans. • May develop with the onset of anesthesia or in the postoperative period

      ① Autosomal dominant inheritance

      ② It is a rare acute hypermetabolic state that arises in muscle tissue

      ③ Recent studies report mutations in the skeletal muscle Ca channel and Ryanodine receptor genes on human chromosome 19

      ④ It may develop at the onset of anesthesia or in the postoperative period

    Annotators

    1. *Offer valid in stores and online January 16, 2026 to January 22, 2026 in US/CA. Offer applies to select styles as indicated. Online price reflects discount. **Offer valid online only January 16, 2026 to January 19, 2026 in US/CA. Offer applies to select styles as indicated. Online price reflects discount. See All Offer Details

      Hollister's navigation is inaccessible in that there appears to be no option to make the text larger; if such an option exists, it is difficult to find. The annotated text contains important information about the dates and terms of Hollister's sale, so there should be an option to enlarge it. This is poor web accessibility practice.

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      Response to Reviewer 1:

      The authors introduce G2PT, a hierarchical graph transformer model that integrates genetic variants (SNPs), gene annotations, and multigenic systems (Gene Ontology) to predict and interpret complex traits.

      We thank the reviewer for this accurate summary of our approach and contributions.

      Major Comments:

      Comment 1-1. Insufficient Specification of Model Architecture: The description of the "hierarchical graph transformer" lacks technical depth. Key implementation details are missing: how node embeddings are initialized for SNPs, genes, and systems; how graph connectivity is defined at each level (e.g., adjacency matrices used in Equations 5-9, the sparsity); justification for the choice of embedding dimension and number of attention heads, including any sensitivity analysis; and the architecture of the feed-forward neural networks (e.g., number of layers, activation functions, and hidden dimensions).

      Reply 1-1. As requested, we have expanded the technical description of the model architecture, including the hierarchical graph transformer (HiGT), in the Materials and Methods section. Details regarding node initialization and hierarchical connectivity are now included in the new paragraph "Model Initialization and Graph Construction." Specifically, all node embeddings corresponding to SNPs, genes, and ontology-defined systems are initialized using uniform Xavier initialization (Glorot and Bengio, 2010).

      We have also clarified our hyperparameter optimization strategy. Learning rate, weight decay, hidden (embedding) dimension, and the number of attention heads were selected via grid search, as summarized in new Supplementary Fig. 8, reproduced below. Based on both performance and computational efficiency, we adopted four attention heads, consistent with the configuration commonly used in academic transformer models (Vaswani et al., 2017); the original Transformer used eight.

      Regarding the feed-forward neural network, we follow the standard Transformer architecture consisting of two position-wise layers with hidden dimension four times larger than the node embedding size and a GeLU nonlinear activation function (Hendrycks and Gimpel, 2016). This configuration is widely established in the literature and functions as an intermediate processing step following attention; therefore, it is not a focus of hyperparameter tuning. All corresponding updates have been incorporated into the revised Methods section for clarity and completeness.
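      The two standard components named in this reply, Xavier-uniform initialization and the position-wise feed-forward block with a 4× hidden dimension and GeLU activation, can be sketched in plain Python. This is an illustrative sketch only, not the authors' implementation; the function names are ours.

```python
import math
import random

def xavier_uniform_bound(fan_in, fan_out):
    # Xavier/Glorot uniform: weights ~ U(-b, b) with b = sqrt(6 / (fan_in + fan_out))
    return math.sqrt(6.0 / (fan_in + fan_out))

def gelu(x):
    # tanh approximation of GeLU (Hendrycks and Gimpel, 2016)
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def init_ffn(d_model, rng):
    # Position-wise FFN: d_model -> 4*d_model -> d_model, as in the standard Transformer
    d_hidden = 4 * d_model
    b1 = xavier_uniform_bound(d_model, d_hidden)
    b2 = xavier_uniform_bound(d_hidden, d_model)
    w1 = [[rng.uniform(-b1, b1) for _ in range(d_hidden)] for _ in range(d_model)]
    w2 = [[rng.uniform(-b2, b2) for _ in range(d_model)] for _ in range(d_hidden)]
    return w1, w2

def ffn(x, w1, w2):
    # x: one node embedding (length d_model); two linear maps with GeLU in between
    h = [gelu(sum(xi * w1[i][j] for i, xi in enumerate(x))) for j in range(len(w1[0]))]
    return [sum(hj * w2[j][k] for j, hj in enumerate(h)) for k in range(len(w2[0]))]
```

      In a real model these weights would be torch tensors; the sketch only makes the initialization bound and the 4× expansion explicit.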

      Comment 1-2. No Simulation Studies to Validate Epistasis Detection: The ground truth epistasis interaction should use the ones that have been manually validated by literature. The central claim of discovering epistatic interactions relies heavily on the model's attention mechanism and downstream statistical filtering. However, no simulation studies are presented to validate that G2PT can reliably detect epistasis when ground-truth interactions are known. Demonstrating robust detection of non-additive interactions under varying genetic architectures and noise levels in simulated genotype-phenotype datasets is essential to substantiate the method's core capability.

      Reply 1-2. We agree that a simulation of epistasis detection using the G2PT model is a worthy addition to the manuscript. Accordingly, we have now incorporated a new section in the Results titled "Validation of Epistasis through Simulation Studies", which includes two new figures reproduced below (Supplementary Fig. 6 and Fig. 5). We have also added a new Methods section to describe this simulation study under the heading "Epistasis Simulation". These simulation studies show that G2PT recovers epistatic gene pairs with high fidelity when these pairs are coherent with the systems ontology (cf. 'ontology coherence' in Supplementary Fig. 6, which reflects the probability that both SNPs are assigned to the same leaf system). Furthermore, G2PT outcompetes previous tools, such as PLINK-epistasis, which do not use knowledge of the systems hierarchy in the same way (Supplementary Fig. 6b-d). Using simulation parameters consistent with current genome-wide association studies (n = 400,000) and current understanding of heritability (h² = 0.3 to 0.5) (Bloom et al. 2015; Speed and Evans 2023), we find that approximately 10% of all epistatic SNP pairs can be recovered at a precision of 50% (Fig. 5). We have provided the source code for this simulation study in our GitHub repository (https://github.com/idekerlab/G2PT/blob/master/Epistasis_simulation.ipynb).
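      As a minimal illustration of the generative model such a simulation typically uses, additive SNP effects plus a non-additive g1×g2 interaction term, here is a plain-Python sketch. The parameter names and values are illustrative only and are not those of the actual simulation study linked above.

```python
import random

def simulate_epistasis(n=2000, beta1=0.2, beta2=0.2, gamma=0.5, noise_sd=1.0, seed=42):
    # Genotypes coded 0/1/2 (minor-allele counts, MAF = 0.3); the phenotype has
    # additive terms plus a non-additive epistatic term gamma * g1 * g2.
    rng = random.Random(seed)
    maf = 0.3
    def draw():
        # sum of two Bernoulli(maf) alleles -> 0, 1, or 2
        return (rng.random() < maf) + (rng.random() < maf)
    g1 = [draw() for _ in range(n)]
    g2 = [draw() for _ in range(n)]
    y = [beta1 * a + beta2 * b + gamma * a * b + rng.gauss(0.0, noise_sd)
         for a, b in zip(g1, g2)]
    return g1, g2, y
```

      A detector is then scored on how often it recovers the (g1, g2) pair from (genotypes, y) across replicates with varying gamma and noise levels.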

      Comment 1-3. Lack of Justification for Model Complexity and Missing Ablation Insights: While Supplementary Figure 2 presents ablation studies, the manuscript needs to justify the high computational cost (168 GPU hours using 4×A30 GPUs) of the full model. It remains unclear how much performance gain is specifically due to reverse propagation (Equations 8-9), which is claimed to capture biological context. The benefit of using a full Gene Ontology hierarchy versus a flat system list is not quantified. There is also no comparison between bidirectional versus unidirectional propagation. Overall, the added complexity is not empirically shown to be necessary.

      Reply 1-3. We thank the reviewer for prompting a clearer justification of complexity and ablations. We have now revised the Results to (i) quantify the specific value of the ontology and reverse propagation, and (ii) explain why a flat SNP→system model is computationally and biologically sub-optimal. We have added new ablation results to compare bidirectional (forward+reverse) versus forward-only propagation. Reverse propagation has little effect when epistatic pairs are within one system (ontology coherence ρ = 1.0) but substantially improves retrieval when interactions span related systems (e.g., ρ ≈ 0.8) (figure reproduced below). A flat design scores a dense genes×systems map, ignoring known sparsity (sparse SNP→gene assignments; sparse ontology edges) and losing multi-scale context; our hierarchical formulation restricts computation to observed edges (SNP→gene→system) and aggregates signals across levels, yielding better efficiency and biological fidelity.

      Comment 1-4. Non-Equivalent Benchmarking Against PRS Methods: Figure 2 compares G2PT to polygenic risk score (PRS) methods such as LDpred2 and Lassosum, but G2PT is run only on SNPs pre-filtered by marginal association (p-values between 10⁻⁵ and 10⁻⁸), while the PRS methods use genome-wide SNPs. This introduces a strong bias in G2PT's favor by effectively removing noise. A fair comparison would require: (a) running LDpred2 and Lassosum on the same pre-filtered SNP sets as G2PT, or (b) running G2PT on genome-wide or LD-pruned SNP sets. The reported superior performance of G2PT may be driven primarily by this input filtering, not the model architecture.

      Reply 1-4. We appreciate the reviewer's concern regarding benchmarking equivalence. In response, we have extended our analyses to include PRS-CS (Ge et al., 2019) and SBayesRC (Zheng et al., 2024), two state-of-the-art Bayesian shrinkage methods comparable to LDpred2 and Lassosum. Although we initially attempted to run LDpred2 and Lassosum under all SNP-filtering conditions, their computational cost at UK Biobank scale proved prohibitive. We therefore focused on PRS-CS and SBayesRC, which offer similar modeling principles with greater computational tractability. These methods have now been run at SNP-filtering conditions matched to our original study. The new results demonstrate that G2PT consistently outperforms PRS-CS and SBayesRC (new Fig. 2, reproduced below), indicating that its performance advantage is not solely attributable to SNP pre-filtering but also to its hierarchical attention-based architecture.

      Comment 1-5: No Details on Hyperparameter Optimization: Although the manuscript mentions grid search for hyperparameter tuning, it provides no information about which parameters were optimized (e.g., learning rate, dropout rate, weight decay, attention dropout, FFNN dimensions), what search space was explored, or what final values were selected. There is also no assessment of how sensitive the model's performance is to these choices. Better transparency would help facilitate reproducibility

      Reply 1-5. We agree with the reviewer and have expanded the manuscript to include full details of hyperparameter optimization. As described in the revised Methods section, we performed a grid search over learning rate {10⁻³, 10⁻⁴, 10⁻⁵}, hidden dimension {64, 128}, and weight decay {0, 10⁻⁵, 10⁻³}. The results, summarized in Supplementary Fig. 8 (reproduced above), show that model performance is most sensitive to the learning rate, while hidden dimension and weight decay exert more moderate effects. Based on these findings, we selected a learning rate of 10⁻⁵, a hidden dimension of 64, and a weight decay of 10⁻³ for all subsequent experiments. Although a hidden dimension of 128 slightly improved performance, we adopted 64 to balance predictive accuracy with computational efficiency.
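      The stated search space amounts to 3 × 2 × 3 = 18 configurations. A minimal sketch of such a grid search follows; the `evaluate` callback stands in for a full training run, which is our simplification, not part of the authors' code.

```python
from itertools import product

def grid_search(evaluate):
    # Grids reported in the reply: learning rate, hidden dimension, weight decay.
    grid = {
        "lr": [1e-3, 1e-4, 1e-5],
        "hidden_dim": [64, 128],
        "weight_decay": [0.0, 1e-5, 1e-3],
    }
    keys = list(grid)
    configs = [dict(zip(keys, values)) for values in product(*(grid[k] for k in keys))]
    best_cfg, best_score = None, float("-inf")
    for cfg in configs:  # 3 * 2 * 3 = 18 configurations
        score = evaluate(cfg)  # e.g., validation performance of a model trained with cfg
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, len(configs)
```

      In practice `evaluate` would train and validate the model under each configuration and return the held-out metric.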

      Comment 1-6. Absence of Control for Key Confounders: In interpreting attention scores as reflecting genetic relevance (e.g., the role of the immunoglobulin system), the model includes only age, sex, and genetic principal components as covariates. Important confounders such as BMI, alcohol use, or medication (e.g., statins) have not been controlled for. Since TG/HDL levels are strongly influenced by environment and lifestyle, it is entirely plausible that some high-attention features reflect environmental tagging, not biological causality.

      Reply 1-6. In the current framework, we included age, sex, and genetic principal components to account for demographic and population-structure effects, focusing on genetic contributions within a controlled baseline. We acknowledge that non-genetic covariates can influence downstream biological states and may indirectly shape attention at the gene or system level. Accurately modeling such effects requires an extended framework where environmental variables directly modulate gene and system embeddings rather than being implicitly absorbed by the attention mechanism. We have clarified these limitations in the Discussion along with plans to incorporate explicit confounder modeling in future extensions of G2PT.

      Comment 1-7. Oversimplified Treatment of SNP-to-Gene Mapping: The SNP-to-gene mapping strategy combines cS2G, eQTL, and nearest-gene annotations, but the limitations of this approach are not adequately addressed. The manuscript does not specify how conflicts between methods are resolved or what fraction of SNPs map ambiguously to multiple genes. Supplementary Figure 2 shows model performance degrades when using only nearest-gene mapping, but there is no systematic analysis of how mapping uncertainties propagate through the hierarchy and affect attention or interpretation.

      Reply 1-7. In the revision (Results), we have clarified how conflicts between cS2G, eQTL, and nearest-gene annotations are resolved, and we have reported the proportion of SNPs that map to multiple genes across these three annotation approaches. We note that the hierarchical attention mechanism enables the model to prioritize among alternative gene mappings in a data-driven manner, and this is a major strength of the approach. As shown in Fig. 3 (Results, reproduced below), SNP-to-gene attention weights reveal dominant linkages, reducing the impact of mapping uncertainty on interpretation. We now explicitly describe this mechanism and acknowledge that further work in probabilistic mapping and fine-mapping approaches is a valuable future direction for improving resolution and interpretability.

      "For SNPs with several potential SNP-to-gene mappings (Methods), we found that G2PT often prioritized one of these genes in particular due to its membership in a high-attention system. For example, the chr11q23.3 locus contains multiple genes including the APOA1/C3/A4/A5 gene cluster (Fig. 3c) which is well-known to govern lipid transport, an important system for G2PT predictions (Fig. 3a). Due to high linkage disequilibrium in the region, all of its associated SNPs had multiple alternative gene mappings available. For example, SNP rs1145189 mapped not only to APOA5 but to the more proximal BUD13, a gene functioning in spliceosomal assembly (a system receiving substantially lower G2PT attention). Here, the relevant information flow learned by G2PT was from rs1145189 to APOA5 to lipid transport and protein-lipid complex remodeling (Fig. 3c; and conversely, deprioritizing BUD13 as an effector gene for TG/HDL). We found that this particular genetic flow was corroborated by exome sequencing, which implicates APOA5 but not BUD13 in regulation of TG/HDL, using data that were not available to G2PT. Similarly, two other SNPs at this locus - rs518547 and rs11216169 - had potential mappings to their closest gene SIK3, where they reside within an intron, but also to regulatory elements for the more distant lipid transport genes APOC3 and APOA4. Here, G2PT preferentially weighted the mappings to APOC3 and APOA4 rather than to SIK3 (Fig. 3c)."

      Comment 1-8. Naive Scoring of System Importance: The method used to quantify the biological relevance of systems (i.e., correlating attention scores with predicted phenotype values) risks circular reasoning. Since the model is trained to optimize prediction, systems that contribute strongly to prediction will naturally show high correlation-even if they are not biologically causal. No comparison is made with established gene set enrichment methods applied to GWAS summary statistics. The approach lacks an independent benchmark to validate that the "important" systems are biologically meaningful.

      Reply 1-8. As requested, we compared G2PT's system-level importance scores with results from MAGMA competitive gene-set analysis, an established enrichment approach. This analysis indeed shows a significant correlation between the systems identified by the two approaches (ρ = 0.26, p < 0.01; Supplementary Table 2), reflecting a shared emphasis on canonical lipid processes. We also observed systems detected by G2PT but not strongly detected by MAGMA's linear enrichment model, for example the lipopolysaccharide-mediated signaling pathway (Kalita et al. 2022).

      Comment 1-9. No External Validation to Assess Generalizability. All evaluations are performed using cross-validation within the UK Biobank. There is no assessment of generalizability to independent cohorts or diverse ancestries. Given population structure, genotyping platform, and phenotype measurement variability, external validation is essential before claiming the method is suitable for broader use in polygenic risk assessment.

      Reply 1-9. Externally validating the G2PT model requires individual-level genotype data with paired TG/HDL measurements, a sample size at the scale of the UK Biobank, and GPU access to these data. We therefore turned to the All of Us program, a large and diverse cohort with individual-level data, T2D status, and HbA1c measurements. We first processed the All of Us genotype and phenotype data as we had processed the UKBB data (Methods), resulting in 41,849 participants with T2D and 80,491 without T2D across various ethnicities. We then transferred the trained T2D G2PT model to the AoU Workbench and evaluated its performance. The model demonstrated robust discriminative capability with an explained variance of 0.025, as shown in the new Fig. 2d (reproduced above).

      Comment 1-10. Computational Burden and Scalability Are Not Addressed: The paper notes that training the model requires 168 GPU hours on 4×A30 GPUs for just ~5,000 SNPs. However, there is no discussion of whether G2PT can scale to larger SNP sets (e.g., genome-wide imputed data) or more complex biological hierarchies (e.g., Reactome pathways). Without addressing scalability, the model's applicability to real-world, large-scale genomic datasets remains unclear.

      Reply 1-10. We have addressed scalability with both engineering optimizations and new scalability experiments. First, we refactored the model to use xFormers memory-efficient attention for the hierarchical graph transformer (Lefaudeux et al., 2022), which also enables full parallelization of training and reduces bottlenecks. Second, we added a scaling study with progressively increasing SNP counts. On 4×A30 GPUs, end-to-end training time for the 5k-SNP setting decreased from 4,000 to 400 minutes (approximately 7 GPU-hours, a 10× speedup). These new results are given in Supplementary Fig. 7, reproduced below.

      Minor Comment:

      Comment 1-11. Attention Weights as Mechanistic Insight: The paper equates high attention scores with biological importance, for example in highlighting the immunoglobulin system. There is no causal validation showing that altering the highlighted SNPs, genes, or systems has an actual effect on TG/HDL. Attention weights in transformer models are known to sometimes reflect spurious correlations, especially in high-dimensional settings. The correlation between attention scores and predictions (Supplementary Fig. 3a,b) does not constitute biological evidence. The interpretability claims can be restated without supporting functional or causal validation.

      Reply 1-11. We thank the reviewer for this thoughtful comment. We agree that attention weights are not causal evidence. In the revision, we (1) reframe attention-based findings as hypothesis-generating rather than mechanistic, and (2) add an explicit limitation noting that correlations between attention scores and predictions do not constitute biological validation.

      Response to Reviewer 2:

      This manuscript describes the introduction of the Genotype-to-Phenotype Transformer (G2PT), described by the authors as "a framework for modeling hierarchical information flow among variants, genes, multigenic systems, and phenotypes." The authors used the ratio TG/HDL as a trait for proof of concept of this tool.

      This is a potentially interesting computational tool of interest to bioinformaticians, computational genomicists, and biologists.

      We thank the reviewer for their overall positive assessment of our study.

      Comment 2-1. The rationale for choosing the TG/HDL ratio for this proof of concept analysis is not well justified beyond it being a marker for insulin resistance. Overall the use of a ratio may be problematic (see below). Analyses of TG and HDL separately as individual quantitative traits would be of interest. And an analysis of a dichotomous clinical trait (T2DM or CAD) would also be of great interest.

      Reply 2-1. We thank the reviewer for this suggestion. In the revised manuscript, we have expanded our analyses beyond the TG/HDL ratio to include TG and HDL as individual quantitative traits (Fig. 2, reproduced below). These additional analyses demonstrate that G2PT captures predictive signals robustly across each lipid component, not solely through their ratio. Furthermore, to address the reviewer's interest in clinical outcomes, we incorporated an analysis of type 2 diabetes (T2D) as a dichotomous trait of direct clinical relevance. Collectively, these results strengthen the rationale for our chosen phenotype and show that the G2PT framework generalizes effectively across quantitative and binary traits, consistently outperforming advanced PRS and machine learning benchmarks.

      Comment 2-2. The approach to mapping SNPs to genes does not incorporate the most advanced approaches. This should be described in more detail.

      Reply 2-2. We agree that the choice of SNP-to-gene mapping materially affects both performance and interpretability; indeed, our epistasis simulations suggest that more accurate mappings can improve recovery and localization. In this proof-of-concept work we use a straightforward, modular mapping sufficient to demonstrate the modeling framework, and we have clarified this in the Methods. The architecture is designed to accept alternative SNP-to-gene maps in a plug-and-play fashion (e.g., eQTL/colocalization-based assignments, promoter-capture Hi-C). A dedicated follow-up study will systematically compare these alternatives and quantify their impact on attribution and downstream discovery.

      Comment 2-3. The example of gene prioritization at the A1/C3/A4/A5 gene locus is not particularly illuminating, as the prioritized genes are already well-known to influence TG and HDL-C levels and the TG/HDL ratio. Can the authors provide an example where G2PT prioritized a gene at a locus that is not already a well-known regulator of TG and HDL metabolism?

      Reply 2-3. We thank the reviewer for this suggestion. We have revised the manuscript to de-emphasize the well-established APOA1 locus and instead highlight the less expected "Positive regulation of immunoglobulin production" system (Figure 3a,b, Discussion). Here, our model prioritizes the gene TNFSF13 on the basis of specific variants that have not previously been associated with TG or HDL (e.g., rs5030405, rs1858406, shown in blue). This finding points to an intriguing, non-canonical link between B-cell regulation and lipid metabolism. While full exploration of this finding is beyond the scope of the present methods paper, the example demonstrates G2PT's ability to identify novel, high-priority candidates in atypical systems.

      Comment 2-4. The identification of epistatic interactions is a potentially interesting application of G2PT. However, suppl table 1 shows a very limited number of such interactions with even fewer genes, and most of these are well established biological interactions (such as LPL/apoA5). The TGFB1 and FKBP1A interaction is interesting and should be discussed. What is needed for increasing the number of potential interactions, greater power?

      Reply 2-4. We are glad the reviewer appreciates the use of the G2PT model to identify epistatic interactions. We have now discussed a potential mechanism of epistasis between TGFB1 and FKBP1A in the protein dephosphorylation system (Discussion). In addition, we have addressed the reviewer's question about statistical power through extensive epistasis simulations (Fig. 5 and Supplementary Fig. 6), which show that G2PT's detection ability scales strongly with sample size: 1,000 samples are insufficient, performance improves at 5,000, and power becomes reliable at 100,000. Realistic simulations (Fig. 5b-d) further demonstrate that, under biologically plausible architectures, G2PT can robustly recover specific interactions even within complex genetic backgrounds.

      Comment 2-5. Furthermore, the use of the TG/HDL ratio for the assessment of epistatic interactions may be problematic. For example, if one SNP affected only TG and the other only HDL-C, it would appear to be an epistatic interaction with regard to the ratio, although the biological epistasis may be limited to non-existent.

      Reply 2-5. We have greatly expanded the set of phenotypes modeled in our study; please see our Reply 2-1 above.

      Response to Reviewer 3:

      This manuscript by Lee et al provides a sensible and powerful approach to polygenic score prediction. The model aggregates information from SNPs to genes to systems, using a transformer based architecture, which appears to increase predictive performance, produce interpretable outputs of genes and systems that underlie risk, and identify candidates for epistasis tests.

      I think the manuscript is clear and well written, and conducted via state-of-the-art approaches. I don't have any concerns regarding the claims that are made.

      We thank the reviewer for their very positive assessment of our study.

      Major comments:

      Comment 3-1. Specifically, lipid based traits are perhaps the most well-powered and the most biologically coherent; they are also very well-studied biologically and thus overrepresented in the gene ontology. It is unclear whether this approach will work as well for a trait like Schizophrenia for which the underlying pathways are not as well captured in existing ontologies. The authors anticipate this in their limitations section, and I am not expecting them to solve every issue with this, but it would be nice to expand the testing a little bit beyond only this one trait.

      Reply 3-1. We appreciate the reviewer's suggestion to expand beyond a single lipid trait. In the revised manuscript, we have included analyses of additional phenotypes, including low-density lipoprotein (LDL) and T2D (Fig. 2). These additions demonstrate the broader applicability of our framework beyond a single trait class.

      Comment 3-2. It also seems like the authors have not compared their method to the truly latest PRS methods, such as PRS-CSx and SBayesR. I would suggest adding some of the methods shown to be the best from this recent paper: https://www.nature.com/articles/s41598-025-02903-1

      Reply 3-2. We agree these are important comparators. Accordingly, we have extended our comparison to include PRS-CS (Ge et al., 2019) and SBayesRC (Zheng et al., 2024), given the latter's strong performance in recent benchmarking studies (see Figure 2 above). We confirmed that G2PT outperforms these advanced PRS methods for all three phenotypes: the TG/HDL ratio, LDL, and T2D.

      Comment 3-3. Another major comment regards whether this method could be applied to traits with just GWAS summary statistics, rather than individual level data. This would not enable identification of specific methods underlying an individual, but it could still learn SNP based weights that could be mapped to genes and systems that could help explain risk when the model is applied to individuals (kind of like a pretraining step?)

      Reply 3-3. We appreciate this suggestion. While SNP weights from GWAS summary statistics could, in principle, serve as informative priors for attention values, incorporating them would require a sophisticated mathematical formulation that is beyond the scope of this study. Our current framework also relies on individual-level genotype and phenotype data to capture multilevel information flow and individual-specific variation.

      Minor comments:

      Comment 3-4. Why the need to constrain to a small number of SNPs? Is it just computational cost? If so, what would happen as power increases and more SNPs exceed the thresholds used?

      Reply 3-4. Yes, the constraint was computational cost, but we have now modified the code for improved computational efficiency. First, we refactored the model to use xFormers memory-efficient attention for the hierarchical graph transformer (Lefaudeux et al., 2022), which also enables full parallelization of training and reduces bottlenecks. Second, we added a scaling study of the impact of varying SNP count. On 4×A30 GPUs, end-to-end training time for the 5k-SNP setting decreased from 65 hours to 7 GPU-hours (a 9× speedup). Based on Fig. 2 (reproduced above), we expect performance to increase further as more SNPs are provided to the model. With the optimized implementation, users can raise SNP thresholds as power increases; the expected behavior is improved accuracy up to a plateau, while hierarchical sparsity keeps training tractable and results well regularized.

      Comment 3-5. What type of sample size/power does this method require to work well? If others were to use it, how many SNPs/samples would be needed to obtain good performance?

      Reply 3-5. To address this comment, we quantified performance as a function of training size by subsampling the cohort and retraining G2PT with identical architecture and SNP set. New Supplementary Fig. 3 (reproduced below) shows monotonic gains with sample size across three representative phenotypes. We found that stable performance is reached by ~100k samples. These trends hold for continuous traits (TG/HDL, LDL) and more modestly for a binary trait (T2D), consistent with lower per-sample information for case-control settings.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #1

      Evidence, reproducibility and clarity

      The authors introduce G2PT, a hierarchical graph transformer model that integrates genetic variants (SNPs), gene annotations, and multigenic systems (Gene Ontology) to predict and interpret complex traits.

      Major Comments:

      1. Insufficient Specification of Model Architecture: The description of the "hierarchical graph transformer" lacks technical depth. Key implementation details are missing: how node embeddings are initialized for SNPs, genes, and systems; how graph connectivity is defined at each level (e.g., adjacency matrices used in Equations 5-9, the sparsity); justification for the choice of embedding dimension and number of attention heads, including any sensitivity analysis; and the architecture of the feed-forward neural networks (e.g., number of layers, activation functions, and hidden dimensions).
      2. No Simulation Studies to Validate Epistasis Detection: The ground truth epistasis interaction should use the ones that have been manually validated by literature. The central claim of discovering epistatic interactions relies heavily on the model's attention mechanism and downstream statistical filtering. However, no simulation studies are presented to validate that G2PT can reliably detect epistasis when ground-truth interactions are known. Demonstrating robust detection of non-additive interactions under varying genetic architectures and noise levels in simulated genotype-phenotype datasets is essential to substantiate the method's core capability.
      3. Lack of Justification for Model Complexity and Missing Ablation Insights: While Supplementary Figure 2 presents ablation studies, the manuscript needs to justify the high computational cost (168 GPU hours using 4×A30 GPUs) of the full model. It remains unclear how much performance gain is specifically due to reverse propagation (Equations 8-9), which is claimed to capture biological context. The benefit of using a full Gene Ontology hierarchy versus a flat system list is not quantified. There is also no comparison between bidirectional versus unidirectional propagation. Overall, the added complexity is not empirically shown to be necessary.
      4. Non-Equivalent Benchmarking Against PRS Methods: Figure 2 compares G2PT to polygenic risk score (PRS) methods such as LDpred2 and Lassosum, but G2PT is run only on SNPs pre-filtered by marginal association (p-values between 10⁻⁵ and 10⁻⁸), while the PRS methods use genome-wide SNPs. This introduces a strong bias in G2PT's favor by effectively removing noise. A fair comparison would require: (a) running LDpred2 and Lassosum on the same pre-filtered SNP sets as G2PT, or (b) running G2PT on genome-wide or LD-pruned SNP sets. The reported superior performance of G2PT may be driven primarily by this input filtering, not the model architecture.
      5. No Details on Hyperparameter Optimization: Although the manuscript mentions grid search for hyperparameter tuning, it provides no information about which parameters were optimized (e.g., learning rate, dropout rate, weight decay, attention dropout, FFNN dimensions), what search space was explored, or what final values were selected. There is also no assessment of how sensitive the model's performance is to these choices. Better transparency would help facilitate reproducibility.
      6. Absence of Control for Key Confounders: In interpreting attention scores as reflecting genetic relevance (e.g., the role of the immunoglobulin system), the model includes only age, sex, and genetic principal components as covariates. Important confounders such as BMI, alcohol use, or medication (e.g., statins) have not been controlled for. Since TG/HDL levels are strongly influenced by environment and lifestyle, it is entirely plausible that some high-attention features reflect environmental tagging, not biological causality.
      7. Oversimplified Treatment of SNP-to-Gene Mapping: The SNP-to-gene mapping strategy combines cS2G, eQTL, and nearest-gene annotations, but the limitations of this approach are not adequately addressed. The manuscript does not specify how conflicts between methods are resolved or what fraction of SNPs map ambiguously to multiple genes. Supplementary Figure 2 shows model performance degrades when using only nearest-gene mapping, but there is no systematic analysis of how mapping uncertainties propagate through the hierarchy and affect attention or interpretation.
      8. Naive Scoring of System Importance: The method used to quantify the biological relevance of systems (i.e., correlating attention scores with predicted phenotype values) risks circular reasoning. Since the model is trained to optimize prediction, systems that contribute strongly to prediction will naturally show high correlation-even if they are not biologically causal. No comparison is made with established gene set enrichment methods applied to GWAS summary statistics. The approach lacks an independent benchmark to validate that the "important" systems are biologically meaningful.
      9. No External Validation to Assess Generalizability: All evaluations are performed using cross-validation within the UK Biobank. There is no assessment of generalizability to independent cohorts or diverse ancestries. Given population structure, genotyping platform, and phenotype measurement variability, external validation is essential before claiming the method is suitable for broader use in polygenic risk assessment.
      10. Computational Burden and Scalability Are Not Addressed: The paper notes that training the model requires 168 GPU hours on 4×A30 GPUs for just ~5,000 SNPs. However, there is no discussion of whether G2PT can scale to larger SNP sets (e.g., genome-wide imputed data) or more complex biological hierarchies (e.g., Reactome pathways). Without addressing scalability, the model's applicability to real-world, large-scale genomic datasets remains unclear.

      Minor:

      1. Attention Weights as Mechanistic Insight: The paper equates high attention scores with biological importance, for example in highlighting the immunoglobulin system. There is no causal validation showing that altering the highlighted SNPs, genes, or systems has an actual effect on TG/HDL. Attention weights in transformer models are known to sometimes reflect spurious correlations, especially in high-dimensional settings. The correlation between attention scores and predictions (Supplementary Fig. 3a,b) does not constitute biological evidence. The interpretability claims can be restated without supporting functional or causal validation.

      Significance

      Novelty

      This work presents novelty by introducing the first transformer-based model that integrates the GO hierarchy to enable bidirectional mapping between genotype and phenotype. Additionally, the use of attention mechanisms to screen for epistasis offers a novel and computationally efficient alternative to traditional exhaustive SNP-SNP interaction tests.

      Impact

      Target Audience

      • Specialized: Computational biologists working on interpretable machine learning methods in genomics.
      • Broader: Geneticists investigating polygenic traits and drug developers focusing on pathway-level therapeutic targets.

      Limitations vs. Contributions

      While the work presents a clear conceptual advance by incorporating hierarchical biological priors and attention mechanisms, the technical contribution is somewhat limited by its validation on a single trait and the absence of simulation-based benchmarking. Nevertheless, the framework shows potential if extended to other traits and experimentally validated.

      Overall Assessment

      Recommendation: Major Revision

      Strengths:

      • Predictive performance appears strong.
      • The use of biological priors enables interpretability at the pathway level.

      Major Weaknesses:

      • The current validation is limited to a single trait, restricting generalizability.
      • The manuscript lacks a complete and clear description of the model architecture.
      • No simulations are provided to assess the method's ability to recover known epistatic interactions or pathways.

      Reviewer Expertise: Machine learning applications in genomics and genetics.

    1. Laboratory Markers
       • Histamine: rises within 5–10 minutes; remains elevated for 30–60 minutes; best measured between 10 minutes and 1 hour.
       • Tryptase: peaks 60–90 minutes after onset; may remain elevated for up to 5 hours; best measured 1–2 hours after anaphylaxis (no later than 6 hours).
       • Alpha and beta tryptase: alpha tryptase has a high basal level; in anaphylaxis, beta tryptase increases and alpha tryptase rises further.
       • Total tryptase / beta tryptase ratio: <10: no mastocytosis; ≥20: mastocytosis.

      ① Laboratory Markers

      ② Histamine:

      ③ Rises within 5–10 minutes

      ④ Remains elevated for 30–60 minutes

      ⑤ Best measured between 10 minutes and 1 hour

      ⑥ Tryptase:

      ⑦ Peaks 60–90 minutes after onset

      ⑧ May remain elevated for up to 5 hours

      ⑨ Best measured 1–2 hours after anaphylaxis (no later than 6 hours)

      ⑩ Alpha and beta tryptase:

      ⑪ Alpha tryptase has a high basal level

      ⑫ In anaphylaxis, beta tryptase increases and alpha tryptase rises further

      ⑬ (Total tryptase / beta tryptase):

      ⑭ <10: no mastocytosis

      ⑮ ≥20: mastocytosis
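      The ratio rule in items ⑬–⑮ can be sketched as a tiny classifier. This is an illustrative sketch only, not clinical software: the function name is ours, and the handling of ratios between 10 and 20 (which the note leaves unspecified) is our assumption, returned as "indeterminate".

      ```python
      def classify_tryptase_ratio(total_tryptase: float, beta_tryptase: float) -> str:
          """Illustrative sketch of the total/beta tryptase ratio rule from the note.

          Ratio < 10  -> mastocytosis not suggested
          Ratio >= 20 -> mastocytosis suggested
          10 <= ratio < 20 -> indeterminate (not specified in the note; our assumption)
          """
          if beta_tryptase <= 0:
              raise ValueError("beta tryptase must be a positive value")
          ratio = total_tryptase / beta_tryptase
          if ratio < 10:
              return "no mastocytosis"
          if ratio >= 20:
              return "mastocytosis"
          return "indeterminate"

      # Hypothetical values, for illustration only:
      print(classify_tryptase_ratio(45.0, 9.0))   # ratio 5  -> "no mastocytosis"
      print(classify_tryptase_ratio(60.0, 3.0))   # ratio 20 -> "mastocytosis"
      ```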

    Annotators

    1. ntexte », Alexandre Faye.↩︎

      Notes 35 and 36 do not appear here because they have no anchor in the main text. Could you check whether these notes are still relevant and whether they can be deleted?

      In addition, the reference given in note 36 is already mentioned in the conclusion, so the note does not seem necessary.

    2. Toutes les citations sont faites à partir de la transcrip

      I suggest rewording the footnote so that the quotation comes first, before specifying where it comes from.

      Here is my proposed rewording:

      " [Le] pouvoir d'adhésion, quand ce projet est arrivé, ça a été un peu comme une bouffée d'oxygène dans notre quotidien. " Toutes les citations [...]

    3. Les actes de la conférence ont été publiés. Gebeil, Sophie, et Jean-Christophe Peyssard, éd. Exploring the Archived Web during a Highly Transform

      As stated in the comment above, I suggest deleting this footnote, whose reference is already given in the body of the text.

    4. Grands sites archéologiques, Célébrations nationales et Recherches ethnologiques

      In the latest version of the article, the typography was updated to use italics, since that is the formatting the OQLF recommends for the titles of collections.

    5. Bibliographie ebastienmagro.net/2016/03/14/une-archeologie-des-premiers-sites-web-de-musees-en-france/]{.underline}](https://sebastienmagro.net/2016/03/14/une-archeologie-des-premiers-sites-web-de-musees-en-france/)

      A typo that was corrected in the latest version of the article.

    6. (Bordeaux, Poulot, et Triquet 2020)

      The page number from which this quotation is taken should be indicated.

      I would also add that a space before the reference was added in the latest version.

    7. (Yannick Vernet)

      Here too, I suggest putting the name of the quotation's author in parentheses and adding the citation key for the study day beside it once it has been created.

      This will make it possible to delete the footnote.

    8. 27 (Filippo Vancini)

      Since the footnote only names the author of the quotation and refers to the study day, I suggest adding the author's name in parentheses, followed by the citation key once it has been created. This will make it possible to remove the footnote and interrupt the reading less.

    9. 22

      Could you send us all the available information on the study day and the transcriptions (who carried them out, on what date, etc.)?

      This will allow us to create a citation key and insert it into the footnotes.

    10. de leur fonction d’ordonnancement

      This phrase is rather heavy and hard to grasp. I would suggest lightening it by replacing it with "de la manière d'agencer l'identité genrée et les rapports sociaux entre les sexes."

    11. Et quel est le degré de rejouabilité des animations Flash ?

      There was a break in the first version of this passage, so I reworded it to complete the question, which was initially limited to "rejouabilité des animations Flash?"

    12. mais aussi que la capture d’un site ne soit pas complète, en fonction des liens suivis ou non par le robot.

      The sentence seems to break off. I would suggest the following rewording to make it read more smoothly:

      "[...] site. Il est aussi possible que la capture d'un site soit incomplète, en fonction des liens suivis ou non par le robot."

    1. examen físico revela sensibilidad difusa y vigilancia sin hallazgos localizados, ausencia de una fuente de infección tratable quirúrgicamente en un estudio de imagen, y la presencia de más de 250 neutrófilos/mL en fluido obtenido mediante paracentesis.

      x

    1. La Salle

      René-Robert Cavelier, Sieur de La Salle (November 22, 1643 – March 19, 1687), was a French explorer and fur trader in North America. He explored the Great Lakes region of the United States and Canada, and the Mississippi River. He is best known for an early 1682 expedition in which he canoed the lower Mississippi River from the mouth of the Illinois River to the Gulf of Mexico.

      https://en.wikipedia.org/wiki/Ren%C3%A9-Robert_Cavelier,_Sieur_de_La_Salle

    1. The switcheroo() function returns "ab" for x = "a" or x = "b", "cd" for x = "c" or x = "d", and NULL for x = "e" or any other value of x not in c("a", "b", "c", "d").

      ... an example of a use case?
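      In response to the annotator's question, here is a minimal sketch of one use case for this kind of many-to-one dispatch: collapsing fine-grained category codes into coarser groups before tallying. The original function is written in R; this is a Python analogue of the described behavior, with `None` standing in for R's NULL, and the data values are our own illustrative assumptions.

      ```python
      def switcheroo(x):
          """Python analogue of the described R switcheroo(): "ab" for "a"/"b",
          "cd" for "c"/"d", and None (R's NULL) for any other value."""
          mapping = {"a": "ab", "b": "ab", "c": "cd", "d": "cd"}
          return mapping.get(x)  # dict.get returns None for unlisted keys

      # Use case: merge fine-grained codes into coarser groups before counting.
      codes = ["a", "b", "b", "c", "e"]
      groups = [switcheroo(c) for c in codes]
      print(groups)  # ['ab', 'ab', 'ab', 'cd', None]
      ```

      In the R original, the same grouping is achieved via `switch()`'s fall-through of empty arguments, which is why "a" and "b" share the "ab" result.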

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      Fungal survival and pathogenicity rely on the ability to undergo reversible morphological transitions, which are often linked to nutrient availability. In this study, the authors uncover a conserved connection between glycolytic activity and sulfur amino acid biosynthesis that drives morphogenesis in two fungal model systems. By disentangling this process from canonical cAMP signaling, the authors identify a new metabolic axis that integrates central carbon metabolism with developmental plasticity and virulence.

      Strengths:

      The study integrates different experimental approaches, including genetic, biochemical, transcriptomic, and morphological analyses, and convincingly demonstrates that perturbations in glycolysis alter sulfur metabolic pathways and thus impact pseudohyphal and hyphal differentiation. Overall, this work offers new and important insights into how metabolic fluxes are intertwined with fungal developmental programs and therefore opens new perspectives to investigate morphological transitioning in fungi.

      We thank the reviewer for finding this study to be of importance and for appreciating our multipronged approach to substantiate our finding that perturbations in glycolysis alter sulfur metabolism and thus impact pseudohyphal and hyphal differentiation in fungi.

      Weaknesses:

      A few aspects could be improved to strengthen the conclusions. Firstly, the striking transcriptomic changes observed upon 2DG treatment should be analyzed in S. cerevisiae adh1 and pfk1 deletion strains, for instance, through qPCR or western blot analyses of sulfur metabolism genes, to confirm that observed changes in 2DG conditions mirror those seen in genetic mutants. Secondly, differences between methionine and cysteine in their ability to rescue the mutant phenotype in both species are not mentioned, nor discussed in more detail. This is especially important as there seem to be differences between S. cerevisiae and C. albicans, which might point to subtle but specific metabolic adaptations.

      The authors are also encouraged to refine several figure elements for clarity and comparability (e.g., harmonized axes in bar plots), condense the discussion to emphasize the conceptual advances over a summary of the results, and shorten figure legends.

      We are grateful for this valuable and constructive feedback, and we agree with the reviewer on the necessity of performing RT-qPCR analysis of sulfur metabolism genes in ∆∆pfk1 and ∆∆adh1 strains of S. cerevisiae to validate our RNA-Seq results using 2DG. We have performed this experiment, and our results show that several genes involved in the de novo biosynthesis of sulfur-containing amino acids are downregulated in both the ∆∆pfk1 and ∆∆adh1 strains, corroborating the downregulation of sulfur metabolism genes in the 2DG treated samples. This new data is now included in the revised manuscript as Supplementary Figure 2C. 

      Furthermore, we acknowledge the reviewer’s point regarding the significance of comparing, between S. cerevisiae and C. albicans, the differences in the ability of methionine and cysteine to rescue the filamentation defects exhibited by the mutants. The observed differences between S. cerevisiae and C. albicans likely highlight species-specific metabolic adaptations within the sulfur assimilation pathway. While both yeasts employ the transsulfuration pathway to interconvert these sulfur-containing amino acids, the precise regulatory points, including the specific enzymes, their compartmentalization, and transcriptional control, are not identical. For instance, differences in the feedback inhibition mechanisms or the expression levels of key transsulfuration enzymes between S. cerevisiae and C. albicans could explain the variations in the phenotypic rescue experiments (Chebaro et al., 2017; Lombardi et al., 2024; Rouillon et al., 2000; Shrivastava et al., 2021; Thomas and Surdin-Kerjan, 1997). Furthermore, species-specific differences in amino acid transport systems (permeases) add another layer of complexity. S. cerevisiae primarily uses multiple low-affinity permeases for cysteine transport (Gap1, Bap2, Bap3, Tat1, Tat2, Agp1, Gnp1, Yct1), while relying on a limited set of high-affinity transporters (like Mup1) for methionine transport, with the added complexity that its methionine transporters can also transport cysteine (Düring-Olsen et al., 1999; Huang et al., 2017; Kosugi et al., 2001; Menant et al., 2006). In contrast, C. albicans utilizes high-affinity transporters for the uptake of both amino acids, employing Cyn1 specifically for cysteine and Mup1 for methionine, indicating a greater reliance on dedicated transport mechanisms for these sulfur-containing molecules in the pathogenic yeast (Schrevens et al., 2018; Yadav and Bachhawat, 2011). A combination of these factors likely underlies the differences in the ability of cysteine and methionine to rescue filamentation in S. cerevisiae and C. albicans.

      Finally, we have enhanced the quantitative rigor and clarity of the data presentation in the revised manuscript by implementing Y-axis uniformity across all relevant bar graphs to facilitate a more robust and direct comparative analysis. We have also condensed the discussion to emphasize the conceptual advances and have shortened the figure legends, as per the reviewer's suggestions.

      Reviewer #2 (Public review):

      Summary:

      This manuscript investigates the interplay between glycolysis and sulfur metabolism in regulating fungal morphogenesis and virulence. Using both Saccharomyces cerevisiae and Candida albicans, the authors demonstrate that glycolytic flux is essential for morphogenesis under nitrogen-limiting conditions, acting independently of the established cAMP-PKA pathway. Transcriptomic and genetic analyses reveal that glycolysis influences the de novo biosynthesis of sulfur-containing amino acids, specifically cysteine and methionine. Notably, supplementation with sulfur sources restores morphogenetic and virulence defects in glycolysis-deficient mutants, thereby linking core carbon metabolism with sulfur assimilation and fungal pathogenicity.

      Strengths:

      The work identifies a previously uncharacterized link between glycolysis and sulfur metabolism in fungi, bridging metabolic and morphogenetic regulation, which is an important conceptual advance for understanding fungal pathogenicity. Demonstrating that cysteine supplementation rescues virulence defects in animal models connects basic metabolism to infection outcomes, which adds to the work's biomedical importance.

      We would like to thank the reviewer for the positive comments on our work. We are pleased that they recognize the novel metabolic link between glycolysis and sulfur metabolism as a key conceptual advance in fungal morphogenesis. 

      Weaknesses:

      The proposed model that glycolytic flux modulates Met30 activity post-translationally remains speculative. While data support Met4 stabilization in met30 deletion strains, the mechanism of Met30 modulation by glycolysis is not demonstrated.

      We thank the reviewer for this valuable feedback. The activity of the SCF<sup>Met30</sup> E3 ubiquitin ligase, mediated by the F-box protein Met30, is dynamically regulated through both proteolytic degradation and its dissociation from the SCF complex, to coordinate sulfur metabolism and cell cycle progression (Smothers et al., 2000; Yen et al., 2005). Our transcriptomic (RNA-seq) and protein expression analyses (Fig. 3J) confirm that Met30 expression is not differentially regulated in the presence of 2DG, effectively eliminating changes in synthesis or SCF<sup>Met30</sup> proteasomal degradation as the dominant regulatory mechanism. This observation is consistent with the established paradigm wherein stress signals, such as cadmium (Cd<sup>2+</sup>) exposure, rapidly inactivate the SCF<sup>Met30</sup> E3 ubiquitin ligase via the dissociation of Met30 from the Skp1 subunit of the SCF complex (Lauinger et al., 2024; Yen et al., 2005). We therefore propose that active glycolytic flux modulates SCF<sup>Met30</sup> activity post-translationally, specifically by triggering Met30 detachment from the SCF complex. This mechanism would stabilize the primary substrate, the transcription factor Met4, thus promoting the biosynthesis of sulfur-containing amino acids. Mechanistic validation of this hypothesis, particularly the assessment of Met30 dissociation from the SCF<sup>Met30</sup> complex via immunoprecipitation (IP), is technically challenging. Since these experiments involve the isolation of cells from colonies undergoing pseudohyphal differentiation on solid media (pseudohyphal differentiation does not occur in nitrogen-limiting liquid media (Gancedo, 2001; Gimeno et al., 1992)), current cell yields (OD<sub>600</sub>≈1 from ≈80-100 colonies) are significantly below what is needed to obtain sufficient total protein for standard pull-down assays (OD<sub>600</sub>≈600-800 is required to achieve 1-2 mg/ml of total protein, the standard requirement for pull-down protocols in S. cerevisiae (Lauinger et al., 2024)).

      Given that the primary objective of our study is to establish the novel regulatory link between glycolysis and sulfur metabolism in the context of fungal morphogenesis, we would like to explore these crucial mechanistic details, in depth, in a subsequent study.

      Reviewer #3 (Public review):

      This study investigates the connection between glycolysis and the biosynthesis of sulfur-containing amino acids in controlling fungal morphogenesis, using Saccharomyces cerevisiae and C. albicans as model organisms. The authors identify a conserved metabolic axis that integrates glycolysis with cysteine/methionine biosynthetic pathways to influence morphological transitions. This work broadens the current understanding of fungal morphogenesis, which has largely focused on gene regulatory networks and cAMP-dependent signaling pathways, by emphasizing the contribution of metabolic control mechanisms. However, despite the novel conceptual framework, the study provides limited mechanistic characterization of how the sulfur metabolism and glycolysis blockade directly drive morphological outcomes. In particular, the rationale for selecting specific gene deletions, such as Met32 (and not Met4), or the Met30 deletion used to probe this pathway, is not clearly explained, making it difficult to assess whether these targets comprehensively represent the metabolic nodes proposed to be critical. Further supportive data and experimental validation would strengthen the claims on connections between glycolysis, sulfur amino acid metabolism, and virulence.

      Strengths:

      (1) The delineation of how glycolytic flux regulates fungal morphogenesis through a cAMP-independent mechanism is a significant advancement. The coupling of glycolysis with the de novo biosynthesis of sulfur-containing amino acids, a requirement for morphogenesis, introduces a novel and unexpected layer of regulation.

      (2) Demonstrating this mechanism in both S. cerevisiae and C. albicans strengthens the argument for its evolutionary conservation and biological importance.

      (3) The ability to rescue the morphogenesis defect through exogenous supplementation of sulfur-containing amino acids provides functional validation.

      (4) The findings from the murine Pfk1-deficient model underscore the clinical significance of metabolic pathways in fungal infections.

      We are grateful for this comprehensive and insightful summary of our work. We deeply appreciate the reviewer's recognition of the key conceptual breakthroughs regarding the metabolic regulation of fungal morphogenesis and the clinical relevance of our findings.

      Weaknesses:

      (1) While the link between glycolysis and sulfur amino acid biosynthesis is established via transcriptomic and proteomic analysis, the specific regulation connecting these pathways via Met30 remains to be elucidated. For example, what are the expression and protein levels of Met30 in the initial analysis from Figure 2? How specific is this effect on Met30 in anaerobic versus aerobic glycolysis, especially when the pentose phosphate pathway is involved in the growth of the cells when glycolysis is perturbed?

      We are grateful for the insightful feedback provided by the reviewer. S. cerevisiae is a Crabtree-positive organism that primarily uses anaerobic glycolysis to metabolize glucose under glucose-replete conditions (Barford and Hall, 1979; De Deken, 1966), and our pseudohyphal differentiation assays are performed in glucose-rich conditions (Gimeno et al., 1992). Furthermore, perturbation of glycolysis is known to induce compensatory upregulation of the pentose phosphate pathway (PPP) (Ralser et al., 2007), and we have also observed upregulation of the gene encoding transketolase-1 (Tkl1), a key PPP enzyme, in our RNA-seq data. Importantly, our transcriptomic (RNA-seq) and protein expression analyses (Fig. 3J) confirm that Met30 expression is not differentially regulated in the presence of 2DG, effectively eliminating changes in synthesis or SCF<sup>Met30</sup> proteasomal degradation as the dominant regulatory mechanism. This aligns with the established paradigm wherein stress signals, such as cadmium (Cd<sup>2+</sup>) exposure, rapidly inactivate the SCF<sup>Met30</sup> E3 ubiquitin ligase via Met30 dissociation from the Skp1 subunit of the complex (Lauinger et al., 2024; Yen et al., 2005). We therefore propose that active glycolytic flux modulates SCF<sup>Met30</sup> activity post-translationally, specifically by triggering Met30 detachment from the SCF complex. This mechanism would stabilize the primary substrate, the transcription factor Met4, thus promoting the biosynthesis of sulfur-containing amino acids. Further experiments are required to delineate the specific role of the pentose phosphate pathway in this proposed regulation of Met30 activity under glycolytic perturbation, and this will be explored in our subsequent study.

      (2) Including detailed metabolite profiling could have strengthened the metabolic connection and provided additional insights into intermediate flux changes, i.e., measuring levels of metabolites to check if cysteine or methionine levels are influenced intracellularly. Also, it is expected to see how Met30 deletion could affect cell growth. Data on Met30 deletion and its effect on growth are not included, especially given that a viable heterozygous Met30 strain has been established. Measuring the cysteine or methionine levels using metabolomic analysis would further strengthen the claims in every section.

      We are grateful to the reviewer for this constructive feedback. To address the potential impact of met30 deletion on cell growth, we have included new data (Suppl. Fig. 4A) demonstrating that deletion of a single copy of met30 in diploid S. cerevisiae does not compromise overall cell growth under nitrogen-limiting conditions, as the ∆met30 strain grows similarly to the wild-type strain.

      Our pseudohyphal/hyphal differentiation assays show that the defects induced by glycolytic perturbation are fully rescued by exogenous supplementation of the sulfur-containing amino acids cysteine or methionine. Since these data conclusively demonstrate that the primary metabolic limitation caused by the perturbation of glycolysis, which leads to filamentation defects, is sulfur metabolism, we posit that comprehensive metabolic profiling would primarily reconfirm these results. We believe that our in vitro and in vivo sulfur add-back experiments sufficiently substantiate the novel regulatory metabolic link between glycolysis and sulfur metabolism.

      (3) Comparing the previous bioRxiv version of this article from May 2025 (doi: https://doi.org/10.1101/2025.05.14.654021) with the recent bioRxiv version (doi: https://doi.org/10.1101/2025.05.14.654021), there have been some changes: a Met30 deletion has recently been included, and chemical perturbation of glycolysis has been added as new data. Although the changes incorporated in the recent version of the article improve the illustration of the hypothesis in Figure 6, which connects glycolysis to sulfur metabolism, the gene expression and protein levels of all genes involved in the illustrated hypothesis are not consistently shown. For example, in some cases the Met4 expression is not shown (Figure 4), and the Met30 expression is not shown during profiling (gene expression or protein levels) throughout the manuscript. Lack of consistency in profiling the same set of key genes makes understanding more complicated.

      We thank the reviewer for this feedback, which helps us to clarify the scope of our transcriptomic analysis. Our decision to focus our RT-qPCR experiments on downstream targets, while excluding met4 and met30, is based on their known regulatory mechanisms. Met4 activity is predominantly regulated by post-translational ubiquitination by the SCF<sup>Met30</sup> complex followed by its degradation (Rouillon et al., 2000; Shrivastava et al., 2021; Smothers et al., 2000), while Met30 activity is primarily regulated by its auto-degradation or its dissociation from the SCF<sup>Met30</sup> complex (Lauinger et al., 2024; Smothers et al., 2000; Yen et al., 2005). Consistent with this, our RNA-seq results indicate that neither met4 nor met30 transcripts are differentially expressed in response to 2DG addition. For all our RT-qPCR analyses in S. cerevisiae and C. albicans, we have consistently used the same set of sulfur metabolism genes: met32, met3, met5, met10 and met17. Our protein expression analysis of Met30 in S. cerevisiae (Fig. 3J) confirms that Met30 expression is not differentially regulated in the presence of 2DG, effectively eliminating changes in synthesis or SCF<sup>Met30</sup> proteasomal degradation as the dominant regulatory mechanism.

      (4) The demonstrated link between glycolysis and sulfur amino acid biosynthesis, along with its implications for virulence in C. albicans, is important for understanding fungal adaptation, as mentioned in the article; however, the Met4 activation was not fully characterized, nor were the data presented when virulence was assessed in Figure 4. Why is Met4 not included in Figure 4D and I? Especially, according to Figure 6, Met4 activation is crucial and guides the differences between glycolysis-active and inactive conditions.

      We thank the reviewer for their input. As the Met4 transcription factor in C. albicans is primarily regulated post-translationally through its degradation and inactivation by the SCF<sup>Met30</sup> E3 ubiquitin ligase complex (Shrivastava et al., 2021), we opted to monitor the transcriptional status of downstream targets of Met4 (i.e., genes directly regulated by Met4), as these are the genes that exhibit the most direct and functionally relevant transcriptional changes in response to altered Met4 levels.

      (5) Similarly, the rationale behind selecting Met32 for characterizing sulfur metabolism is unclear. Deletion of Met32 resulted in a significant reduction in pseudohyphal differentiation; why is this attributed only to Met32? What happens if Met4 is deleted? It is not justified why Met32, rather than Met4, was chosen. Figure 6 clearly hypothesizes that Met4 activation is the key to the mechanism.

      We sincerely thank the reviewer for this insightful query regarding our selection of met32 for our gene deletion experiments. The choice of the ∆∆met32 strain was strategically motivated by its unique phenotypic properties within the pathway for de novo biosynthesis of sulfur-containing amino acids. While deletions of most of the genes that encode proteins involved in the de novo biosynthesis of sulfur-containing amino acids result in auxotrophy for methionine or cysteine, the ∆∆met32 strain does not exhibit this phenotype (Blaiseau et al., 1997). This key distinction is attributed to the functional redundancy provided by the paralogous gene, met31 (Blaiseau et al., 1997). Crucially, given that deletion of the central transcriptional regulator, met4, results in cysteine/methionine auxotrophy, the use of the ∆∆met32 strain provides an essential, viable experimental model for investigating the role of sulfur metabolism during pseudohyphal differentiation in S. cerevisiae.

      (6) The comparative RT-qPCR in Figure 5 did not account for sulfur metabolism genes, whereas it was focused only on virulence and hyphal differentiation. Is there data to support the levels of sulfur metabolism genes?

      We thank the reviewer for this feedback. We wish to respectfully clarify that the data pertaining to expression of sulfur metabolism genes in the presence of 2DG or in the ∆∆pfk1 strain in C. albicans are already included and discussed within the manuscript. These results can be found in Figure 4, panels D and I, respectively.

      (7) To validate the proposed interlink between sulfur metabolism and virulence, it is recommended that the gene sets (illustrated in Figure 6) be consistently included across all comparative data throughout the comparisons. Excluding sulfur metabolism genes in Figure 5 prevents the experiment from demonstrating the coordinated role of glycolysis perturbation → sulfur metabolism → virulence. The same is true for other comparisons, where the lack of data on Met30, Met4, etc., makes it hard to connect the hypothesis. It is also recommended to check the gene expression of other genes related to the cAMP pathway and report them to confirm the cAMP-independent mechanism. For example, gpa2 deletion was used to confirm the effects of cAMP supplementation, but the expression of this gene was not assessed in the RNA-seq analysis in Figure 2. It would be beneficial to show the expression of cAMP-related genes to completely confirm that they do not play a role in the claims in Figure 2.

      We thank the reviewer for this valuable feedback. The transcriptional analysis of the sulfur metabolism genes in the presence of 2DG and the ∆∆pfk1 strain is shown in Figures 4D and 4I.

      Our RNA-seq analysis (Author response image 1) confirms that there is no significant transcriptional change in the expression of cAMP-PKA pathway associated genes (Log2 fold change ≥ 1 for upregulated genes and Log2 fold change ≤ -1 for downregulated genes) in 2DG treated cells compared to the untreated control cells, reinforcing our conclusion that the glycolytic regulation of fungal morphogenesis is mediated through a cAMP-PKA pathway independent mechanism.

      Author response image 1.

      (8) Although the NAC supplementation study is included in the new version of the article compared to the previous version on bioRxiv (May 2025), the link to sulfur metabolism is not well characterized in Figure 5 and the related datasets. The main focus of the manuscript is to delineate the role of sulfur metabolism; hence, it is anticipated that Figure 5 will include sulfur-related metabolic genes and their links to the pfk1 deletion, using RT-qPCR measurements as shown for the virulence genes.

      We thank the reviewer for this question. The relevant data are indeed present within the current submission. We respectfully direct the reviewer's attention to Figure 4, panels D and I, where the data pertaining to expression of sulfur metabolism genes in the presence of 2DG or in the ∆∆pfk1 strain in C. albicans can be found.

      (9) The manuscript would benefit from more information added to the introduction section and literature supports for some of the findings reported earlier, including the role of (i) cAMP-PKA and MAPK pathways, (ii) what is known in the literature that reports about the treatment with 2DG (role of Snf1, HXT1, and HXT3), as well as how gpa2 is involved. Some sentences in the manuscripts are repetitive; it would be beneficial to add more relevant sections to the introduction and discussion to clarify the rationale for gene choices.

      We thank the reviewer for this valuable feedback. We have incorporated these changes in our revised manuscript.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) Line 107: As morphological transitions are indeed a conserved phenomenon across fungal species, hosts & environmental niches, the authors could refer to a few more here (infection structures like appressoria; fruiting bodies, etc.).

      We thank the reviewer for this valuable feedback. We have incorporated these changes in our revised manuscript.

      Line 119/120: That's a bit misleading in my opinion. Gpr1 acts as a key sensor of external carbon, while Ras proteins control the cAMP pathway as intracellular sensory proteins. That should be stated more clearly. cAMP is the output and not the sensor.

      We appreciate the reviewer's detailed attention to this signaling network. We have revised the manuscript to precisely reflect this established signaling hierarchy for maximum clarity.

      (2) Line 180: ..differentiation

      We thank the reviewer for this valuable feedback. We have incorporated this change in our revised manuscript.

      (3) Figure 1 panels C & F. The authors should provide the same scale for all experiments. Otherwise, the interpretation can be difficult. The same applies to the different bar plots in Figure 4. Have the authors quantified pseudohyphal differentiation in the cAMP add-back assays? I agree that the chosen images look convincing, but they don't reflect quantitative analyses.

      We thank the reviewer for the detailed and constructive feedback. We have made the Y-axis scales uniform across the relevant panels to improve the clarity of our data presentation in the revised manuscript.

      We have also incorporated the quantitative analysis of the cAMP add-back assays in S. cerevisiae, in Figure 2 Panel L.

      (4) Line 367/68: "cysteine or methionine was able to completely rescue". Here, the authors should phrase their wording more carefully. Figure 3C shows the complete rescue of the phenotype qualitatively, but Figure 3D clearly shows that there are differences between the supplementation of cysteine and methionine, with the latter not fully restoring the phenotype.

      We sincerely appreciate the reviewer's meticulous attention to the data interpretation. We fully agree that the initial phrasing in lines 367/368 requires adjustment, as Figure 3D establishes a quantitative difference in the efficiency of phenotypic rescue between cysteine and methionine supplementation. We have revised the text to articulate this difference.

      (5) Line 568: Here, apparently, the ability to rescue the differentiation phenotype is reversed compared to the experiment with S. cerevisiae. Cysteine only results in ~20% hyphal cells, while methionine restores to wild-type-like hyphal formation. Can the authors comment on where these differences might originate from? Is there a difference in the uptake of cysteine vs. methionine in the two species or consumption rates?

      We thank the reviewer for their detailed and constructive feedback. We believe this phenotypic difference may stem from the distinct metabolic prioritization of sulfur amino acids in C. albicans. Methionine is a known trigger for hyphal differentiation in C. albicans and serves as the immediate precursor for the universal methyl donor, S-adenosylmethionine (SAM) (Schrevens et al., 2018; Kraidlova et al., 2016). The morphological transition to hyphae involves a complex regulatory cascade that requires high rates of methylation, which in turn requires rapid and direct conversion of methionine into SAM (Kraidlova et al., 2016; Schrevens et al., 2018). Cysteine, however, must first be converted into methionine via the transsulfuration pathway to produce SAM, making it metabolically less efficient for these processes.

      Reviewer #2 (Recommendations for the authors):

      The study's comprehensive experimental approach with integrating pharmacological inhibition, genetic manipulation, transcriptomics, and infection animal model, provides strong evidence for a conserved mechanism, though some aspects need further clarification.

      Major Comments:

      (1) While the data suggest that glycolysis affects Met30 activity post-translationally, the underlying mechanism remains speculative. The authors should perform co-immunoprecipitation or ubiquitination assays to confirm whether glycolytic perturbation alters Met30-SCF complex interactions or Met4 ubiquitination levels.

      We thank the reviewer for this valuable feedback. The activity of the SCF<sup>Met30</sup> E3 ubiquitin ligase, mediated by the F-box protein Met30, is dynamically regulated through both proteolytic degradation and its dissociation from the SCF complex, to coordinate sulfur metabolism and cell cycle progression (Smothers et al., 2000; Yen et al., 2005). Our transcriptomic (RNA-seq) and protein expression analyses (Fig. 3J) confirm that Met30 expression is not differentially regulated in the presence of 2DG, effectively eliminating changes in synthesis or SCF<sup>Met30</sup> proteasomal degradation as the dominant regulatory mechanism. This observation is consistent with the established paradigm wherein stress signals, such as cadmium (Cd<sup>2+</sup>) exposure, rapidly inactivate the SCF<sup>Met30</sup> E3 ubiquitin ligase via the dissociation of Met30 from the Skp1 subunit of the SCF complex (Lauinger et al., 2024; Yen et al., 2005). We therefore propose that active glycolytic flux modulates SCF<sup>Met30</sup> activity post-translationally, specifically by triggering Met30 detachment from the SCF complex. This mechanism would stabilize the primary substrate, the transcription factor Met4, thus promoting the biosynthesis of sulfur-containing amino acids. Mechanistic validation of this hypothesis, particularly the assessment of Met30 dissociation from the SCF<sup>Met30</sup> complex via immunoprecipitation (IP), is technically challenging. Since these experiments involve the isolation of cells from colonies undergoing pseudohyphal differentiation on solid media (pseudohyphal differentiation does not occur in nitrogen-limiting liquid media (Gancedo, 2001; Gimeno et al., 1992)), current cell yields (OD<sub>600</sub>≈1 from ≈80-100 colonies) are significantly below what is needed to obtain sufficient total protein for standard pull-down assays (OD<sub>600</sub>≈600-800 is required to achieve 1-2 mg/ml of total protein, the standard requirement for pull-down protocols in S. cerevisiae (Lauinger et al., 2024)).

      Given that the primary objective of our study is to establish the novel regulatory link between glycolysis and sulfur metabolism in the context of fungal morphogenesis, we would like to explore these crucial mechanistic details, in depth, in a subsequent study.

      (2) 2DG can exert pleiotropic effects unrelated to glycolytic inhibition (e.g., ER stress, autophagy induction). The authors are encouraged to perform complementary metabolic flux analyses, such as quantification of glycolytic intermediates or ATP levels, to confirm specific glycolytic inhibition.

      We appreciate the reviewer's concern regarding the potential pleiotropic effects of 2DG. While we acknowledge that 2DG may induce secondary cellular stress, we are confident that the observed phenotypes are robustly attributable to glycolytic inhibition based on our complementary genetic evidence. Specifically, the deletion strains ∆∆pfk1 and ∆∆adh1, which genetically perturb distinct steps in glycolysis, recapitulate the phenotypic results observed with 2DG treatment. Given this strong congruence between chemical inhibition and specific genetic deletions of key glycolytic enzymes, we conclude that the observed phenotypes are predominantly driven by perturbation of the glycolytic pathway by 2DG.

      (3) The differential rescue effects (cysteine-only in inhibitor assays vs. both cysteine and methionine in genetic mutants) require further explanation. The authors should discuss potential differences in metabolic interconversion or amino acid transport that may account for this observation.

      We thank the reviewer for their valuable feedback. One explanation for the observed differential rescue effects of cysteine and methionine may lie in the distinct amino acid transport systems used by S. cerevisiae for these amino acids. S. cerevisiae primarily uses multiple low-affinity permeases (Gap1, Bap2, Bap3, Tat1, Tat2, Agp1, Gnp1, Yct1) for cysteine transport, while relying on a limited set of high-affinity transporters (such as Mup1) for methionine transport, with the added complexity that its methionine transporters can also transport cysteine (Düring-Olsen et al., 1999; Huang et al., 2017; Kosugi et al., 2001; Menant et al., 2006). Hence, cysteine uptake likely occurs with higher efficiency in S. cerevisiae than methionine uptake. Therefore, to achieve a comparable functional rescue by exogenous supplementation of methionine, it is necessary to use a higher concentration of methionine. When we performed our rescue experiments using higher concentrations of methionine, we did not see any rescue of pseudohyphal differentiation in the presence of 2DG; in fact, at higher concentrations of methionine, the wild-type strain failed to undergo pseudohyphal differentiation even in the absence of 2DG. This is likely because increasing the methionine concentration raises the overall nitrogen content of the medium, thereby making the medium less nitrogen-starved. This presents a major experimental constraint, as pseudohyphal differentiation is strictly dependent on nitrogen limitation, and the elevated nitrogen resulting from a higher methionine concentration can inhibit pseudohyphal differentiation.

      (4) NAC may influence host redox balance or immune responses. The discussion should consider whether the observed virulence rescue could partly result from host-directed effects.

      We thank the reviewer for this valuable feedback. We acknowledge the role of NAC in host-directed immune responses. It is important to note that, in the context of certain bacterial pathogens, NAC has been reported to augment cellular respiration, subsequently increasing reactive oxygen species (ROS) generation, which contributes to pathogen clearance (Shee et al., 2022). Interestingly, in our study, NAC supplementation was given to the mice prior to infection and maintained continuously throughout the duration of the experiment. This continuous supply of NAC likely contributes to the rescue of the virulence defects exhibited by the ∆∆pfk1 strain (Fig. 5I and J). Essentially, NAC likely allows the mutant to fully activate its essential virulence strategies (including morphological switching) to cause a successful infection in the host. As per the reviewer's suggestion, this has been included in the discussion section of the manuscript.

      Reviewer #3 (Recommendations for the authors):

      Most of the comments related to improving the manuscript have been provided in the public review. Here are some specifics for the authors to consider:

      (1) It is important to clarify the rationale for choosing specific gene deletions over other key genes (e.g., Met32 and Met30) and explain why Met4 was not included, given its proposed central role in Figure 6.

      We sincerely thank the reviewer for this insightful query regarding our selection of met32 for our gene deletion experiments. The choice of the ∆∆met32 strain was strategically motivated by its unique phenotypic properties within the pathway for de novo biosynthesis of sulfur-containing amino acids. While deletions of most of the genes that encode proteins involved in the de novo biosynthesis of sulfur-containing amino acids result in auxotrophy for methionine or cysteine, the ∆∆met32 strain does not exhibit this phenotype (Blaiseau et al., 1997). This key distinction is attributed to the functional redundancy provided by the paralogous gene, met31 (Blaiseau et al., 1997). Crucially, given that deletion of the central transcriptional regulator, met4, results in cysteine/methionine auxotrophy, the use of the ∆∆met32 strain provides an essential, viable experimental model for investigating the role of sulfur metabolism during pseudohyphal differentiation in S. cerevisiae.

      (2) Comparison of consistent gene and protein expression data (Met30, Met4, Met32) across all relevant figures and analyses would strengthen the mechanistic connection in a better way. Some data that might help connect the sections is not included; please see the public review for more details.

      We thank the reviewer for this valuable input, which helps us to clarify the scope of our transcriptomic analysis. Our decision to focus our RT-qPCR experiments on downstream targets, while excluding met4 and met30, is based on their known regulatory mechanisms. Met4 activity is predominantly regulated by post-translational ubiquitination by the SCF<sup>Met30</sup> complex followed by its degradation (Rouillon et al., 2000; Shrivastava et al., 2021; Smothers et al., 2000), while Met30 activity is primarily regulated by its auto-degradation or its dissociation from the SCF<sup>Met30</sup> complex (Lauinger et al., 2024; Smothers et al., 2000; Yen et al., 2005). Consistent with this, our RNA-seq results indicate that neither met4 nor met30 transcripts are differentially expressed in response to 2DG addition. For all our RT-qPCR analyses in S. cerevisiae and C. albicans, we have consistently used the same set of sulfur metabolism genes: met32, met3, met5, met10 and met17. Our protein expression analysis of Met30 in S. cerevisiae (Fig. 3J) confirms that Met30 expression is not differentially regulated in the presence of 2DG, effectively eliminating changes in synthesis or SCF<sup>Met30</sup> proteasomal degradation as the dominant regulatory mechanism.

      (3) Suggested to include metabolomic profiling (cysteine, methionine, and intermediate metabolites) to substantiate the proposed metabolic flux between glycolysis and sulfur metabolism.

      We thank the reviewer for this valuable input. Our pseudohyphal/hyphal differentiation assays show that the defects induced by glycolytic perturbation are fully rescued by exogenous supplementation of the sulfur-containing amino acids cysteine or methionine. Since these data conclusively demonstrate that the primary metabolic limitation caused by the perturbation of glycolysis, which leads to filamentation defects, is sulfur metabolism, we posit that comprehensive metabolic profiling would primarily reconfirm these results. We believe that our in vitro and in vivo sulfur add-back experiments sufficiently substantiate the novel regulatory metabolic link between glycolysis and sulfur metabolism.

      (4) Data on the effects of Met30 deletion on cell growth are currently not included, and relevant controls should be included to ensure observed phenotypes are not due to general growth defects.

We are grateful to the reviewer for this constructive feedback. To address the potential impact of met30 deletion on cell growth, we have included new data (Suppl. Fig. 4A) demonstrating that deletion of a single copy of met30 in diploid S. cerevisiae does not compromise overall growth under nitrogen-limiting conditions, as the ∆met30 strain grows similarly to the wild-type strain.

      (5) Expanding RT-qPCR and data from transcriptomic analyses to include sulfur metabolism genes and key cAMP pathway genes to confirm the proposed cAMP-independent mechanism during virulence characterization is necessary.

We thank the reviewer for this valuable feedback. The transcriptional analysis of the sulfur metabolism genes in the presence of 2DG and in the ∆∆pfk1 strain is shown in Figures 4D and 4I.

To confirm that glycolysis is critical for fungal morphogenesis in a cAMP-PKA pathway-independent manner under nitrogen-limiting conditions in C. albicans, we performed cAMP add-back assays. Corroborating our S. cerevisiae data, exogenous addition of cAMP failed to rescue the hyphal differentiation defect caused by perturbation of glycolysis through 2DG addition or by deletion of the pfk1 gene under nitrogen-limiting conditions in C. albicans. These data are now included in Suppl. Fig. 5B.

      (6) Enhancing the introduction and discussion by providing a clearer rationale for gene selection and more detailed references to established pathways (cAMP-PKA, MAPK, Snf1/HXT regulation, gpa2 involvement) is needed to reinstate the hypothesis.

      We thank the reviewer for this valuable feedback. We have incorporated these changes in our revised manuscript.

      (7) Reducing redundancy in the text and improving figure consistency, particularly by ensuring that the gene sets depicted in Figure 6 are represented across all datasets, would strengthen the interconnections among sections.

      We thank the reviewer for this valuable feedback.  We have incorporated these changes in our revised manuscript.

      References

Barford, J. P., & Hall, R. J. (1979). An examination of the Crabtree effect in Saccharomyces cerevisiae: the role of respiratory adaptation. Journal of General Microbiology. https://doi.org/10.1099/00221287-114-2-267

      Blaiseau, P. L., & Thomas, D. (1998). Multiple transcriptional activation complexes tether the yeast activator Met4 to DNA. The EMBO journal, 17(21), 6327–6336. https://doi.org/10.1093/emboj/17.21.6327

      Chebaro, Y., Lorenz, M., Fa, A., Zheng, R., & Gustin, M. (2017). Adaptation of Candida albicans to Reactive Sulfur Species. Genetics, 206(1), 151–162. https://doi.org/10.1534/genetics.116.199679

      De Deken R. H. (1966). The Crabtree effect: a regulatory system in yeast. Journal of general microbiology, 44(2), 149–156. https://doi.org/10.1099/00221287-44-2-149

      Düring-Olsen, L., Regenberg, B., Gjermansen, C., Kielland-Brandt, M. C., & Hansen, J. (1999). Cysteine uptake by Saccharomyces cerevisiae is accomplished by multiple permeases. Current genetics, 35(6), 609–617. https://doi.org/10.1007/s002940050459

      Gancedo J. M. (2001). Control of pseudohyphae formation in Saccharomyces cerevisiae. FEMS microbiology reviews, 25(1), 107–123. https://doi.org/10.1111/j.1574-6976.2001.tb00573.x

      Gimeno, C. J., Ljungdahl, P. O., Styles, C. A., & Fink, G. R. (1992). Unipolar cell divisions in the yeast S. cerevisiae lead to filamentous growth: regulation by starvation and RAS. Cell, 68(6), 1077–1090. https://doi.org/10.1016/0092-8674(92)90079-r

      Huang, C. W., Walker, M. E., Fedrizzi, B., Gardner, R. C., & Jiranek, V. (2017). Yeast genes involved in regulating cysteine uptake affect production of hydrogen sulfide from cysteine during fermentation. FEMS yeast research, 17(5), 10.1093/femsyr/fox046. https://doi.org/10.1093/femsyr/fox046

      Kosugi, A., Koizumi, Y., Yanagida, F., & Udaka, S. (2001). MUP1, high affinity methionine permease, is involved in cysteine uptake by Saccharomyces cerevisiae. Bioscience, biotechnology, and biochemistry, 65(3), 728–731. https://doi.org/10.1271/bbb.65.728

      Kraidlova, L., Schrevens, S., Tournu, H., Van Zeebroeck, G., Sychrova, H., & Van Dijck, P. (2016). Characterization of the Candida albicans Amino Acid Permease Family: Gap2 Is the Only General Amino Acid Permease and Gap4 Is an S-Adenosylmethionine (SAM) Transporter Required for SAM-Induced Morphogenesis. mSphere, 1(6), e00284-16. https://doi.org/10.1128/mSphere.00284-16

      Lauinger, L., Andronicos, A., Flick, K., Yu, C., Durairaj, G., Huang, L., & Kaiser, P. (2024). Cadmium binding by the F-box domain induces p97-mediated SCF complex disassembly to activate stress response programs. Nature communications, 15(1), 3894. https://doi.org/10.1038/s41467-024-48184-6

      Lombardi, L., Salzberg, L. I., Cinnéide, E. Ó., O'Brien, C., Morio, F., Turner, S. A., Byrne, K. P., & Butler, G. (2024). Alternative sulphur metabolism in the fungal pathogen Candida parapsilosis. Nature communications, 15(1), 9190. https://doi.org/10.1038/s41467-024-53442-8

      Menant, A., Barbey, R., & Thomas, D. (2006). Substrate-mediated remodeling of methionine transport by multiple ubiquitin-dependent mechanisms in yeast cells. The EMBO journal, 25(19), 4436–4447. https://doi.org/10.1038/sj.emboj.7601330

      Ralser, M., Wamelink, M. M., Kowald, A., Gerisch, B., Heeren, G., Struys, E. A., Klipp, E., Jakobs, C., Breitenbach, M., Lehrach, H., & Krobitsch, S. (2007). Dynamic rerouting of the carbohydrate flux is key to counteracting oxidative stress. Journal of biology, 6(4), 10. https://doi.org/10.1186/jbiol61

Rouillon, A., Barbey, R., Patton, E. E., Tyers, M., & Thomas, D. (2000). Feedback-regulated degradation of the transcriptional activator Met4 is triggered by the SCF(Met30) complex. The EMBO journal, 19(2), 282–294. https://doi.org/10.1093/emboj/19.2.282

      Schrevens, S., Van Zeebroeck, G., Riedelberger, M., Tournu, H., Kuchler, K., & Van Dijck, P. (2018). Methionine is required for cAMP-PKA-mediated morphogenesis and virulence of Candida albicans. Molecular microbiology, 108(3), 258–275. https://doi.org/10.1111/mmi.13933

      Shee, S., Singh, S., Tripathi, A., Thakur, C., Kumar T, A., Das, M., Yadav, V., Kohli, S., Rajmani, R. S., Chandra, N., Chakrapani, H., Drlica, K., & Singh, A. (2022). Moxifloxacin-Mediated Killing of Mycobacterium tuberculosis Involves Respiratory Downshift, Reductive Stress, and Accumulation of Reactive Oxygen Species. Antimicrobial agents and chemotherapy, 66(9), e0059222. https://doi.org/10.1128/aac.00592-22

      Shrivastava, M., Feng, J., Coles, M., Clark, B., Islam, A., Dumeaux, V., & Whiteway, M. (2021). Modulation of the complex regulatory network for methionine biosynthesis in fungi. Genetics, 217(2), iyaa049. https://doi.org/10.1093/genetics/iyaa049

      Smothers, D. B., Kozubowski, L., Dixon, C., Goebl, M. G., & Mathias, N. (2000). The abundance of Met30p limits SCF(Met30p) complex activity and is regulated by methionine availability. Molecular and cellular biology, 20(21), 7845–7852. https://doi.org/10.1128/MCB.20.21.7845-7852.2000

Thomas, D., & Surdin-Kerjan, Y. (1997). Metabolism of sulfur amino acids in Saccharomyces cerevisiae. Microbiology and molecular biology reviews : MMBR, 61(4), 503–532. https://doi.org/10.1128/mmbr.61.4.503-532.1997

      Yadav, A. K., & Bachhawat, A. K. (2011). CgCYN1, a plasma membrane cystine-specific transporter of Candida glabrata with orthologues prevalent among pathogenic yeast and fungi. The Journal of biological chemistry, 286(22), 19714–19723. https://doi.org/10.1074/jbc.M111.240648

      Yen, J. L., Su, N. Y., & Kaiser, P. (2005). The yeast ubiquitin ligase SCFMet30 regulates heavy metal response. Molecular biology of the cell, 16(4), 1872–1882. https://doi.org/10.1091/mbc.e04-12-1130

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Joint Public Review:

      In this work, the authors present DeepTX, a computational tool for studying transcriptional bursting using single-cell RNA sequencing (scRNA-seq) data and deep learning. The method aims to infer transcriptional burst dynamics-including key model parameters and the associated steady-state distributions-directly from noisy single-cell data. The authors apply DeepTX to datasets from DNA damage experiments, revealing distinct regulatory patterns: IdU treatment in mouse stem cells increases burst size, promoting differentiation, while 5FU alters burst frequency in human cancer cells, driving apoptosis or survival depending on dose. These findings underscore the role of burst regulation in mediating cell fate responses to DNA damage.

      The main strength of this study lies in its methodological contribution. DeepTX integrates a non-Markovian mechanistic model with deep learning to approximate steady-state mRNA distributions as mixtures of negative binomial distributions, enabling genome-scale parameter inference with reduced computational cost. The authors provide a clear discussion of the framework's assumptions, including reliance on steady-state data and the inherent unidentifiability of parameter sets, and they outline how the model could be extended to other regulatory processes.

      The revised manuscript addresses many of the original concerns, particularly regarding sample size requirements, distributional assumptions, and the biological interpretation of inferred parameters. However, the framework remains limited by the constraints of snapshot data and cannot yet resolve dynamic heterogeneity or causality. The manuscript would also benefit from a broader contextualisation of DeepTX within the landscape of existing tools linking mechanistic modelling and single-cell transcriptomics. Finally, the interpretation of pathway enrichment analyses still warrants clarification.

Overall, this work represents a valuable contribution to the integration of mechanistic models with high-dimensional single-cell data. It will be of interest to researchers in systems biology, bioinformatics, and computational modelling.
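As a minimal, hedged sketch of the distributional form discussed in this review, the code below evaluates a discrete mixture of negative binomials of the kind DeepTX is described as fitting; the weights and parameters are hypothetical illustrations, not values inferred by DeepTX.

```python
import numpy as np
from math import exp, lgamma, log

def nb_pmf(k, r, p):
    """Negative binomial pmf: probability of k 'failures' given r 'successes' and success prob p."""
    return exp(lgamma(k + r) - lgamma(k + 1) - lgamma(r) + r * log(p) + k * log(1.0 - p))

def nb_mixture_pmf(ks, weights, rs, ps):
    """Discrete mixture of negative binomials evaluated over the mRNA counts in ks."""
    return np.array([sum(w * nb_pmf(k, r, p) for w, r, p in zip(weights, rs, ps))
                     for k in ks])

# Hypothetical two-component (potentially bimodal) mixture; illustrative parameters only
counts = np.arange(0, 500)
pmf = nb_mixture_pmf(counts, weights=[0.3, 0.7], rs=[2.0, 10.0], ps=[0.10, 0.05])
```

A mixture of this form can capture both unimodal and bimodal steady-state count distributions, which is one reason it is a convenient target for a neural-network solver.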

      Recommendations for the authors:

      We thank the authors for their thorough revision and for addressing many of the points raised during the initial review. The revised manuscript presents an improved and clearer account of the methodology and its implications. However, several aspects would benefit from further clarification and refinement to strengthen the presentation and avoid overstatement.

      (1) Contextualization within the existing literature

      The manuscript would benefit from placing DeepTX more clearly in the context of other computational tools developed to connect mechanistic modelling and single-cell RNA sequencing data. This is an active area of research with notable recent contributions, including Sukys and Grima (bioRxiv, 2024), Garrido-Rodriguez et al. (PLOS Comp Biol, 2021), and Maizels (2024). Positioning DeepTX in relation to these and other relevant efforts would help readers appreciate its specific advances and contributions.

We sincerely thank you for this valuable suggestion. We agree that situating DeepTX within the broader landscape of computational approaches linking mechanistic modeling and single-cell RNA sequencing data clarifies its contributions and advances. In the revised version, we have added a dedicated paragraph to the Discussion that explicitly compares and relates DeepTX to other work in this active area.

Specifically, we note that the DeepTX research paradigm contributes to a growing line of research aiming to link mechanistic models of gene regulation with scRNA-seq data. Maizels provided a comprehensive review of computational strategies for incorporating dynamic mechanisms into single-cell transcriptomics (Maizels RJ, 2024). In this context, RNA velocity is one of the most prominent examples, as it infers short-term transcriptional trends from splicing kinetics using deterministic ODE models. However, such approaches are limited by their deterministic assumptions and cannot fully capture the stochastic nature of gene regulation. DeepTX can be viewed as an extension of this framework to stochastic modelling, explicitly addressing transcriptional bursting kinetics under DNA damage. Similarly, the approach of Sukys and Grima (Sukys A & Grima R, 2025) investigates transcriptional burst kinetics during the cell cycle, employing a stochastic age-dependent model and a neural network to infer burst parameters while correcting for measurement noise. By contrast, MIGNON integrates genomic variation data and static transcriptomic measurements into a mechanistic pathway model (HiPathia) to infer pathway-level activity changes, rather than gene-level stochastic transcriptional dynamics (Garrido-Rodriguez M et al., 2021). In this sense, DeepTX and MIGNON are complementary: DeepTX resolves burst kinetics at the single-gene level, whereas MIGNON emphasizes pathway responses to genomic perturbations, which could inspire future extensions of DeepTX that incorporate sequence-level information.

      (2) Interpretation of GO analysis

      The interpretation of the GO enrichment results in Figure 4D should be revised. While the text currently associates the enriched terms with signal transduction and cell cycle G2/M phase transition, the most significant terms relate to mitotic cell cycle checkpoint signaling. This distinction should be made clear in the main text, and the conclusions drawn from the GO analysis should be aligned more closely with the statistical results.

We sincerely appreciate this insightful comment. We have carefully re-examined the GO enrichment results shown in Figure 4D and agree that the most significantly enriched terms correspond to mitotic cell cycle checkpoint signaling and signal transduction in response to DNA damage, rather than general G2/M phase transition processes. Accordingly, we have revised the main text to highlight the biological significance of mitotic cell cycle checkpoint signaling.

Specifically, we now emphasize two key points regarding the close interconnection between DNA damage and mitotic checkpoint activation. (1) The mitotic checkpoint serves as a crucial safeguard to ensure accurate chromosome segregation and maintain genomic stability under DNA damage conditions, and its activation can influence cell fate decisions and differentiation potential (Kim EM & Burke DJ, 2008; Lawrence KS et al., 2015). (2) Sustained activation of the spindle assembly checkpoint (SAC) has been reported to induce mitotic slippage and polyploidization, which in turn may enhance the differentiation potential of embryonic stem cells (Mantel C et al., 2007). These revisions ensure that our interpretation is consistent with the statistical enrichment results and better reflects the underlying biological processes implicated by the data.

      (3) Justification for training on simulated data

      The decision to train the model on simulated data should be clearly justified. While the advantage of having access to ground-truth parameters is understood, the manuscript would benefit from a discussion of the limitations of this approach, particularly in terms of generalizability to real datasets. Moreover, it is worth noting that many annotated scRNA-seq datasets are publicly available and could, in principle, be used to complement the training strategy.

      We thank you for this insightful comment. We chose to train DeepTXsolver on simulated data because no experimental dataset currently provides genome-wide transcriptional burst kinetics with known ground truth, which is essential for supervised learning. Simulation enables us to (i) generate large, fully annotated datasets spanning the biologically relevant parameter space, (ii) expose the solver to diverse bursting regimes (e.g., low/high burst frequency, small/large burst size, unimodal/bimodal distributions), and (iii) quantitatively benchmark model accuracy, parameter identifiability, and robustness prior to deployment on real scRNA-seq data.

We acknowledge, however, that simulation-based training has inherent limitations in terms of generalizability. Real biological systems may deviate from the idealized bursting model, exhibit more complex noise structures, or display parameter distributions that differ from those in simulations. Moreover, the lack of ground-truth parameters in experimental scRNA-seq datasets prevents an absolute evaluation of inference accuracy. In future work, publicly available annotated scRNA-seq datasets could be used to complement this simulation-based training strategy and enhance generalizability. We have revised the manuscript to explicitly discuss both the rationale for using simulated data and the potential limitations of this approach.
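The simulation-based training strategy described above can be illustrated with a minimal Gillespie sketch of the classic two-state telegraph model; DeepTX itself uses a more general non-Markovian multi-step simulator, and all rate constants below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_telegraph(kon, koff, ksyn, kdeg, t_end=50.0):
    """Gillespie SSA for the two-state telegraph model; returns the mRNA count at t_end."""
    t, on, m = 0.0, 0, 0
    while t < t_end:
        rates = np.array([kon * (1 - on), koff * on, ksyn * on, kdeg * m])
        total = rates.sum()  # always > 0 here because kon > 0
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            on = 1           # promoter activation
        elif event == 1:
            on = 0           # promoter inactivation
        elif event == 2:
            m += 1           # mRNA synthesis (only while active)
        else:
            m -= 1           # mRNA degradation
    return m

# One hypothetical (parameter set -> snapshot samples) training pair
params = dict(kon=0.5, koff=1.0, ksyn=20.0, kdeg=1.0)
samples = np.array([simulate_telegraph(**params) for _ in range(300)])
```

Sweeping such parameter sets over the biologically relevant ranges yields fully annotated (parameters, count-distribution) pairs of the kind a supervised solver can be trained on.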

      (4) Benchmarking against external methods

      The performance of DeepTX is primarily compared to a prior method from the same group. To strengthen the methodological claims, it would be preferable to include benchmarking against additional established tools from the broader literature. This would offer a more objective evaluation of the performance gains attributed to DeepTX.

We thank you for this constructive suggestion. We fully agree that benchmarking DeepTX against additional established tools from the broader literature provides a more comprehensive and objective evaluation. In the revised manuscript, we have included comparative analyses with other widely used methods: nnRNA (from the Shahrezaei group; Tang W et al., 2023), txABC (from our group; Luo S et al., 2023), txBurst (from the Sandberg group; Larsson AJM et al., 2019), and txInfer (Gu J et al., 2025) (Supplementary Figure S4). The comparative results indicate that our method demonstrates superior performance in both efficiency and accuracy.

      (5) Interpretation of Figures 4-6

      The revised figures are clear and informative; however, the associated interpretations in the main text remain too strong relative to the type of analysis performed. For instance, in Figure 4, it is suggested that changes in burst size are linked to DNA damage-induced signalling cascades that affect cell cycle progression and fate decisions. While this is a plausible hypothesis, GO and GSEA analyses are correlative by nature and not sufficient to support such a mechanistic claim on their own. These analyses should be presented as exploratory, and the strength of the conclusions drawn should be tempered accordingly. Similar caution should be applied to the interpretations of Figures 5 and 6.

      We thank you for this important comment. In the revised manuscript, we have carefully moderated the interpretation of the GO and GSEA results in Figures 4, 5, and 6. Specifically, we now present these analyses as exploratory and emphasize their correlative nature, avoiding causal claims that go beyond the scope of the data. The text has been rephrased to highlight the observed associations rather than implying direct causal relationships.

      For Figure 4, we emphasize that while it is tempting to hypothesize that enhanced burst size may contribute to DNA damage-related checkpoint activation and thereby influence cell cycle progression and differentiation, our current results only indicate an association between burst size enhancement and pathways involved in DNA damage response and checkpoint signaling.

For Figure 5, we emphasize that although our GO analysis cannot establish causality, the results are consistent with an association between 5FU-induced changes in burst kinetics and pathways related to oxidative stress and apoptosis. Based on this, we propose a model outlining a potential process through which DNA damage may ultimately lead to cellular apoptosis.

      For Figure 6, we emphasize that these enrichment results suggest that high-dose 5FU treatment may be associated with processes such as telomerase activation and mitochondrial function maintenance, both of which have been implicated in cell survival and apoptosis evasion in previous experimental studies. For example, prior work indicates that hTERT translocation can activate telomerase pathways to support telomere maintenance and reduce oxidative stress, which is thought to contribute to apoptosis resistance. While our enrichment analysis cannot establish causality, the observed transcriptional bursting changes are consistent with these reported survival-associated mechanisms.

      (6) Discussion section framing

      The initial paragraphs of the discussion section make broad biological claims about the role of transcriptional bursting in cellular decision-making. While transcriptional bursting is undoubtedly relevant, the manuscript would benefit from a more cautious framing. It would be more appropriate to foreground the methodological contributions of DeepTX, and to present the biological insights as hypotheses or observations that may guide future experimental investigation, rather than as established conclusions.

We thank you for this insightful comment. We have revised the discussion to clarify and appropriately temper our claims regarding transcriptional bursting. First, we now explicitly recognize that transcriptional bursting is one of multiple contributors to cellular variability, rather than the sole or dominant factor driving cellular decision-making. Second, we have restructured the opening of the discussion to prioritize the methodological contributions of DeepTX, highlighting its strength as a framework for inferring genome-wide burst kinetics from scRNA-seq data. Finally, the biological insights derived from our analysis are now presented as correlative observations and potential hypotheses, which may inform and guide future experimental investigations, rather than as definitive mechanistic conclusions.

      Small Comments

      (1) Presentation of discrete distributions: In several figures (e.g., Figure 2B and Supplementary Figures S4, S6, and S8), the comparisons between empirical mRNA distributions and DeepTX-inferred distributions are visually represented using connecting lines, which may give the impression that continuous distributions are being compared to discrete ones. Given the focus on transcriptional bursting, a process inherently tied to discrete stochastic events, this representation could be misleading. The figure captions and visual style should be revised to clarify that all distributions are discrete and to avoid potential confusion. In general, it is recommended to avoid connecting points in discrete distributions with lines, as this can suggest interpolation or comparison with continuous distributions. This applies to Figures 2A and 2B in particular.

We thank you for this valuable suggestion. To prevent any potential misinterpretation of discrete distributions as continuous ones, we have revised the visual representation of the empirical and DeepTX-inferred mRNA distributions in Figure 2B and Supplementary Figures S4, S6, and S8. Specifically, we have replaced the line plots with step plots, which more accurately capture the discrete nature of transcriptional bursting. Additionally, we have updated the figure captions to clearly state that all distributions are discrete.

      (2) Transcription is always a multi-step process. While the manuscript aims to model additional complexity introduced by DNA damage, the current phrasing (e.g., on page 5) could be read as implying that transcription becomes multi-step only under damage conditions. This should be clarified.

      We thank you for this helpful observation. We agree that transcription is inherently a multi-step process under all conditions. To avoid any possible misunderstanding, we have revised the text to clarify this point.

Specifically, we now explain that many previous studies have employed simplified two-state models to approximate transcriptional dynamics; however, gene expression is inherently a multi-step process, a feature that cannot be neglected under conditions of DNA damage. DNA damage can slow or even stall RNA Pol II movement and cause many macromolecules to be recruited for damage repair. This affects the spatially localized behavior of the promoter, producing dwell times of promoter inactivation and activation that cannot be approximated by a simple two-state model. Our work adopts a multi-step model because it is more appropriate for capturing the additional complexity introduced by DNA damage.
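The distinction between exponential dwell times in a simple two-state model and the non-exponential (Erlang-like) dwell times produced by sequential promoter sub-steps can be illustrated with a short sketch; the step counts and rates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def dwell_times(n_steps, step_rate, n=20000):
    """Dwell time in a promoter state modeled as n_steps sequential exponential sub-steps."""
    return rng.exponential(1.0 / step_rate, size=(n, n_steps)).sum(axis=1)

one_step = dwell_times(1, 1.0)    # simple two-state model: exponential dwell, CV = 1
five_step = dwell_times(5, 5.0)   # five sub-steps, same mean dwell: Erlang, CV = 1/sqrt(5)

def cv(d):
    """Coefficient of variation of the sampled dwell times."""
    return d.std() / d.mean()
```

With several sub-steps the dwell-time distribution becomes peaked around its mean rather than monotonically decaying, which is why a single exponential off-rate cannot reproduce multi-step promoter kinetics.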

      (3) The first sentence of the discussion section overstates the importance of transcriptional bursting. While it is a key source of variability, it is not the only nor always the dominant one. Furthermore, its role in DNA damage response remains an emerging hypothesis rather than a general principle. The claims in this section should be moderated accordingly.

      We thank you for this valuable feedback. In the revised discussion, we have moderated the statements in the opening paragraph to better reflect the current understanding. Specifically, we now acknowledge that transcriptional bursting represents one of multiple sources of variability and is not always the dominant contributor. In addition, we have reframed the role of transcriptional bursting in DNA damage response as an emerging hypothesis, rather than a general principle. To further address this concern, we replaced conclusion-like statements with more cautious, hypothesis-oriented phrasing, presenting our observations as potential directions for future experimental validation.

      References

      Maizels, R.J. 2024. A dynamical perspective: moving towards mechanism in single-cell transcriptomics. Philos Trans R Soc Lond B Biol Sci 379: 20230049. DOI: https://dx.doi.org/10.1098/rstb.2023.0049, PMID: 38432314

      Sukys, A., Grima, R. 2025. Cell-cycle dependence of bursty gene expression: insights from fitting mechanistic models to single-cell RNA-seq data. Nucleic Acids Research 53. DOI: https://dx.doi.org/10.1093/nar/gkaf295, PMID: 40240003

      Garrido-Rodriguez, M., Lopez-Lopez, D., Ortuno, F.M., Peña-Chilet, M., Muñoz, E., Calzado, M.A., Dopazo, J. 2021. A versatile workflow to integrate RNA-seq genomic and transcriptomic data into mechanistic models of signaling pathways. PLoS Computational Biology 17: e1008748. DOI: https://dx.doi.org/10.1371/journal.pcbi.1008748, PMID: 33571195

      Kim, E.M., Burke, D.J. 2008. DNA damage activates the SAC in an ATM/ATR-dependent manner, independently of the kinetochore. PLoS Genet 4: e1000015. DOI: https://dx.doi.org/10.1371/journal.pgen.1000015, PMID: 18454191

Lawrence, K.S., Chau, T., Engebrecht, J. 2015. DNA damage response and spindle assembly checkpoint function throughout the cell cycle to ensure genomic integrity. PLoS Genet 11: e1005150. DOI: https://dx.doi.org/10.1371/journal.pgen.1005150, PMID: 25898113

Mantel, C., Guo, Y., Lee, M.R., Kim, M.K., Han, M.K., Shibayama, H., Fukuda, S., Yoder, M.C., Pelus, L.M., Kim, K.S., Broxmeyer, H.E. 2007. Checkpoint-apoptosis uncoupling in human and mouse embryonic stem cells: a source of karyotypic instability. Blood 109: 4518-4527. DOI: https://dx.doi.org/10.1182/blood-2006-10-054247, PMID: 17289813

      Tang, W., Jørgensen, A.C.S., Marguerat, S., Thomas, P., Shahrezaei, V. 2023. Modelling capture efficiency of single-cell RNA-sequencing data improves inference of transcriptome-wide burst kinetics. Bioinformatics 39. DOI: https://dx.doi.org/10.1093/bioinformatics/btad395, PMID: 37354494

      Luo, S., Zhang, Z., Wang, Z., Yang, X., Chen, X., Zhou, T., Zhang, J. 2023. Inferring transcriptional bursting kinetics from single-cell snapshot data using a generalized telegraph model. Royal Society Open Science 10: 221057. DOI: https://dx.doi.org/10.1098/rsos.221057, PMID: 37035293

      Larsson, A.J.M., Johnsson, P., Hagemann-Jensen, M., Hartmanis, L., Faridani, O.R., Reinius, B., Segerstolpe, A., Rivera, C.M., Ren, B., Sandberg, R. 2019. Genomic encoding of transcriptional burst kinetics. Nature 565: 251-254. DOI: https://dx.doi.org/10.1038/s41586-018-0836-1, PMID: 30602787

      Gu, J., Laszik, N., Miles, C.E., Allard, J., Downing, T.L., Read, E.L. 2025. Scalable inference and identifiability of kinetic parameters for transcriptional bursting from single cell data. Bioinformatics. DOI: https://dx.doi.org/10.1093/bioinformatics/btaf581, PMID: 41131798.

    1. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment

      This study presents valuable findings that advance our understanding of mural cell dynamics and vascular pathology in a zebrafish model of cerebral small vessel disease. The authors provide compelling evidence that partial loss of foxf2 function leads to progressive, cell-intrinsic defects in pericytes and associated endothelial abnormalities across the lifespan, leveraging powerful in vivo imaging and genetic tools. The strength of evidence could be further improved by additional mechanistic insight and quantitative or lineage-tracing analyses to clarify how pericyte number and identity are affected in the mutant model.

We thank the reviewers for their insightful comments and for the time spent reviewing the manuscript. We have strengthened the data in responding to these comments.

      Public Reviews:

      Reviewer #1 (Public review):

      The paper by Graff et al. investigates the function of foxf2 in zebrafish to understand the progression of cerebral small vessel disease. The authors use a partial loss of foxf2 (zebrafish possess two foxf2 genes, foxf2a and foxf2b, and the authors mainly analyze homozygous mutants in foxf2a) to investigate the role of foxf2 signaling in regulating pericyte biology. They find that the number of pericytes is reduced in foxf2a mutants and that the remaining pericytes display alterations in their morphologies. The authors further find that mutant animals can develop to adulthood, but that in adult animals, both endothelial and pericyte morphologies are affected. They also show that mutant pericytes can partially repopulate the brain after genetic ablation.

      (1) Weaknesses: The results are mainly descriptive, and it is not clear how they will advance the field at their current state, given that a publication on mice has already examined the loss of foxf2 phenotype on pericyte biology (Reyahi, 2015, Dev. Cell).

The Reyahi paper was the earliest report on foxf2 mutant brain pericytes and remains illuminating; the work was technically very well executed. Our manuscript expands on, and at times contradicts, their findings. We realized that we had not fully covered this in our Discussion, which has now been updated. The biggest difference between the two studies is the direction of change in pericyte numbers after foxf2 knockout, a major finding in both papers. Here it is important to understand the methodological differences. Reyahi et al. used a conditional knockout under Wnt1:Cre, which ablates foxf2 in neural crest-derived pericytes but not in mesoderm-derived pericytes, and does not affect foxf2 expression in endothelial cells. Our model is a full constitutive knockout of the gene in all brain pericytes and endothelial cells. For gain of function, Reyahi et al. used a transgenic model with a human FOXF2 BAC integrated into the mouse germline.

Both studies are important. We do not know enough about human phenotypes in patients carrying stroke-associated human FOXF2 SNVs to know the direction of change in pericyte numbers. We showed that the SNVs reduce FOXF2 gene expression in vitro (Ryu, 2022). Here we demonstrate dosage sensitivity in fish (phenotypes appear when one of the four foxf2a + foxf2b alleles is lost; Figure 1F), supporting the idea that even slight reductions of FOXF2 in humans could lead to severe brain vessel phenotypes. For this reason, our work is complementary to the previously published work and suggests that future studies should focus on understanding the role of dosage, cell autonomy, and human pericyte phenotypes with respect to FOXF2. While some experiments are parallel in mouse and fish, we go further to examine cell death and regeneration and to understand the consequences for the whole brain vasculature.

      (2) Reyahi et al. showed that loss of foxf2 in mice leads to a marked downregulation of pdgfrb expression in perivascular cells. In contrast to expectation, perivascular cell numbers were higher in mutant animals, but these cells did not differentiate properly. The authors use a transgenic driver line expressing gal4 under the control of the pdgfrb promoter and observe a reduction in pericyte (pdgfrb-expressing) cells in foxf2a mutants. In light of the mouse data, this result might be due to a similar downregulation of pdgfrb expression in fish, which would lead to a downregulation of gal4 expression and hence reduced labelling of pericytes. The authors show a reduction of pdgfrb expression also in zebrafish in foxf2b mutants (Chauhan et al., The Lancet Neurology 2016).

Reyahi detected more pericytes in the Wnt1:Cre mouse, while we detected fewer in the foxf2a (and foxf2a;foxf2b) mutants. This may reflect methodological differences. For instance, because the mouse knockout is not a constitutive Foxf2 knockout, the observed increase in pericytes may occur because mesoderm-derived pericytes proliferate more when the neural crest-derived pericytes are absent. Or does endothelial foxf2 activate pericyte proliferation when foxf2 is lost in some pericytes? It is also possible that mouse Foxf2 has a different role from its fish ortholog. Despite these differences, there are common conclusions from both models. For instance, both mouse and fish show that foxf2 controls capillary pericyte numbers, albeit in different directions. Both show hemorrhage and loss of vascular stability as a result. Both papers identify the developmental window as critical for setting up the correct numbers of pericytes.

As the reviewer suggested, it was important to test whether pdgfrb is downregulated in fish as it is in mice. To do this, we measured pdgfrb expression in foxf2 mutants using hybridization chain reaction (HCR). The results show no change in pdgfrb mRNA in foxf2a mutants in two independent experiments (Fig S3). Independently, we integrated pdgfrb transgene intensity (using a single allele of the transgene so there are no dose effects) in foxf2a mutants vs. wildtype. We found no difference (Fig S3), suggesting that pdgfrb is a reliable reporter for counting pericytes in the foxf2a knockout. The reviewer is correct that we previously showed downregulation of pdgfrb in foxf2b mutants at 4 dpf using colorimetric ISH. foxf2a and foxf2b are unlinked, independent genes (~400 million years apart in evolution) and may have different regulation.

      (3) It would be important to clarify whether, also in zebrafish, foxf2a/foxf2b mutants have reduced or augmented numbers of perivascular cells and how this compares to the data in the mouse.  

We discuss methodological differences between Reyahi and our work in point (1) above. The reduction in pericytes in foxf2a;foxf2b mutants has been previously published (Ryu, 2022, Supplemental Figure 1) and is shown again here in Supplemental Figure 2. Numbers are reduced in double mutants up to 10 dpf, suggesting no recovery. Further, in response to reviewer comments, we have quantified pericytes in the whole fish brain (Figure 3E-G) and show reduced pericytes in the adult, reduced vessel network length, and, importantly, reduced pericyte density. In aggregate, our data show pericyte reduction at 5 developmental stages from embryo through adult. The reason for the different result in the mouse is unknown and may reflect a technical difference (constitutive vs. Wnt1:Cre) or a species difference.

      (4) The authors should perform additional characterization of perivascular cells using marker gene expression (for a list of markers, see e.g., Shih et al. Development 2021) and/or genetic lineage tracing.

      This is a good point. We have added HCR analysis of additional markers. Results show co-expression of foxf2a, foxf2b, nduf4la2 and pdgfrb in brain pericytes (Fig 2, Fig S3).

      (5) The authors motivate using foxf2a mutants as a model of reduced foxf2 dosage, "similar to human heterozygous loss of FOXF2". However, it is not clear how the different foxf2 genes in zebrafish interact with each other transcriptionally. Is there upregulation of foxf2b in foxf2a mutants and vice versa? This is important to consider, as Reyahi et al. showed that foxf2 gene dosage in mice appears to be important, with an increase in foxf2 gene dosage (through transgene expression) leading to a reduction in perivascular cell numbers.

We agree that dosage is a very important concept and show phenotypes in foxf2a heterozygotes (Fig 1F). To test for potential compensation from foxf2b, we have added qPCR for foxf2b in foxf2a mutants as well as HCR of foxf2b in foxf2a mutants (Fig S3C,D). There is no change in foxf2b expression in foxf2a mutants. We address gene dosage in the Discussion.

      (6) Figures 3 and 4 lack data quantification. The authors describe the existence of vascular defects in adult fish, but no quantifiable parameters or quantifications are provided. This needs to be added.

This query was technically challenging to address, but very worthwhile. We have not seen published methods for quantifying brain pericytes along with the vascular network (certainly not in adult zebrafish), so we developed new methods for analyzing whole-brain vascular parameters of cleared adult brains (Figure S6) using a combination of segmentation methods for pericytes, endothelium, and smooth muscle. We have added another author (David Elliott), as he was instrumental in designing these methods. We find a significant decrease in vessel network length in foxf2a mutants at 3 and 6 months (Figures 3F and 4G). Similarly, we show a lower number of brain pericytes in foxf2a mutants (Figure 3E). Finally, we added whole-brain analysis of smooth muscle coverage (Figure 4) and show no change in vSMC number or vessel coverage at 5 and 10 dpf or in the adult, respectively, pointing to pericytes being the cells most affected. Thank you, this query pushed us in a very productive direction. These methods will be extremely useful in the future!
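The core quantities described here (total vessel network length and pericyte density along that network) can be illustrated in a few lines once segmented vessels have been reduced to traced centerlines. The segment coordinates, soma count, and helper function below are hypothetical examples, not the authors' actual pipeline:

```python
import numpy as np

# Hypothetical traced centerlines: each vessel segment is an (N, 3) array of
# 3D points (in micrometers), as might be produced by skeletonizing a
# segmented vascular mask of a cleared brain.
segments = [
    np.array([[0, 0, 0], [100, 0, 0], [200, 50, 0]], dtype=float),
    np.array([[200, 50, 0], [250, 50, 40]], dtype=float),
]

def network_length_um(segments):
    """Total vessel network length: sum of point-to-point distances."""
    return sum(np.linalg.norm(np.diff(seg, axis=0), axis=1).sum()
               for seg in segments)

total_um = network_length_um(segments)
n_pericytes = 12  # hypothetical soma count from nuclear segmentation
density = n_pericytes / (total_um / 1000.0)  # pericytes per mm of vessel
```

With traced centerlines in hand, a drop in either the length sum or the density ratio between genotypes is directly interpretable, which is why reporting density (not just counts) matters when the network itself shrinks.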

      (7) The analysis of pericyte phenotypes and morphologies is not clear. On page 6, the authors state: "In the wildtype brain, adult pericytes have a clear oblong cell body with long, slender primary processes that extend from the cytoplasm with secondary processes that wrap around the circumference of the blood vessel." Further down on the same page, the authors note: "In wildtype adult brains, we identified three subtypes of pericytes, ensheathing, mesh and thin-strand, previously characterized in murine models." In conclusion, not all pericytes have long, slender primary processes, but there are at least three different sub-types? Did the authors analyze how they might be distributed along different branch orders of the vasculature, as they are in the mouse?

We have reworded the text on pages 5-6 to make clearer that embryonic pericytes are thin-strand only. Additional pericyte subtypes develop later and are seen in the mature vasculature of the adult. We could not find a way to accurately analyze pericyte subtypes in the adult brain: the imaging analysis used somas to count pericytes, as machine-learning algorithms have been developed to count nuclei but not to analyze processes.

      (8) Which type of pericyte is affected in foxf2a mutant animals? Can the authors identify the branch order of the vasculature for both wildtype and mutant animals and compare which subtype of pericyte might be most affected? Are all subtypes of pericytes similarly affected in mutant animals? There also seems to be a reduction in smooth muscle cell coverage.

Please see the response to (7) about pericyte subtypes. In response to the reviewer’s query, we have now analyzed vSMCs in the embryonic and adult brain. In the embryonic brain we see no statistical differences in vSMC number at 5 and 10 dpf (Figure 4). In the adult, vSMC length (total length of vSMCs in a brain) and vSMC coverage (proportion of brain vessels with vSMCs) are not significantly different. These data are important because they suggest that foxf2a plays a greater role in pericytes than in vSMCs.

      (9) Regarding pericyte regeneration data (Figure 7): Are the values in Figure 7D not significantly different from each other (no significance given)?

In any graph where significance bars are missing, the comparisons were not significant; the bars were omitted for clarity. We have stated this in the statistical methods.

      (10) In the discussion, the authors state that "pericyte processes have not been studied in zebrafish".

Ando et al. (Development 2016) studied pericyte processes in early zebrafish embryos, and Leonard et al. (Development 2022) studied zebrafish pericytes and their processes in the developing fin. We apologize; this was not meant to say that pericyte processes had never been studied before, and we have reworded the sentence to make its intent clear. We were trying to emphasize that we are the first to quantify processes at different stages, especially in foxf2 mutants. Processes change morphology over development, especially after 5 dpf, something that our data capture. Our images are of stages that have not been previously characterized. We added a reference to Mae et al., who found similar process-length changes in a mouse knockout of a different gene, and to Leonard, who previously showed overlap of processes in a different context in fish.

      Reviewer #2 (Public review):

      Summary:

This study investigates the developmental and lifelong consequences of reduced foxf2 dosage in zebrafish, a gene associated with human stroke risk and cerebral small vessel disease (CSVD). The authors show that a ~50% reduction in foxf2 function through homozygous loss of foxf2a leads to a significant decrease in brain pericyte number, along with striking abnormalities in pericyte morphology, including enlarged soma and extended processes, during larval stages. These defects are not corrected over time but instead persist and worsen with age, ultimately affecting the surrounding endothelium. The study also makes an important contribution by characterizing pericyte behavior in wild-type zebrafish using a clever pericyte-specific Brainbow approach, revealing novel interactions such as pericyte process overlap not previously reported in mammals.

      Strengths:

This work provides mechanistic insight into how subtle, developmental changes in mural cell biology and coverage of the vasculature can drive long-term vascular pathology. The authors make strong use of zebrafish imaging tools, including longitudinal analysis in transgenic lines to follow pericyte number and morphology over larval development, and then apply tissue clearing and whole-brain imaging at 3 and 11 months to further dissect the longitudinal effects of foxf2a loss. The ability to track individual pericytes in vivo reveals cell-intrinsic defects and process degeneration with high spatiotemporal resolution. Their use of a pericyte-specific Zebrabow line also allows, for the first time, detailed visualization of pericyte-pericyte interactions in the developing brain, highlighting structural features and behaviors that challenge existing models based on mouse studies. Together, these findings make the zebrafish a valuable model for studying the cellular dynamics of CSVD.

      Weaknesses:

      (11) While the findings are compelling, several aspects could be strengthened. First, quantifying pericyte coverage across distinct brain regions (forebrain, midbrain, hindbrain) would clarify whether foxf2a loss differentially impacts specific pericyte lineages, given known regional differences in developmental origin, with forebrain pericytes being neural crest-derived and hindbrain pericytes being mesoderm-derived.

In recently published work from our lab, we showed that both neural crest and mesodermal cells contribute to pericytes in both the mid- and hindbrain, and we could not confirm earlier work suggesting more rigid compartmental origins (Ahuja, 2024). In that paper we noted that lineage experiments are often limited by small n's, which is why this may not have been discovered before. This makes us skeptical that counting pericytes in different regions will allow us to interpret data about neural crest and mesoderm. Further, Ahuja 2024 shows through single-cell sequencing that pericyte intermediate progenitors from both mesoderm and neural crest are indistinguishable at 30 hpf and have converged on a common phenotype.

      (12) Second, measuring foxf2b expression in foxf2a mutants would better support the interpretation that total FOXF2 dosage is reduced in a graded fashion in heterozygote and homozygote foxf2a mutants.

      We have done both qPCR for foxf2b in foxf2a mutants and HCR (quantitative ISH). This is now reported in Fig S3. 

      (13) Finally, quantifying vascular density in adult mutants would help determine whether observed endothelial changes are a downstream consequence of prolonged pericyte loss. Correlating these vascular changes with local pericyte depletion would also help clarify causality.

      We have added this data to Figure 3 and 4. Please also see response (6).

      Reviewer #3 (Public review):

      Summary:

      The goal of the work by Graff et al. is to model CSVD in the zebrafish using foxf2a mutants. The mutants show loss of cerebral pericyte coverage that persists through adulthood, but it seems foxf2a does not regulate the regenerative capacity of these cells. The findings are interesting and build on previous work from the group. Limitations of the work include little mechanistic insight into how foxf2a alters pericyte recruitment/differentiation/survival/proliferation in this context, and the overlap of these studies with previous work in fox2a/b double mutants. However, the data analysis is clean and compelling, and the findings will contribute to the field.

      (14) Please make Figures 5C and 5E red-green colorblind friendly.

      Thank you. We have changed the colors to light blue and yellow to be colorblind friendly.

      Reviewer #3 (Recommendations for the authors):

      (15) I'm not sure this reviewer totally agrees with the assessment that foxf2a loss of function, while foxf2b remains normal, is the same as FOXF2 heterozygous loss of function in humans. The discussion of the gene dosage needs to be better framed, and the authors should carry out qPCR to show that foxf2b levels are not altered in the foxf2a mutant background.

      We have added data on foxf2b expression in foxf2a mutants to Fig S3. We have updated the results.

      (16) Figure 4/SF7- is the aneurysm phenotype derived from the ECs or pericytes? Cell-type-specific rescues would be interesting to determine if phenotypes are rescued, especially the developmental phenotypes (it is appreciated that carrying out rescue experiments until adulthood is complex). When is the earliest time point that aneurysm-like structures are seen?

This is a fascinating question, especially as we show that endothelial cells (vessel network length) are affected in the adult mutants. The foxf2a mutants that we work with here are constitutive knockouts. While a strategy to rescue foxf2a in specific lineages is being developed in the laboratory, this will require a multi-generation breeding effort to get drivers, transgenes, and mutants on the same background, and these fish are not currently available. Thank you for this comment; it is something we want to follow up on.

      (17) Figure 5 - This is very nice analysis.

      Thank you! We think it is informative too.

      (18) Figure 6 - needs to contain control images

      We have added wildtype images to figure 6A.

      (19) Figure 7- vessel images should be shown to demonstrate the specificity of NTR treatment to the pericytes.

      We have added the vessel images to Figure 7. We apologize for the omission.

Author response:

      The following is the authors’ response to the previous reviews

      Public Reviews:

      Reviewer #1 (Public review):

      One possible remaining conceptual concern that might require future work is determining whether STN primarily mediates higher-level cognitive avoidance or if its activation primarily modulates motor tone.

      Our results using viral and electrolytic lesions (Fig. 11) and optogenetic inhibition of STN neurons (Fig. 10) show that signaled active avoidance is virtually abolished, and this effect is reproduced when we selectively inhibit STN fibers in the midbrain (Fig. 12). Inhibition of STN projections in either the substantia nigra pars reticulata (SNr) or the midbrain reticular tegmentum (mRt) eliminates cued avoidance responses while leaving escape responses intact. Importantly, mice continue to escape during US presentation after lesions or during photoinhibition, demonstrating that basic motor capabilities and the ability to generate rapid defensive actions are preserved.

      These findings argue against the idea that STN’s role in avoidance reflects a nonspecific suppression or facilitation of motor tone, even if the STN also contributes to general movement control. Instead, they show that STN output is required for generating “cognitively” guided cued actions that depend on interpreting sensory information and applying learned contingencies to decide when to act. Thus, while STN activity can modulate movement parameters, the loss-of-function results point to a more selective role in supporting cued, goal-directed avoidance behavior rather than a general adjustment of motor tone.

      Reviewer #2 (Public review):

      All previous weaknesses have been addressed. The authors should explain how inhibition of the STN impairing active avoidance is consistent with the STN encoding cautious action. If 'caution' is related to avoid latency, why does STN lesion or inhibition increase avoid latency, and therefore increase caution? Wouldn't the opposite be more consistent with the statement that the STN 'encodes cautious action'?

      The reviewer’s interpretation treats any increase in avoidance latency as evidence of “more caution,” but this holds only when animals are performing the avoidance behavior normally. In our intact animals, avoidance rates remain high across AA1 → AA2 → AA3, and the active avoidance trials (CS1) used to measure latency are identical across tasks (e.g., in AA2 the only change is that intertrial crossings are punished). Under these conditions, changes in latency genuinely reflect adjustments in caution, because the behavior itself is intact, actions remain tightly coupled to the cue, and the trials are identical.

      This logic does not apply when STN function is disrupted. STN inhibition or lesions reduce avoidance to near chance levels; the few crossings that do occur are poorly aligned to the CS and many likely reflect random movement rather than a cued avoidance response. Once performance collapses, latency can no longer be assumed to reflect the same cognitive process. Thus, interpreting longer latencies during STN inactivation as “more caution” would be erroneous, and we never make that claim.

      A simple analogy may help clarify this distinction. Consider a pedestrian deciding when to cross the street after a green light. If the road is deserted (like AA1), the person may step off the curb quickly. If the road is busy with many cars that could cause harm (like AA2), they may wait longer to ensure that all cars have stopped. This extra hesitation reflects caution, not an inability to cross. However, if the pedestrian is impaired (e.g., cannot clearly see the light, struggles to coordinate movements, or cannot reliably make decisions), a delayed crossing would not indicate greater caution—it would reflect a breakdown in the ability to perform the behavior itself. The same principle applies to our data: we interpret latency as “caution” only when animals are performing the active avoidance behavior normally, success rates remain high, and the trial rules are identical. Under STN inhibition or lesion, when active avoidance collapses, the latency of the few crossings that still occur can no longer be interpreted as reflecting caution. We have added these points to the Discussion.

      Reviewer #3 (Public review):

      Original Weaknesses:

      I found the experimental design and presentation convoluted and some of the results over-interpreted.

      We appreciate the reviewer’s comment, but the concern as stated is too general for us to address in a concrete way. The revised manuscript has been substantially reorganized, with simplified terminology, streamlined figures, and removal of an entire set of experiments to avoid over-interpretation. We are confident that the experimental design and results are now presented clearly and without extrapolation beyond the data. If there are specific points the reviewer finds convoluted or over-interpreted, we would be happy to address them directly.

As presented, I don't understand this idea that delayed movement is necessarily indicative of cautious movements. Is the distribution of responses multi-modal in a way that might support this idea, or do the authors simply take a normal distribution and assert that the slower responses represent 'caution'? Even if responses are multi-modal and clearly distinguished by 'type', why should readers think that delayed responses imply cautious responding instead of, say: habituation or sensitization to cue/shock; variability in attention, motivation, or stress; or mere uncertainty, which seems plausible given what I understand of the task design, where the same mice are repeatedly tested in changing conditions. This relates to a major claim (i.e., in the title).

      We appreciate the reviewer’s question and address each component directly.

      (1) What we mean by “caution” and how it is operationalized

In our study, caution is defined operationally as a systematic increase in avoidance latency when the behavioral demand becomes higher, while the trial structure and required response remain unchanged. Specifically, CS1 trials are identical in AA1, AA2, and AA3. Thus, when mice take longer to initiate the same action under more demanding contexts, the added time reflects additional evaluation before acting—consistent with long-established interpretations of latency shifts in cognitive psychology (see papers by Donders, Sternberg, and Posner) and interpretations of deliberation time in the speed-accuracy tradeoff literature.

(2) Why this interpretation does not rely on multi-modal response distributions

We do not claim that “cautious” responses form a separate mode in the latency distribution. The distributions are unimodal, and caution is inferred from condition-dependent shifts in these distributions across identical trials, not from the existence of multiple peaks (see Zhou et al., 2022). Latency shifts across conditions with identical trial structure are widely used as behavioral indices of deliberation or caution.
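As a toy illustration of this logic (all numbers below are synthetic, not the study's data): two latency distributions can share the same unimodal shape yet differ systematically in central tendency, and it is that whole-distribution shift, rather than a second mode, that is read as caution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic CS1 avoidance latencies (seconds): identical unimodal shape in
# both tasks, shifted toward longer latencies in the more demanding context.
aa1 = rng.normal(1.8, 0.4, 500)  # AA1-like condition
aa2 = rng.normal(2.3, 0.4, 500)  # AA2-like condition

# The "caution" readout is the condition-dependent shift of the whole
# distribution, not the appearance of a second mode.
shift = np.median(aa2) - np.median(aa1)
```

In this sketch the shift is recovered from medians, which are robust to the skew typical of latency data; the key point is that no bimodality is required to detect it.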

      (3) Why alternative explanations (habituation/sensitization, motivation, attention, stress, uncertainty) do not account for these latency changes

      Importantly, nothing changes in CS1 trials between AA1 and AA2 with respect to the cue, shock, or required response. Therefore:

      - Habituation/sensitization to the cue or shock cannot explain the latency shift (the stimuli and trial type are unchanged). We have previously examined cue-evoked orienting responses and their habituation in detail (Zhou et al., 2023), and those measurements are dissociable from the latency effects described here.

      - Motivation or attention are unlikely to change selectively for identical CS1 trials when the task manipulation only adds a contingency to intertrial crossings.

      - Uncertainty also does not increase for CS1 trials, they remain fully predictable and unchanged between conditions.

      - Stress is too broad a construct to be meaningful unless clearly operationalized; moreover, any stress differences that arise from task structure would covary with caution rather than replace the interpretation.

      (4) Clarifying “types” of responses

      The reviewer’s question about “response types” appears to conflate behavioral latencies with the neuronal response “types” defined in the manuscript. The term “type” in this paper refers to neuronal activation derived from movement-based clustering, not to distinct behavioral categories of avoidance, which we term modes.

      In sum, we interpret increased CS1 latency as “caution” only when performance remains intact and trial structure is identical between conditions; under those criteria, latency reliably reflects additional cognitive evaluation before acting, rather than nonspecific changes in sensory processing, motivation, etc.

      Related to the last, I'm struggling to understand the rationale for dividing cells into 'types' based their physiological responses in some experiments.

There is longstanding precedent in systems neuroscience for classifying neurons by their physiological response patterns, because neurons that respond similarly often play similar functional roles. For example, place cells, grid cells, and direction cells in vivo, and regular-spiking, burst-firing, and tonic-firing neurons in vitro, are all defined by characteristic activity patterns in response to stimuli rather than by anatomy or genetics alone. In the same spirit, our classifications simply reflect clusters of neurons that exhibit similar ΔF/F dynamics around behaviorally relevant events, such as movement sensitivity or avoidance modes. This is a standard analytic approach used in many studies. Thus, our rationale is not arbitrary: the “classes” and “types” arise from data-driven clustering of physiological responses, consistent with widespread practice, and they help reveal functional distinctions within the STN that would otherwise remain obscured.
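A minimal sketch of this kind of data-driven clustering follows. The traces are synthetic and the plain k-means helper (with deterministic initialization, one seed trace per putative type) is an illustrative assumption, not the paper's analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic event-aligned dF/F traces: 20 "neurons" x 50 time bins.
# Half are activated around the event and half are suppressed.
t = np.linspace(-1.0, 1.0, 50)
activated = np.exp(-(t - 0.2) ** 2 / 0.05)
suppressed = -0.8 * np.exp(-t ** 2 / 0.1)
traces = np.vstack(
    [activated + 0.1 * rng.standard_normal(50) for _ in range(10)]
    + [suppressed + 0.1 * rng.standard_normal(50) for _ in range(10)]
)

def kmeans(X, init_idx, iters=20):
    """Plain k-means on whole traces; centers seeded from given rows."""
    centers = X[list(init_idx)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(0)
                            for j in range(len(centers))])
    return labels

labels = kmeans(traces, init_idx=(0, 10))
```

Neurons sharing a response profile land in the same cluster, which is the sense in which "types" emerge from the physiology rather than being imposed a priori.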

      In several figures the number of subjects used was not described. This is necessary. Also necessary is some assessment of the variability across subjects.

      All the results described include the number of animals. To eliminate uncertainty, we now also include this information in figure legends.

The only measure of error shown in many figures relates to trial-to-trial or event variability, which is minimal because in many cases it appears that hundreds of trials may have been averaged per animal; but this doesn't provide a strong view of biological variability (i.e., are results consistent across animals?).

The concern appears to stem from a misunderstanding of what the mixed-effects models quantify. The figure panels often show session-averaged traces for clarity, but all statistical inferences in the paper are made at the level of animals, not trials. Mixed-effects modeling is explicitly designed for hierarchical datasets such as ours, where many trials are nested within sessions, which are themselves nested within animals.

      In our models, animal is the clustering (random) factor, and sessions are nested within animals, so variability across animals is directly estimated and used to compute the population-level effects. This approach is not only appropriate but is the most stringent and widely recommended method for analyzing behavioral and neural data with repeated measures. In other words, the significance tests and confidence intervals already fully incorporate biological variability across animals.

      Thus, although hundreds of trials per animal may be illustrated for visualization, the inferences reflect between-animal consistency, not within-animal trial repetition. The fact that the mixed-effects results are robust across animals supports the biological reliability of the findings.
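The hierarchical logic that a mixed-effects model formalizes can be sketched with a simple two-stage aggregation (synthetic numbers; the actual analysis fits mixed-effects models rather than averaging, but the nesting and the n = animals principle are the same):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical hierarchy: 5 animals x 3 sessions x 40 trials of latency (s).
n_animals, n_sessions, n_trials = 5, 3, 40
animal_offsets = rng.normal(0.0, 0.3, n_animals)  # between-animal variability
latencies = (2.0 + animal_offsets[:, None, None]
             + rng.normal(0.0, 0.5, (n_animals, n_sessions, n_trials)))

# Aggregate respecting the nesting: trials -> session means -> animal means.
session_means = latencies.mean(axis=2)      # shape (animals, sessions)
animal_means = session_means.mean(axis=1)   # shape (animals,)

# The population estimate and its error use n = animals, not n = trials,
# so between-animal (biological) variability drives the inference.
grand_mean = animal_means.mean()
sem_across_animals = animal_means.std(ddof=1) / np.sqrt(n_animals)
```

Averaging hundreds of trials per animal shrinks within-animal noise but cannot shrink the between-animal term, which is why the effective sample size for inference is the number of animals.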

      It is not clear if or how spread of expression outside of target STN was evaluated, and if or how or how many mice were excluded due to spread or fiber placements. Inadequate histological validation is presented and neighboring regions that would be difficult to completely avoid, such as paraSTN may be contributing to some of the effects.

The STN is a compact structure with clear anatomical boundaries, and our injections were rigorously validated to ensure targeting specificity. As detailed in the Methods, every mouse underwent histological verification, and injections were quantified using the Brain Atlas Analyzer app (available on OriginLab), which we developed to align serial sections to the Allen Brain Atlas. This approach provides precise, slice-by-slice confirmation of viral spread. We have performed thousands of AAV injections and probe implants in our lab, refining over the years highly reliable stereotaxic procedures with multiple depth and angle checks and tools. For this study specifically, fewer than 10% of mice were excluded due to off-target expression or fiber/lesion placement. None of the included cases showed spread into adjacent structures.

      Regarding paraSTN: anatomically, paraSTN is a very small extension contiguous with STN. Our study did not attempt to dissociate subregions within STN, and the viral expression patterns we report fall within the accepted boundaries of STN. Importantly, none of our photometry probes or miniscope lenses sampled paraSTN, so contributions from that region are extremely unlikely to account for any of our neural activity results.

      Finally, our paper employs five independent loss-of-function approaches—optogenetic inhibition of STN neurons, selective inhibition of STN projections to the midbrain (in two sites: SNr and mRt), and STN lesions (electrolytic and viral). All methods converge on the same conclusion, providing strong evidence that the effects we report arise from manipulation of STN itself rather than from neighboring regions.

      Raw example traces are not provided.

We do not think raw traces are useful here. All figures contain average traces to reflect the average activity of the estimated populations, which are already clustered by class and type.

      The timeline of the spontaneous movement and avoidance sessions were not clear, nor the number of events or sessions per animal and how this was set. It is not clear if there was pre-training or habituation, if many or variable sessions were combined per animal, or what the time gaps between sessions was, or if or how any of these parameters might influence interpretation of the results.

As noted, we have enhanced the description of the sessions, including the number of animals and sessions, which are daily and equal in number per animal within each group of experiments. The sessions are part of the random effects in the model. In addition, we now include schematics to facilitate understanding of the procedures.

      Comments on revised version:

The authors removed the optogenetic stimulation experiments, but then also added a lot of new analyses. Overall, the scope of their conclusions is essentially unchanged. Part of the eLife model is to leave it to the authors' discretion how they choose to present their work. But my overall view of it is unchanged. There are elements that I found clear, well executed, and compelling, but other elements that I found difficult to understand and where I could not follow or concur with their conclusions.

      We respectfully disagree with the assertion that the scope of our conclusions remains unchanged. The revised manuscript differs in several fundamental ways:

      (1) Removal of all optogenetic excitation experiments

      These experiments were a substantial portion of the original manuscript, and their removal eliminated an entire set of claims regarding the causal control of cautious responding by STN excitation. The revised manuscript no longer makes these claims.

(2) Addition of analyses that directly address the reviewers’ central concerns

The new analyses using mixed-effects modeling, window-specific covariates, and movement/baseline controls were added precisely because reviewers requested clearer dissociation of sensory, motor, and task-related contributions. These additions changed not only the presentation but the interpretation of the neural signals. We now conclude that STN encodes movement, caution, and aversive signals in separable ways—not that it exclusively or causally regulates caution.

      (3) Clear narrowing of conclusions

      Our current conclusions are more circumscribed and data-driven than in the original submission. For example, we removed all claims that STN activation “controls caution,” relying instead on loss-of-function data showing that STN is necessary for performing cued avoidance—not for generating cautious latency shifts. This is a substantial conceptual refinement resulting directly from the review process.

      (4) Reorganization to improve clarity

      Nearly every section has been restructured, including terminology (mode/type/class), figure organization, and explanations of behavioral windows. These revisions were implemented to ensure that readers can follow the logic of the analyses.

      We appreciate the reviewer’s recognition that several elements were clear and compelling. For the remaining points they found difficult to understand, we have addressed each one in detail in the response and revised the manuscript accordingly. If there are still aspects that remain unclear, we would welcome explicit identification of those points so that we can clarify them further.

      Recommendations for the authors:

      Reviewer #2 (Recommendations for the authors):

      (1) Show individual data points on bar plots

      - partially addressed. Individual data points are still not shown.

Wherever feasible, we display individual data points (e.g., Figures 1 and 2) to convey variability directly. However, in cases where figures depict hundreds of paired (repeated-measures) data points, showing all points without connecting them would not be appropriate, while linking them would make the figures visually cluttered and uninterpretable. All plots and traces include measures of variability (SEM), and the raw data will be shared on Dryad. When error bars are not visible, they are smaller than the trace thickness or bar line—for example, in Figure 5B, the black circles and orange triangles include error bars, but they are smaller than the symbol size.

      Also, to minimize visual clutter, only a subset of relevant comparisons is highlighted with asterisks, whereas all relevant statistical results, comparisons, and mouse/session numbers are fully reported in the Results section, with statistical analyses accounting for the clustering of data within subjects and sessions.

      (2) The active avoidance experiments are confusing when they are introduced in the results section. More explanation of what paradigms were used and what each CS means at the time these are introduced would add clarity. For example AA1, AA2 etc are explained only with references to other papers, but a brief description of each protocol and a schematic figure would really help.

      - partially addressed. A schematic figure showing the timeline would still be helpful.

As suggested, we have added an additional panel to Fig. 5A with a schematic describing the AA1-3 tasks. In addition, the avoidance protocols are described briefly but clearly in the Results section (second paragraph of “STN neurons activate during goal-directed avoidance contingencies”) and in greater detail in the Methods section. As stated, these tasks were conducted sequentially: mice underwent daily sessions, learnt each task within 1-2 sessions, and completed an equal number of sessions per task (7 per task); the resulting data were combined and clustered by mouse/session in the statistical models. All relevant procedural information has been included in these sections.

      (3) How do the Class 1, 2, 3 avoids relate to Class 1 , 2, 3 neural types established in Figure 3? It seems like they are not related, and if that is the case they should be named something different from each other to avoid confusion.

      -not sufficiently addressed. The new naming system of neural 'classes' and 'types' helps with understanding that these are completely different ways of separating subpopulations within the STN. However, it is still unclear why the authors re-type the neurons based on their relation to avoids, when they classify the neurons based on their relationship to speed earlier. And it is unclear whether these neural classes and neural types have anything to do with each other. Are the neural Types related to the neural classes in any way? and what is the overlap between neural types vs classes? Which separation method is more useful for functionally defining STN populations?

      The remaining confusion stems from treating several independent analyses as if they were different versions of the same classification. In reality, each analysis asks a distinct question, and the resulting groupings are not expected to overlap or correspond. We clarify this explicitly below.

      - Movement onset neuron classes (Class A, B, C; Fig. 3):

      These classes categorize neurons based on how their ΔF/F changes around spontaneous movement onset. This analysis identifies which neurons encode the initiation and direction of movement. For instance, Class B neurons (15.9%) were inhibited as movement slowed before onset but did not show sharp activation at onset, whereas Class C neurons (27.6%) displayed a pronounced activation time-locked to movement initiation. Directional analyses revealed that Class C neurons discharged strongly during contraversive turns, while Class B neurons showed a weaker ipsiversive bias. Because neurons were defined per session and many of these recordings did not include avoidance-task sessions, these movement-onset classes were not used in the avoidance analyses.

      - Movement-sensitivity neuron classes (Class 1, 2, 3, 4; Fig. 7):

      These classes categorize neurons based on the cross-correlation between ΔF/F and head speed, capturing how each neuron’s activity scales with movement features across the entire recording session. This analysis identifies neurons that are strongly speed-modulated, weakly speed-modulated, or largely insensitive to movement. These movement-sensitivity classes were then carried forward into the avoidance analyses to ask how neurons with different kinematic relationships participate during task performance; for example, whether neurons that are insensitive to movement nonetheless show strong activation during avoidance actions.
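To illustrate the logic of this classification, the following is a minimal numpy sketch that assigns a neuron to a class from the peak of the cross-correlation between its ΔF/F trace and head speed. The lag range and the correlation thresholds are hypothetical choices for illustration; the paper's exact parameters may differ.

```python
import numpy as np

def classify_by_speed_correlation(dff, speed, max_lag=50,
                                  strong=0.5, weak=0.2):
    """Assign one neuron to a movement-sensitivity class from the peak
    cross-correlation between its dF/F trace and head speed.
    Thresholds and lag range are hypothetical, for illustration only."""
    z = lambda x: (x - np.mean(x)) / np.std(x)
    a, b = z(np.asarray(dff, float)), z(np.asarray(speed, float))
    n = len(a)
    xc = []
    for lag in range(-max_lag, max_lag + 1):
        s = slice(max(0, -lag), n - max(0, lag))   # dF/F segment
        t = slice(max(0, lag), n - max(0, -lag))   # speed segment
        xc.append(np.dot(a[s], b[t]) / n)
    peak = max(xc, key=abs)                        # signed peak correlation
    if peak >= strong:
        return 1   # strongly speed-activated
    if peak <= -strong:
        return 2   # strongly speed-inhibited
    if abs(peak) >= weak:
        return 3   # weakly speed-modulated
    return 4       # speed-insensitive
```

A neuron whose trace tracks speed closely would land in class 1, an anti-correlated neuron in class 2, and one uncorrelated with speed in class 4.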

- Avoidance modes (Mode 1, 2, 3; Fig. 8):

Here we classify actions, not neurons. K-means clustering is applied to the movement-speed time series during CS1 active avoidance trials only, which allows us to identify distinct action modes or variants—fast-onset versus delayed avoidance responses. This action-based classification ensures that we compare neural activity across identical movements, eliminating a major confound in studies that do not explicitly separate action variants. First, we examine how population activity differs across these avoidance modes, reflecting neural encoding of the distinct actions themselves. Second, within each mode, we then classify neurons into “types,” which simply describes how different neurons activate during that specific avoidance action (as noted next).
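As an illustration of this trial-level clustering, the sketch below applies a basic Lloyd's k-means to trial-wise speed traces. The deterministic farthest-point seeding and k=3 default are illustrative choices, not necessarily the settings used in the paper.

```python
import numpy as np

def cluster_avoidance_modes(speed_trials, k=3, n_iter=50):
    """Group trial-wise speed time series into k action 'modes' with a
    basic Lloyd's k-means (illustrative sketch)."""
    X = np.asarray(speed_trials, float)
    # farthest-point initialisation: start from trial 0, then repeatedly
    # take the trial farthest from all centers chosen so far
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(n_iter):
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)               # mode label of each trial
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Because labels are assigned per trial, the same neuron can contribute trials to several modes, which is why the mode labels are properties of actions rather than of cells.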

- Neuron activation types within each mode (Type a, b, c; Fig. 9):

This analysis extends the mode-based approach by classifying neuronal activation patterns only within each specific avoidance mode. For each mode, we apply k-means clustering to the ΔF/F time series to identify three activation types—e.g., neurons showing little or no response, neurons showing moderate activation, and neurons showing strong or sharply timed activation. Because all trials within a mode have identical movement profiles, these activation types capture the variability of neural responses to the same avoidance behavior. Importantly, these activation “types” (a, b, c) are not global neuron categories. They do not correspond to, nor are they intended to map onto, the movement-based neuron classes defined earlier. Instead, they describe how neurons differ in their activation during a particular behavioral mode—that is, within a specific set of behaviorally matched trials. Because modes are defined at the trial level, the neurons contributing to each mode can differ: some neurons have trials belonging to one mode, others to two or all three. Thus, Type a/b/c groupings are not fixed properties of neurons. To prevent confusion, we refer to them explicitly as neuronal activation types, emphasizing that they characterize mode-specific response patterns rather than global cell identities.

In conclusion, the categorizations serve entirely different analytical purposes and should not be interpreted as competing classifications. The mode-specific “types” do not reclassify or replace the movement-sensitivity classes; they capture how neurons differ within a single, well-defined avoidance action, while the movement classes reflect how neurons relate to movements in general. Each classification addresses a different set of questions, and overlap between them is not expected.

      To make this as clear as possible we added the following paragraph to the Results:  

“To avoid confusion between analyses, it is important to note that the movement-sensitivity classes defined here (Class 1–4; Fig. 7) are conceptually distinct from both the movement-onset classes (Class A–C; Fig. 3) and the neuronal activation “types” introduced later in the avoidance-mode analysis. The Class 1–4 grouping reflects how neurons relate to movement across the entire session, based on their cross-correlation with speed. The onset classes A–C capture neural activity specifically around spontaneous movement initiation during general exploration. In contrast, the later activation “types” are derived within each avoidance mode and describe how neurons differ in their activation patterns during identical CS1 avoidance responses. These classifications answer different questions about STN function and are not intended to correspond to one another.”

      (4) Similarly having 3 different cell types (a,b,c) in the active avoidance seems unrelated to the original classification of cell types (1,2,3), and these are different for each class of avoid. This is very confusing and it is unclear how any of these types relate to each other. Presumable the same mouse has all three classes of avoids, so there are recording from each cell during each type of avoid. So the authors could compare one cell during each avoid and determine whether it relates to movement or sound or something else. It is interesting that types a,b,c have the exact same proportions in each class of avoid, and really makes it important to investigate if these are the exact same cells or not. Also, these mice could be recorded during open field so the original neural classification (class 1, 2,3) could be applied to these same cells and then the authors can see whether each cell type defined in the open field has different response to the different avoid types. As it stands, the paper simply finds that during movement and during avoidance behaviors different cells in the STN do different things. - Similarly, the authors somewhat addressed the neural types issue, but figure 9 still has 9 different neural types and it is unclear whether the same cells that are type 'a' in mode 1 avoids are also type 'a' in mode 2 avoids, or do some switch to type b? Is there consistency between cell types across avoid modes? The authors show that type 'c' neurons are differentially elevated in mode 3 vs 2, but also describes neurons as type '2c' and statistically compare them to type '1c' neurons. Are these the same neurons? or are type 2c neurons different cells vs type 1c neurons? This is still unclear and requires clarification to be interpretable.

      We believe the remaining confusion arises from treating the different classification schemes as if they were alternative labels applied to the same neurons, when in fact they serve entirely separate analytical purposes and may not include the same neurons (see previous point). Because these classifications answer different questions, they are not expected to overlap, nor is overlap required for the interpretations we draw. It is therefore not appropriate to compare a neuron’s “type” in one avoidance mode to its movement class, or to ask whether types a/b/c across different modes are “the same cells,” since modes are defined by trial-level movement clustering rather than by neuron identity. Importantly, Types a/b/c are not intended as a new global classification of neurons; they simply summarize the variability of neuronal responses within each behaviorally matched mode. We agree that future studies could expand our findings, but that is beyond the already wide scope of the present paper. Our current analyses demonstrate a key conceptual point: when movement is held constant (via modes), STN neurons still show heterogeneous, outcome- and caution-related patterns, indicating encoding that cannot be reduced to movement alone.

      Relatedly, was the association with speed used to define each neural "class" done in the active avoidance context or in a separate (e.g. open field) experiment? This is not clear in the text.

The cross-correlation classes were derived from the entire recording session, which included open-field and avoidance-task recordings. The tasks include long intertrial periods with spontaneous movements. We found no difference in classes when we included only a portion of the session, such as the open field, or when we excluded the avoidance intervals where actions occur.

      Finally, in figure 7, why is there a separate avoid trace for each neural class? With the GRIN lens, the authors are presumably getting a sample of all cell types during each avoid, so why do the avoids differ depending on the cell type recorded?

The entire STN population is not recorded within a single session; each session contributes only a subset of neurons to the dataset. Consequently, each neural class is composed of neurons drawn from partially non-overlapping sets of sessions, each with its own movement traces. For this reason, we plot avoidance traces separately for each neural class to maintain strict within-session correspondence between neural activity and the behavior collected in the same sessions. This prevents mixing behavioral data across sessions that did not contribute neurons to that class and ensures that all neural–behavioral comparisons remain appropriately matched. We have clarified this rationale in the revised manuscript. We note that averaging movement across classes—as is often done—would obscure these distinctions and would not preserve the necessary correspondence between neural activity and behavior. This is also clarified in Results.

      (5) The use of the same colors to mean two different things in figure 9 is confusing. AA1 vs AA2 shouldn't be the same colors as light-naïve vs light signaling CS.

      -addressed, but the authors still sometimes use the same colors to mean different things in adjacent figures (e.g. the red, blue, black colors in figure 1 and figure 2 mean totally different things) and use different colors within the same figure to represent the same thing (Figure 9AB vs Figure 9CD). This is suboptimal.

Following the reviewer’s suggestion, we changed the colors in Figure 2 so that readers do not assume they are related to Fig. 1.

      In Figure 9, we changed the colors in C,D to match the colors in A,B.

      (6) The exact timeline of the optogenetics experiments should be presented as a schematic for understandability. It is not clear which conditions each mouse experienced in which order. This is critical to the interpretation of figure 9 and the reduction of passive avoids during STN stimulation. Did these mice have the CS1+STN stimulation pairing or the STN+US pairing prior to this experiment? If they did, the stimulation of the STN could be strongly associated with either punishment or with the CS1 that predicts punishment. If that is the case, stimulating the STN during CS2 could be like presenting CS1+CS2 at the same time and could be confusing. The authors should make it clear whether the mice were naïve during this passive avoid experiment or whether they had experienced STN stimulation paired with anything prior to this experiment.

      -addressed

      (7) Similarly, the duration of the STN stimulation should be made clear on the plots that show behavior over time (e.g. Figure 9E).

      -addressed

(8) There is just so much data and so many conditions for each experiment here. The paper is dense and difficult to read. It would really benefit readability if the authors put only the key experiments and key figure panels in the main text and moved much of the repetitive figure panels to supplemental figures. The addition of schematic drawings for behavioral experiment timing and for the different AA1, AA2, AA3 conditions would also really improve clarity.

      -partially addressed. The paper is still dense and difficult to read. No experimental schematics were added.

As suggested, we have now added the schematic to Fig. 5A.

      New Comments:

      (9) Description of the animals used and institutional approval are missing from the methods.

      The information on animal strains and institutional approval is already included in the manuscript. The first paragraph of the Methods section states:

      “… All procedures were reviewed and approved by the institutional animal care and use committee and conducted in adult (>8 weeks) male and female mice. …”

      Additionally, the next subsection, “Strains and Adeno-Associated Viruses (AAVs),” fully specifies all mouse lines used. We therefore believe that the required descriptions of animals and institutional approval are already present and meet standard reporting.

Author response:

Public Reviews:

      Reviewer #1 (Public review):

      Wang, Zhou et al. investigated coordination between the prefrontal cortex (PFC) and the hippocampus (Hp), during reward delivery, by analyzing beta oscillations. Beta oscillations are associated with various cognitive functions, but their role in coordinating brain networks during learning is still not thoroughly understood. The authors focused on the changes in power, peak frequencies, and coherence of beta oscillations in two regions when rats learn a spatial task over days. Inconsistent with the authors' hypothesis, beta oscillations in those two regions during reward delivery were not coupled in spectral or temporal aspects. They were, however, able to show reverse changes in beta oscillations in PFC and Hp as the animal's performance got better. The authors were also able to show a small subset of cell populations in PFC that are modulated by both beta oscillations in PFC and sharp wave ripples in Hp. A similarly modulated cell population was not observed in Hp. These results are valuable in pointing out distinct periods during a spatial task when two regions modulate their activity independently from each other.

      The authors included a detailed analysis of the data to support their conclusions. However, some clarifications would help their presentation, as well as help readers to have a clear understanding.

      (1) The crucial time point of the analysis is the goal entry. However, it needs a better explanation in the methods or in figures of what a goal entry in their behavioral task means.

We appreciate Reviewer 1 pointing out this shortcoming and will clarify the description in the revised manuscript. Each goal is located at the end of an arm and is equipped with a reward delivery unit containing an infrared sensor; the rat breaks the infrared beam when it enters the goal.

      (2) Regarding Figure 2, the authors have mentioned in the methods that PFC tetrodes have targeted both hemispheres. It might be trivial, but a supplementary graph or a paragraph about differences or similarities between contralateral and ipsilateral tetrodes to Hp might help readers.

      We will provide the requested analysis in the full revision. We saw both hemispheres had similar properties.

      (3) The authors have looked at changes in burst properties over days of training. For the coincidence of beta bursts between PFC and Hp, is there a change in the coincidence of bursts depending on the day or performance of the animal?

      We will provide the requested analysis in the full revision.

      (4) Regarding the changes in performance through days as well as variance of the beta burst frequency variance (Figures 3C and 4C); was there a change in the number of the beta bursts as animals learn the task, which might affect variance indirectly?

      The analysis we can do here is to control for differences in the number of bursts for each category (days/performance quintile) by resampling the data to match the burst count between categories.
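A control of this kind can be sketched as a simple subsampling step: each category is reduced to the smallest per-category burst count before the variance is recomputed. The function below is an illustrative sketch, not necessarily the exact resampling scheme we will use.

```python
import numpy as np

def match_burst_counts(bursts_by_day, seed=0):
    """Subsample each day's burst measurements (e.g. peak frequencies)
    down to the smallest per-day count, so that variance comparisons
    across days are not biased by unequal burst numbers.
    Illustrative control, not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    n_min = min(len(v) for v in bursts_by_day.values())
    return {day: rng.choice(np.asarray(v, float), size=n_min, replace=False)
            for day, v in bursts_by_day.items()}
```

In practice this draw would be repeated many times and the variance averaged across draws, so that no single subsample dominates the comparison.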

      (5) In the behavioral task, within a session, animals needed to alternate between two wells, but the central arm (1) was in the same location. Did the authors alternate the location of well number 1 between days to different arms? It is possible that having well number 1 in the same location through days might have an effect on beta bursts, as they would get more rewards in well number 1?

The central arm remained the same across days because we needed the animals to learn the alternation task. In our experience, an animal needs a few days to relearn the alternation rule when we switch the central arm location. Since we were interested in the initial learning process in this experiment, we kept the central arm constant. Switching the central arm location is a great suggestion for a follow-up experiment to examine the effects that a change in reward contingency has on beta bursts.

      (6) The animals did not increase their performance in the F maze as much as they increased it in the Y maze. It would be more helpful to see a comparison between mazes in Figure 5 in terms of beta burst timing. It seems like in Y maze, unrewarded trials have earlier beta bursts in Y maze compared to F maze. Also, is there a difference in beta burst frequencies of rewarded and unrewarded trials?

      We will add this analysis in the revised manuscript.

(7) For individual cell analysis, the authors recorded from Hp and the behavioral task involved spatial learning. It would be helpful to readers if authors mention about place field properties of the cells they have recorded from. It is known that reward cells firing near reward locations have a higher rate to participate in a sharp wave ripple. Factoring in the place field properties of the cells into the analysis might give a clearer picture of the lack of modulation of HP cells by beta and sharp wave ripples.

      This is a great suggestion, and we will address this in the full revision.

      Reviewer #2 (Public review):

      We thank Reviewer 2 for their helpful comments and will address these in full in the revision. These are great suggestions to provide greater detail on the spectral and behavioral data at the goal.

      (1) When presenting the power spectra for the representative example (Figure 1), it would be appropriate to display a broader frequency band-including delta, theta, and gamma (up to ~100 Hz), rather than only the beta band.

We will show more examples of power spectra with a wider frequency range. We did examine the wider spectra and noticed that power in the beta frequency band was more prominent than in other bands.

      What was the rat's locomotor state (e.g., running speed) after entering the reward location, during which the LFPs were recorded?

We will add the time-aligned speed profile to the spectra and raw data examples. Because goal entry is defined as the time the animal breaks the infrared beam at the goal (see response to Reviewer 1), the rat would have come to a stop.
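For illustration, a time-aligned profile of this kind can be built by stacking fixed windows of the continuous speed trace around each beam-break sample; the sampling rate and window lengths below are hypothetical.

```python
import numpy as np

def peri_event_traces(signal, event_idx, fs=100.0, pre=1.0, post=3.0):
    """Stack windows of a continuous trace (e.g. head speed) around each
    goal entry, defined by the beam-break sample index. Window lengths
    are hypothetical. Events too close to the edges are dropped."""
    b, a = int(pre * fs), int(post * fs)
    windows = [np.asarray(signal, float)[i - b:i + a]
               for i in event_idx if i - b >= 0 and i + a <= len(signal)]
    return np.array(windows)       # shape: (n_events, (pre+post)*fs samples)
```

Averaging the stacked windows over the event axis then gives the mean speed profile aligned to goal entry.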

      If the rats stopped at the goal but still consumed the reward (i.e., exhibited very low running speed), theta rhythms might still occasionally occur, and sharp-wave ripples (SWRs) could be observed during rest.

      We typically find low theta power in the hippocampus after the animal reaches the goal location and as it consumes reward. Reviewer 2 is correct about occasional theta power at the goal. We have observed this but mostly before the animal leaves the goal location. We did find SWRs during goal periods. One example is shown in Fig. 7A.

      Do beta bursts also occur during navigation prior to goal entry?

We did not find consistent beta bursts in PFC or CA1 on approach to goal entry. We can provide the analyses in our full revision. In our initial exploratory analysis, we found beta bursts were most prominent after goal entry, which led us to focus on post-goal-entry beta for this manuscript. However, beta oscillations in the hippocampus during locomotion or exploration have been reported (Ahmed & Mehta, 2012; Berke et al., 2008; França et al., 2014; França et al., 2021; Iwasaki et al., 2021; Lansink et al., 2016; Rangel et al., 2015).

      It would be beneficial to display these rhythmic activities continuously across both the navigation and goal entry phases. Additionally, given that the hippocampal theta rhythm is typically around 7-8 Hz, while a peak at approximately 15-16 Hz is visible in the power spectra in Figure 1C, the authors should clarify whether the 22 Hz beta activity represents a genuine oscillation rather than a harmonic of the theta rhythm.

To ensure we fully address this concern, we can provide further spectral analysis in our revised manuscript to show that theta power in CA1 is reduced after goal entry. We were initially concerned about the possibility that the 22 Hz power in CA1 might be a harmonic rather than a standalone oscillation band. If these were harmonics of theta, we would expect to find coincident theta at the time of beta-frequency bursts. In Fig. 1B and Fig. 2A, we show examples of the raw LFP traces from CA1. Here, the detected bursts are not accompanied by visible theta-frequency activity. For PFC, we do not always see persistent theta-frequency oscillations as in CA1. In PFC, we found beta bursts were frequent and visually identifiable when examining the LFP. We provided examples of the PFC LFP (Fig. 1B, Fig. 1-1, and Fig. 2A). In these cases, we see clear beta-frequency oscillations lasting several cycles, and these are not accompanied by any theta-frequency oscillations in the LFP trace.

      (2) The authors claim that beta activity is independent between CA1 and PFC, based on the low coherence between these regions. However, it is challenging to discern beta-specific coherence in CA1; instead, coherence appears elevated across a broader frequency band (Figure 2 and Figure 2-1D). An alternative explanation could be that the uncoupled beta between CA1 and PFC results from low local beta coherence within CA1 itself.

This is a legitimate concern, and we used three methods to characterize coherence and coordination between the two regions. First, we calculated coherence for tetrode pairs for times when the animal was at goals (Fig. 2B), which provides a general estimate of coherence across frequencies but lacks temporal resolution. Second, we calculated burst-aligned coherence (Fig. 2-1), which provides temporal resolution relative to the burst but is constrained by the time-frequency resolution trade-off of the multi-taper method. Third, we quantified the timing between burst peaks (Fig. 2D), which describes timing differences, although burst peaks may not be symmetric. Thus, each method has its own caveats, but all three analyses pointed to the same conclusion.
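As a sketch of the first method, magnitude-squared coherence within a band can be computed from Welch-averaged spectra. The band edges, sampling rate, and segment length below are illustrative parameters, not the ones used in the paper (which used multi-taper estimates for the burst-aligned version).

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs=1000.0, band=(15.0, 30.0), nperseg=256):
    """Mean magnitude-squared coherence between two LFP traces within a
    frequency band (Welch-averaged). The 15-30 Hz band, sampling rate,
    and segment length are illustrative choices."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    in_band = (f >= band[0]) & (f <= band[1])
    return float(cxy[in_band].mean())
```

Two traces sharing a beta-band component yield values near 1 in that band, while independent signals stay near the 1/K bias floor set by the number of averaged segments.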

Reviewer 2 is correct in pointing out the uniformly high coherence within CA1 across the frequency range we examined. When we inspected the raw LFP across multiple tetrodes in CA1, the traces were similar to each other (Fig. 2A). This likely reflects uniformity of the LFP across recording sites in CA1, consistent with the coherence values across the frequency range (Fig. 2B). Coherence between tetrode pairs within CA1 was statistically higher across this range than between tetrode pairs in PFC (Fig. 2B and C); thus, our results are unlikely to be explained by low beta coherence within CA1 itself. The burst-aligned coherence using a multi-taper method also supports this: coherence within CA1 at the time of CA1 bursts is ~0.8-0.9.

      (3) In Figure 2-1E-F, visual inspection of the box plots reveals minimal differences between PFC-Ind and PFC-Coin/CA1-Coin conditions, despite reported statistical significance. It may be necessary to verify whether the significance arises from a large sample size.

We will include the sample sizes for each of the boxplots; these should be the same as for the power comparison in Fig. 2-1A-C. The LFP signals within a one-second window centered on the bursts are usually very similar to each other, and the multi-taper method will return high coherence values. The p-values from statistical comparisons between the boxes are corrected using the Benjamini-Hochberg method.

      (4) In Figure 3 and Figure 4, although differences in power and frequency appear to change significantly across days, these changes are not easily discernible by visual inspection. It is worth considering whether these variations are related to increased task familiarity over days, potentially accompanied by higher running speeds.

      We agree with Reviewer 2 that familiarity increases across days, and the animal is likely running faster. The analysis for Fig. 3 and 4 includes only data from periods when the animal was at the goal and was not moving. We used linear mixed effects models to quantify the relationship between power, frequency and day or behavioral quintile.
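For intuition, a simplified stand-in for this analysis is an ordinary least-squares slope computed after demeaning both variables within each animal, which removes animal-specific offsets; the actual models in the paper are richer, with random effects rather than this fixed-effects approximation.

```python
import numpy as np

def within_animal_slope(day, power, animal):
    """Slope of power vs. day after demeaning both variables within each
    animal. Removing per-animal offsets is a simplified stand-in for the
    mixed-effects models used in the paper (illustrative only)."""
    day = np.asarray(day, float).copy()
    power = np.asarray(power, float).copy()
    animal = np.asarray(animal)
    for a in np.unique(animal):
        m = animal == a
        day[m] -= day[m].mean()
        power[m] -= power[m].mean()
    # OLS slope on the demeaned (within-animal) data
    return float(np.dot(day, power) / np.dot(day, day))
```

A positive slope here indicates that power increases across days within animals, independent of baseline differences between animals.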

      (5) The stronger spiking modulation by local beta oscillations shown in Figure 6 could also be interpreted in the context of uncoupled beta between CA1 and PFC. In this analysis, only spikes occurring during beta bursts should be included, rather than all spikes within a trial. The authors should verify the dataset used and consider including a representative example illustrating beta modulation of single-unit spiking.

We agree with Reviewer 2 that the stronger modulation by local beta is another piece of evidence indicating uncoupled beta between the two regions. We appreciate this suggestion and will add examples illustrating beta modulation for single units. We want to clarify that the spikes were taken only from periods when the animal was at the goal location on each trial and do not include the running period between goals.
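Spike modulation by an oscillation of this kind is commonly quantified as the mean resultant length of spike phases. The sketch below extracts the Hilbert phase of a band-pass-filtered LFP; the 15-30 Hz band, filter order, and sampling rate are assumed values, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_phase_locking(lfp, spike_idx, fs=1000.0, band=(15.0, 30.0)):
    """Mean resultant length of spike phases relative to the beta-filtered
    LFP (Hilbert phase). Values near 1 indicate strong phase locking,
    values near 0 none. Band edges and sampling rate are illustrative."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, np.asarray(lfp, float))))
    spike_phases = phase[np.asarray(spike_idx)]
    return float(np.abs(np.mean(np.exp(1j * spike_phases))))
```

Restricting `spike_idx` to goal-period samples, as described above, ensures the statistic reflects beta modulation at the goal rather than during running.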

      (6) As observed in Figure 7D, CA1 beta bursts continue to occur even after 2.5 seconds following goal entry, when SWRs begin to emerge. Do these oscillations alternate over time, or do they coexist with some form of cross-frequency coupling?

This is a very interesting and helpful suggestion. Although we found that SWRs generally appear later than beta bursts, it is possible that the two are related on a finer timescale, pointing to coordination. Our cross-correlation analysis between PFC and CA1 beta bursts only examined the relationship on the timescale of seconds. We will show a higher time-resolution version of this analysis in the revision.

      Reviewer #3 (Public review):

      Summary:

      This paper explored the role of beta rhythms in the context of spatial learning and mPFC-hippocampal dynamics. The authors characterized mPFC and hippocampal beta oscillations, examining how their coordination and their spectral profiles related to learning and prefrontal neuronal firing. Rats performed two tasks, a Y-maze and an F-maze, with the F-maze task being more cognitively demanding. Across learning, prefrontal beta oscillation power increased while beta frequency decreased. In contrast, hippocampal beta power and beta frequency decreased. This was particularly the case for the well-performed and well-learned Y-maze paradigm. The authors identified the timing of beta oscillations, revealing an interesting shift in beta burst timing relative to reward entry as learning progressed. They also discovered an interesting population of prefrontal neurons that were tuned to both prefrontal beta and hippocampal sharp-wave ripple events, revealing a spectrum of SWR-excited and SWR-inhibited neurons that were differentially phase locked to prefrontal beta rhythms.

      In sum, the authors set out to examine how beta rhythms and their coordination were related to learning and goal occupancy. The authors identified a set of learning and goal-related correlates at the level of LFP and spike-LFP interactions, but did not report on spike-behavioral correlates.

      Strengths:

      Pairing dual recordings of medial prefrontal cortex (mPFC) and CA1 with learning of spatial memory tasks is a strength of this paper. The authors also discovered an interesting population of prefrontal neurons modulated by both beta and CA1 sharp-wave ripple (SWR) events, showing a relationship between SWR-excited and SWR-inhibited neurons and beta oscillation phase.

      Weaknesses:

      Moreover, there is little detail provided about sample sizes and how data sampling is being performed (e.g., rats, sessions, or trials), raising generalizability concerns.

      We appreciate Reviewer 3’s thoughtful suggestions for making our claims convincing. We will include information about sample sizes and address each detailed recommendation in the revised manuscript.

      The authors report on a task where rats were performing sub-optimally (F-maze), weakening claims.

      Our experiment was designed to allow us to examine, within the same animal, a well-performed task (Y) and a less well-performed task (F). This contrast allows us to determine differences in neural correlates, and we can further dissect the relevant differences to take advantage of this experimental design.

      Likewise, it is questionable as to whether mPFC and hippocampus are dually required to perform a no-delay Y-maze task at day 5, where rats are performing near 100%.

      We agree with Reviewer 3 that the mPFC and hippocampus may not be required once the animal reaches stable performance on day 5 (Deceuninck & Kloosterman, 2024). The data we collected span the full range from early learning (day 1) to proficiency (day 5), and we wanted to understand the dynamics of beta across these learning stages.

      Recent studies suggest that the mPFC and hippocampus are likely to be needed, in some capacity, for learning continuous spatial alternation tasks on a range of maze geometries. Lesions, inactivation, or perturbation of waking activity in the hippocampus, or in both the hippocampus and mPFC, slowed learning on the W maze alternation task (Jadhav et al., 2012; Kim & Frank, 2009; Maharjan et al., 2018). More recently, optogenetic silencing of mPFC after sharp-wave ripples on the Y maze alternation task affected performance when the center arm was switched (den Bakker et al., 2023). The Y and F mazes in our study both share the continuous alternation rule: the animal needed to avoid visiting the previously visited location on the outbound choice relative to the center, and to always return to the center location.

      Further, the performance characteristics of the outbound and inbound components of our Y task are similar to those of the W task. We have analyzed the "inbound" and "outbound" performance of the animals on the Y maze alternation task, and it is similar to performance on the W maze alternation task: the "inbound" (reference location) component is learned quickly, whereas the "outbound" (alternation) component is learned slowly. We can add this analysis to the revised manuscript.

      There would be little reason to suspect strong oscillatory coupling when task performance is poor and/or independent of mPFC-HPC communication (Jones and Wilson, 2005), potentially weakening conclusions about independent beta rhythms.

      Although many studies have examined the oscillatory coupling properties at the theta frequency between mPFC-HPC (Hyman et al., 2005; Jones & Wilson, 2005; Siapas et al., 2005), our understanding of beta frequency coordination between the two regions is less established, especially at goal locations. Beta frequency coordination at goal locations may or may not follow similar properties to theta frequency coupling. In this manuscript we are reporting the properties of goal-location beta frequency activity in mPFC-HPC networks. We are not aware of prior work describing these properties at this stage of a spatial navigation task, especially their coordination in time.

      References

      Ahmed, O. J., & Mehta, M. R. (2012). Running speed alters the frequency of hippocampal gamma oscillations. J Neurosci, 32(21), 7373-7383. https://doi.org/10.1523/JNEUROSCI.5110-11.2012

      Berke, J. D., Hetrick, V., Breck, J., & Greene, R. W. (2008). Transient 23-30 Hz oscillations in mouse hippocampus during exploration of novel environments. Hippocampus, 18(5), 519-529. https://doi.org/10.1002/hipo.20435

      Deceuninck, L., & Kloosterman, F. (2024). Disruption of awake sharp-wave ripples does not affect memorization of locations in repeated-acquisition spatial memory tasks. Elife, 13. https://doi.org/10.7554/eLife.84004

      den Bakker, H., Van Dijck, M., Sun, J. J., & Kloosterman, F. (2023). Sharp-wave-ripple-associated activity in the medial prefrontal cortex supports spatial rule switching. Cell Rep, 42(8), 112959. https://doi.org/10.1016/j.celrep.2023.112959

      França, A. S., do Nascimento, G. C., Lopes-dos-Santos, V., Muratori, L., Ribeiro, S., Lobão-Soares, B., & Tort, A. B. (2014). Beta2 oscillations (23-30 Hz) in the mouse hippocampus during novel object recognition. Eur J Neurosci, 40(11), 3693-3703. https://doi.org/10.1111/ejn.12739

      França, A. S. C., Borgesius, N. Z., Souza, B. C., & Cohen, M. X. (2021). Beta2 Oscillations in Hippocampal-Cortical Circuits During Novelty Detection. Front Syst Neurosci, 15, 617388. https://doi.org/10.3389/fnsys.2021.617388

      Hyman, J. M., Zilli, E. A., Paley, A. M., & Hasselmo, M. E. (2005). Medial prefrontal cortex cells show dynamic modulation with the hippocampal theta rhythm dependent on behavior. Hippocampus, 15(6), 739-749. https://doi.org/10.1002/hipo.20106

      Iwasaki, S., Sasaki, T., & Ikegaya, Y. (2021). Hippocampal beta oscillations predict mouse object-location associative memory performance. Hippocampus, 31(5), 503-511. https://doi.org/10.1002/hipo.23311

      Jadhav, S. P., Kemere, C., German, P. W., & Frank, L. M. (2012). Awake hippocampal sharp-wave ripples support spatial memory. Science (New York, N.Y.), 336(6087), 1454-1458. https://doi.org/10.1126/science.1217230

      Jones, M. W., & Wilson, M. A. (2005). Theta Rhythms Coordinate Hippocampal–Prefrontal Interactions in a Spatial Memory Task. PLoS Biology, 3(12). https://doi.org/10.1371/journal.pbio.0030402

      Kim, S. M., & Frank, L. M. (2009). Hippocampal Lesions Impair Rapid Learning of a Continuous Spatial Alternation Task. PLoS ONE, 4(5). https://doi.org/10.1371/journal.pone.0005494

      Lansink, C. S., Meijer, G. T., Lankelma, J. V., Vinck, M. A., Jackson, J. C., & Pennartz, C. M. (2016). Reward Expectancy Strengthens CA1 Theta and Beta Band Synchronization and Hippocampal-Ventral Striatal Coupling. J Neurosci, 36(41), 10598-10610. https://doi.org/10.1523/JNEUROSCI.0682-16.2016

      Maharjan, D. M., Dai, Y. Y., Glantz, E. H., & Jadhav, S. P. (2018). Disruption of dorsal hippocampal - prefrontal interactions using chemogenetic inactivation impairs spatial learning. Neurobiol Learn Mem, 155, 351-360. https://doi.org/10.1016/j.nlm.2018.08.023

      Rangel, L. M., Chiba, A. A., & Quinn, L. K. (2015). Theta and beta oscillatory dynamics in the dentate gyrus reveal a shift in network processing state during cue encounters. Front Syst Neurosci, 9, 96. https://doi.org/10.3389/fnsys.2015.00096

      Siapas, A. G., Lubenov, E. V., & Wilson, M. A. (2005). Prefrontal Phase Locking to Hippocampal Theta Oscillations. Neuron, 46(1), 141-151. https://doi.org/10.1016/j.neuron.2005.02.028



    2. Author response:

      The following is the authors’ response to the original reviews

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      The authors set out to understand how animals respond to visible light in an animal without eyes. To do so, they used the C. elegans model, which lacks eyes, but nonetheless exhibits robust responses to visible light at several wavelengths. Here, the authors report a promoter that is activated by visible light and independent of known pathways of light responses.

      Strengths:

      The authors convincingly demonstrate that visible light activates the expression of the cyp-14A5 promoter-driven gene expression in a variety of contexts and report the finding that this pathway is activated via the ZIP-2 transcriptionally regulated signaling pathway.

      Weaknesses:

      Because the ZIP-2 pathway has been reported to be activated predominantly by changes in the bacterial food source of C. elegans -- or exposure of animals to pathogens -- it remains unclear if visible light activates a pathway in C. elegans (animals) or if visible light potentially is sensed by the bacteria on the plate, which also lack eyes. Specifically, it is possible that the plates are seeded with excess E. coli, that E. coli is altered by light in some way, and in this context, alters its behavior in such a way that activates a known bacterially responsive pathway in the animals. This weakness would not affect the ability to use this novel discovery as a tool, which would still be useful to the field, but it does leave some questions about the applicability to the original question of how animals sense light in the absence of eyes.

      Thank you for the insightful questions and suggestions. We have now performed the key experiment requested. Interesting new data (Fig. S1I) show that light induction of cyp-14A5p::GFP requires live bacteria that maintain a non-starved physiological state: neither plates without food nor plates with heat-killed OP50 support robust induction. We now include this new result in the paper and have revised the discussion of the bacteria-modulated mechanism, but we note that this bacterial requirement does not alter the central conclusions of the study. Rather, it reveals an intriguing mechanistic layer, namely, that bacterial metabolic activity likely influences the animal’s sensitivity to environmental light. We are pursuing this host–microbe interaction in a separate study. In the present work, we focus on the intrinsic regulation and functional significance of cyp-14A5 under standard laboratory conditions with live OP50. Accordingly, we have revised the Results and Discussion to reflect the appropriate scope.

      Reviewer #2 (Public review):

      Summary:

      Ji, Ma, and colleagues report the discovery of a mechanism in C. elegans that mediates transcriptional responses to low-intensity light stimuli. They find that light-induced transcription requires a pair of bZIP transcription factors and induces expression of a cytochrome P450 effector. This unexpected light-sensing mechanism is required for physiologically relevant gene expression that controls behavioral plasticity. The authors further show that this mechanism can be co-opted to create light-inducible transgenes.

      Strengths:

      The authors rigorously demonstrate that ambient light stimuli regulate gene expression via a mechanism that requires the bZIP factors ZIP-2 and CEBP-2. Transcriptional responses to light stimuli are measured using transgenes and using measurements of endogenous transcripts. The study shows proper genetic controls for these effects. The study shows that this light-response does not require known photoreceptors, is tuned to specific wavelengths, and is highly unlikely to be an artifact of temperature-sensing. The study further shows that the function of ZIP-2 and CEBP-2 in light-sensing can be distinguished from their previously reported role in mediating transcriptional responses to pathogenic bacteria. The study includes experiments that demonstrate that regulatory motifs from a known light-response gene can be used to confer light-regulated gene expression, demonstrating sufficiency and suggesting an application of these discoveries in engineering inducible transgenes. Finally, the study shows that ambient light and the transcription factors that transduce it into gene expression changes are required to stabilize a learned olfactory behavior, suggesting a physiological function for this mechanism.

      Weaknesses:

      The study implies but does not show that the effects of ambient light on stabilizing a learned olfactory behavior are through the described pathway. To show this clearly, the authors should determine whether ambient light has any effect on mutants lacking CYP-14A5, ZIP-2, or CEBP-2. Other minor edits to the text and figures are suggested.

      We appreciate the reviewer’s comment. Our study indeed implies that ambient light stabilizes learned olfactory behavior through effects on the described pathway. Importantly, the existing data already address this point. Mutants lacking CYP-14A5, ZIP-2, or CEBP-2 display impaired olfactory memory even when exposed to ambient light, indicating that these genes are required for the behavioral effect of light. Consistent with this, ambient light robustly induces cyp-14A5p::GFP in wild-type animals but fails to do so in zip-2 and cebp-2 mutants, demonstrating that light-dependent transcriptional activation is blocked upstream in these pathway mutants. Together, these results support the conclusion that ambient light acts through the ZIP-2 → CEBP-2 → CYP-14A5 pathway to stabilize memory. Minor textual and figure revisions have been made where helpful to clarify this point.

      Reviewer #3 (Public review):

      Ji et al. report a novel and interesting light-induced transcriptional response pathway in the eyeless roundworm Caenorhabditis elegans that involves a cytochrome P450 family protein (CYP-14A5) and functions independently from previously established photosensory mechanisms. Although the exact mechanisms underlying photoactivation of this pathway remain unclear, light-dependent induction of CYP-14A5 requires bZIP transcription factors ZIP-2 and CEBP-2 that have been previously implicated in worm responses to pathogens. The authors then suggest that light-induced CYP-14A5 activity in the C. elegans hypoderm can unexpectedly and cell-non-autonomously contribute to retention of an olfactory memory. Finally, the authors demonstrate the potential for this pathway to enable robust light-induced control of gene expression and behavior, albeit with some restrictions. Overall, the evidence supporting the claims of the authors is convincing, and the authors' work suggests numerous interesting lines of future inquiry.

      (1) The authors determine that light, but not several other stressors tested (temperature, hypoxia, and food deprivation), can induce transcription of cyp-14A5. The authors use these experiments to suggest the potential specificity of the induction of CYP-14A5 by light. Given the established relationship between light and oxidative stress and the authors' later identification of ZIP-2, testing the effect of an oxidative stressor or pathogen exposure on transcription of cyp-14A5 would further strengthen the validity of this statement and potentially shed some insight into the underlying mechanisms.

      We appreciate the reviewer’s thoughtful suggestion. We would like to clarify that the “specificity” we refer to is the strong and preferential induction of cyp-14A5 by light among pathogen or detoxification-related genes, rather than an assertion that cyp-14A5 is exclusively light-responsive. This does not preclude the possibility that cyp-14A5 can also be activated under other conditions. Indeed, prior work from the Troemel laboratory has identified cyp-14A5 as one of many pathogen-inducible genes, consistent with its role in stress physiology. Our data show that classical pathogen-responsive genes (e.g., irg-1) are not induced by light, whereas cyp-14A5 is strongly induced, highlighting the selective engagement of this cytochrome P450 by light under the conditions tested. We have revised the text to clarify this point.

      (2) The authors suggest that short-wavelength light more robustly increases transcription of cyp-14A5 compared to equally intense longer wavelengths (Figure 2F and 2G). Here, however, the authors report intensities in lux of wavelengths tested. Measurements of and reporting the specific spectra of the incident lights and their corresponding irradiances (ideally, in some form of mW/mm2 - see Ward et al., 2008, Edwards et al., 2008, Bhatla and Horvitz, 2015, De Magalhaes Filho et al., 2018, Ghosh et al., 2021, among others, for examples) is critical for appropriate comparisons across wavelengths and facilitates cross-checking with previous studies of C. elegans light responses. On a related and more minor note, the authors place an ultraviolet shield in front of a visible light LED to test potential effects of ultraviolet light on transcription of cyp-14A5. A measurement of the spectrum of the visible light LED would help confirm if such an experiment was required. Regardless, the principal conclusions the authors made from these experiments will likely remain unchanged.

      Thank you. We have revised the text to clarify this point. “Using controlled light versus dark conditions, we confirmed the finding from an integrated cyp-14A5p::GFP reporter and observed its robust widespread GFP expression in many tissues induced by moderate-intensity (500-3000 Lux, 16-48 hr duration) LED light exposure (Fig. 1A). The photometric Lux range is approximately 0.1–0.60 mW/cm² in radiometric (total radiant power) metric given the spectrum of the LED light source.”
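The photometric-to-radiometric conversion quoted above depends on the luminous efficacy of radiation (LER) of the specific LED spectrum. The sketch below assumes an LER of 300 lm/W for a generic white LED, purely for illustration; the manuscript's 0.1–0.60 mW/cm² range would come from the authors' measured spectrum.

```python
def lux_to_mw_per_cm2(lux: float, ler_lm_per_w: float = 300.0) -> float:
    """Convert illuminance (lux = lm/m^2) to irradiance in mW/cm^2.

    ler_lm_per_w is the luminous efficacy of radiation of the source
    spectrum (assumed here; it must be measured for a real light source).
    """
    w_per_m2 = lux / ler_lm_per_w        # radiant flux density, W/m^2
    return w_per_m2 * 1000.0 / 10000.0   # W/m^2 -> mW/cm^2

for lux in (500, 3000):
    print(f"{lux} lux ~ {lux_to_mw_per_cm2(lux):.2f} mW/cm^2 at 300 lm/W")
```

With this assumed LER, 500-3000 lux maps to roughly 0.17-1.0 mW/cm², the same order of magnitude as the range quoted in the revised text.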

      (3) The authors report an interesting observation that animals exposed to ambient light (~600 lux) exhibit significantly increased memory retention compared to those maintained in darkness (Figure 4). Furthermore, light deprivation within the first 2-4 hours after learning appears to eliminate the effect of light on memory retention. These processes depend on CYP-14A5, loss of which can be rescued by re-expression of cyp-14A5 in mutant animals using a hypoderm-specific- and non-light-inducible- promoter. Taken together, the authors argue convincingly that hypodermal expression of cyp-14A5 can contribute to the retention of the olfactory memory. More broadly, these experiments suggest that cell-non-autonomous signaling can enhance retention of olfactory memory. How retention of the olfactory memory is enhanced by light generally remains unclear. In addition, the authors' experiments in Figure 1B demonstrate - at least by use of the transcriptional reporter - that light-dependent induction of cyp-14A5 transcription at 500 - 1000 lux is minimal and especially so at short duration exposures. Additional experiments, including verification of light-dependent changes in CYP-14A5 levels in the olfactory memory behavioral setup, would help further interpret these otherwise interesting results.

      We thank the reviewer for these thoughtful comments. We agree that understanding how light enhances memory retention at a mechanistic level is an important direction for future work. Regarding the light intensities used in Figure 1B, we would like to clarify that 500–1000 lux does produce a measurable and statistically significant induction of cyp-14A5p::GFP, although the magnitude is lower than that observed at higher intensities. We interpret this modest induction as physiologically relevant: intermediate light levels appear sufficient to engage the CYP-14A5–dependent program required for memory stabilization, whereas stronger light intensities are detrimental to learning and reduce behavioral performance. Thus, the behavioral paradigm uses a light regime that activates the pathway without introducing stress-associated confounders.

      (4) The experiments in Figure 4 nicely validate the usage of the cyp-14A5 promoter as a potential tool for light-dependent induction of gene expression. Despite the limitations of this tool, including those presented by the authors, it could prove useful for the community.

      Thank you and we agree. In addition, we have included in the revised manuscript the single-copy integration strains based on UAS-GAL4 that produced similar results as transgenic strains and will be even more flexible and useful for the community.

      Recommendations for the authors:

      Reviewing Editor Comments:

      While appreciating the quality and presentation of this important study, we had two major concerns that the authors need to address.

      (1) Bacteria-versus-worm origin:

      To rule out a bacterially derived stimulus, we suggest testing whether cyp-14A5p::GFP is inducible without bacteria (or killed bacteria). Checking whether the canonical immune reporters irg-5p::GFP and gst-4p::GFP are also light-inducible will further clarify this point.

      We have now performed the key experiment requested by the reviewers. Interesting new data (Fig. S1I) show that light induction of cyp-14A5p::GFP requires live bacteria that maintain a non-starved physiological state. Neither plates without food nor plates with heat-killed OP50 support robust induction. Importantly, this requirement does not alter any of the central conclusions of the study. Rather, it reveals an intriguing mechanistic layer, namely, that bacterial metabolic activity influences the animal’s sensitivity to environmental light. We are pursuing this host–microbe interaction in a separate study. In the present work, we focus on the regulation and functional significance of cyp-14A5 under standard laboratory conditions with live OP50.

      We included the data (Fig. 2D) to show that the canonical immune reporter irg-1p::GFP is not induced by the light condition that robustly induced cyp-14A5p::GFP, and gst-4p::GFP is only very mildly induced (Fig. S1J).

      (2) Pathway-behaviour link:

      The behavioural relevance of the newly described pathway is intriguing, but it needs direct support. Ideally, this would require comparing memory in WT, zip-2-/-, cebp-2-/-, and cyp-14A5-/- under both dark and light conditions. But at the very least, it would require testing if constitutive CYP-14A5 rescue in the dark bypasses the requirement of light.

      We respectfully submit that additional experiments are not required to support the behavioral conclusions. Our model posits that cyp-14A5 is required but not sufficient for memory stabilization, one component within a broader set of light-induced genes. Thus, constitutive hypodermal expression of cyp-14A5 would not be expected to bypass the requirement for ambient light. The existing data are fully consistent with this framework and conclusions of the paper.

      Reviewer #1 (Recommendations for the authors):

      Overall, I think this paper is interesting to the field of C. elegans researchers at a minimum, as a light-inducible gene expression system might have a variety of uses throughout the diverse research paradigms that use this model system. With that said, I have a couple of suggestions that I think would substantially impact the ability to interpret these findings, which might be useful for broader implications of the study.

      (1) Most importantly, the supplemental table of RNA-seq data should likely be updated and discussed further beyond the cyp-14A5 findings. First, the authors report that 7,902 genes are differentially expressed in response to light and then break these into upregulated and downregulated genes. But there are only 1,785 upregulated genes and 3,632 downregulated genes. This adds up to 5,417 genes, which does not match the 7,902 genes reported to change, and I could not find in the text whether some other filters were applied that might explain the discrepancy.

      Thank you for this helpful comment. We agree that the exact numbers depend on statistical thresholds and are therefore somewhat arbitrary. To avoid implying unwarranted precision, we have revised the text to state that “thousands of genes are differentially regulated by light.”

      (2) Among the upregulated genes in response to light are irg-5, irg-4, irg-6, irg-8, and gst-4. Indeed, all (or most) of these well-studied genes show even more induction by light than cyp-14A5. It is my opinion that this result needs further scrutiny, as there are existing GFP reporters for gst-4 and irg-5 that are as well studied as irg-1, which is in the paper (and is not upregulated). The authors should test whether they see activation of the irg-4 and gst-4 GFP reporters by light as well. This would not only validate their RNA-seq but might provide more important evidence for the field, as these other reporters were not previously considered light-inducible. If they are, several major studies might be impacted by this.

      Thank you for the comments. We have irg-1p::GFP and gst-4p::GFP in the lab but did not find reporters for the other genes mentioned at the CGC. Neither of the two reporters showed light induction as strongly as cyp-14A5p::GFP (Figs. 2D and S1J). It is possible that irg-1 and gst-4 RNA levels are up-regulated but that this is not reflected in our transgenic reporters, which used their promoters to drive GFP expression. The stronger light induction of cyp-14A5p::GFP is unlikely to be caused by the multi-copy nature of the transgene, since newly generated single-copy integration strains based on the UAS-GAL4 system produced similarly robust light induction (Fig. S1I; see Methods).

      (3) Along the same lines, if at least 4 (and likely more) well characterized immune response genes are activated by light and these genes are known to mostly respond to differences in C. elegans bacterial food source/diet, then it stands to reason that maybe in this experimental context the light is not acting on "animals" at all, but rather triggering changes in E. coli (i.e. changing E. coli metabolism or pathogenicity like properties). If true, then perhaps the light affects bacteria in such a way that it activates a previously known bacterial pathogen response mechanism. This should be easy to test by seeing if this reporter is still activated by light in the presence of diverse bacterial diets, which are available from the CGC (CeMBio collection, for example). This is likely very important to the conclusions of the manuscript as it relates to animals sensing light, but might not be as important to the use of this system as a tool.

      Thank you for the insightful questions and suggestions. Interesting new data (Fig. S1I) show that light induction of cyp-14A5p::GFP requires live bacteria that maintain a non-starved physiological state. Neither plates without food nor plates with heat-killed OP50 support robust induction. Importantly, this requirement does not alter any of the central conclusions of the study. Rather, it reveals an intriguing mechanistic layer, namely, that bacterial metabolic activity influences the animal’s sensitivity to environmental light. We are pursuing this host–microbe interaction in a separate study. In the present work, we focus on the regulation and functional significance of cyp-14A5 under standard laboratory conditions with live OP50. We have revised the Results and Discussion to reflect the appropriate scope of our study and implications of the new findings.

      (4) Lastly, it seems unlikely that nearly half the C. elegans genome is transcriptionally regulated by light (or nearly half of the detected genes in the RNA-seq results). It seems likely that this list of 7,902 genes contains false positives. I would suggest tightening the filters, for example moving to padj < 0.01 instead of 0.05, or adding a 4-fold change filter (2-fold and padj < 0.01 still results in 5,000+ genes changing, which might explain the difference between the up and down gene counts simply being due to different padj filters). Along these lines, it is worth noting that the padj values appear to be generated using DESeq2, and one of the fundamental assumptions of DESeq2's normalization is that the median gene does not change expression. However, if MOST genes do change in expression, then this assumption is not valid, which would mean DESeq2 might not be an appropriate analysis tool - perhaps some other normalization could be applied before running DESeq2 to account for other noise present in the RNA-seq runs?

      Thank you for this helpful comment. We agree that the exact numbers depend on statistical thresholds and are therefore somewhat arbitrary. To avoid implying unwarranted precision, we have revised the text to state that “thousands of genes are differentially regulated by light.”
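The reviewer's suggested stricter thresholds could be sketched as below; the column names follow DESeq2 output conventions, and the table is simulated for illustration, not the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 10_000
# Simulated differential-expression results in DESeq2-style columns.
res = pd.DataFrame({
    "log2FoldChange": rng.normal(0.0, 1.5, n),
    "padj": rng.uniform(0.0, 1.0, n),
})

# 2-fold / padj < 0.05 vs. the stricter 4-fold / padj < 0.01 proposed above
# (a 2-fold change is |log2FC| > 1; a 4-fold change is |log2FC| > 2).
loose = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() > 1)]
strict = res[(res["padj"] < 0.01) & (res["log2FoldChange"].abs() > 2)]
print(len(loose), len(strict))  # stricter thresholds shrink the gene list
```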

      (5) Minor point - I would delete the reference to ER in line 92. While most CYPs do localize to the ER, the images shown are not clearly ER and probably do not have enough resolution to make claims about subcellular localization. To me, it would be easier to just delete this claim as it is not required for the main claims of the manuscript.

      Reference deleted.

      Reviewer #2 (Recommendations for the authors):

      I have one request for clarification that likely requires additional data. Figure 3 shows that ambient light stabilizes learned changes to chemotaxis and further shows that CYP-14A5 has a similar function. The implication is that light promotes CYP-14A5 expression, which somehow promotes memory consolidation. The authors should test whether memory consolidation in cyp-14A5, zip-2, or cebp-2 mutants is no longer affected by ambient light.

      It is also possible to test whether forced expression of CYP-14A5 can bypass the effect of 'no light' conditions on memory consolidation.

      Thank you for the comments. We respectfully submit that additional experiments are not required to support the behavioral conclusions. Our model posits that cyp-14A5 is required but not sufficient for memory stabilization, acting as one component within a broader set of light-induced genes. Thus, constitutive hypodermal expression of cyp-14A5 would not be expected to bypass the requirement for ambient light. The existing data are fully consistent with this framework and with the conclusions of the paper.

      I have several minor suggestions relating to the text and figures.

      (1) In the introduction, the authors assert that little is known about non-visual light sensing and then list many examples of molecular mechanisms of non-visual light-sensing. They should emphasize that non-visual light sensing is important and accomplished by diverse molecular mechanisms.

      Agree and revised accordingly.

      (2) Check spacing between gene names (line 109).

      Corrected.

      (3) There should be a new paragraph break when the uORF experiments are described (line 146).

      Corrected.

      (4) 'Phenoptosis' is an esoteric word. Please define it (line 206).

      Corrected.

      (5) 'p' in the transgene name cyp-14A5p::nlp-22 is in italics, unlike the rest of the manuscript.

      Corrected.

      (6) 'Acknowledgment' should be 'Acknowledgments' (line 384).

      Corrected.

      (7) The color map in panel 1B should have units.

      The color map is in arbitrary units (now indicated in the figure), as it is intended to highlight relative rather than absolute differences.

      (8) In panel 1E, it is confusing to have 'DARK' denoted by reddish bars and 'LIGHT' denoted by bluish bars. Perhaps 'DARK' is black/dark grey and 'LIGHT' is white?

      Corrected.

      (9) In panel 1D, it takes a minute to find the purple diamond. Please mark up the volcano plot to make it easier.

      Corrected.

      Reviewer #3 (Recommendations for the authors):

      The authors generally present convincing experiments detailing interesting results in a well-written manuscript.

      One quick note: the same Bhatla and Horvitz (2015) paper appears to be cited twice [line 52].

      Corrected.