10,000 Matching Annotations
  1. Jan 2026
    1. While we do learn from experience, until we learn specific vocabulary and develop foundational knowledge of communication concepts and theories, we do not have the tools needed to make sense of these experiences.

      When we are uninformed about a topic, it is harder to learn more about it. In a communications course we may know how to communicate but don't necessarily have the vocabulary to fully describe it. I can relate to this myself: I feel I know how to communicate; however, I do not have the proper vocabulary to describe it.

    1. I told him, “Please don’t die over the neighborhood / That your mama rentin’ / Take your drug money and buy the neighborhood / That’s how you rinse it”

      Imagining himself talking to a young drug dealer hustling on the corners of his neighborhood, Jay-Z is essentially advising him to be wise with his money and reinvest it in buying property in his surroundings, not only to build generational wealth, but also to give himself the chance to leave street life and the dangers associated with it.

    2. House nigga, don’t fuck with me / I’m a field nigga, with shined cutlery / Gold-plated quarters, where the butlers be / I’ma play the corners where the hustlers be

      Jay-Z starts his first verse by recalling once again the difference between the "house nigga" and the "field nigga".

      During slavery, slaves working inside the master's house often developed a better relationship with him and, consequently, gained certain privileges they would often protect by perpetuating and favoring the mechanisms of slavery. Slaves working in the fields, on the contrary, had no kind of pleasant relationship with their masters and hated them, planning, when possible, to escape.

      Jay-Z distances himself from the concept of the "house nigga," saying he cannot be found in "gold-plated quarters, where the butlers be" but on "the corners where the hustlers be," referring to drug dealing on street corners.

    1. What does your receiver need to know?

      My receiver needs to know that I am sick and will not be able to make the meeting tomorrow. I will express how sorry I am, but also list out the "leg work" I completed. I will also request a follow up from the meeting to show that I am still wanting to work and learn even if being sick postpones that.

    1. What kind of relationship do you have or want to have with your receiver?

      In the email I would apologize and excuse myself, but I also want to emphasize how much I value this internship and was looking forward to Monday’s work to ensure that my boss values my dedication to work.

    1. non-evangelical Republicans seem as attuned to the plight of their political compatriots, despite not being members of the religious group, as white evangelical Democrats who are, themselves, members of the group in question. Non-evangelical Democrats report that evangelicals face discrimination at the lowest rate: 22 percent.

      There's personal bias but also a silo effect

    2. Because the religiosity gap does not extend to African Americans, secular white Americans and highly devout Black Americans are now on the same political team.

      But again this ties back to morality politics: it is because their institutions, or lack thereof, support the same values.

    3. In short, the more religious a person is, the more likely it is that he or she identifies with the Republican Party and supports Republican candidates

      But what about New England?


    1. I know I can / Be what I wanna be / If I work hard at it / I’ll be where I wanna be / I know I can (I know I can) / Be what I wanna be (be what I wanna be) / If I work hard at it (if I work hard at it) / I’ll be where I wanna be (I’ll be where I wanna be)

      The song is introduced by a children's choir singing what is both the chorus of the song and its main theme: by working hard enough, anything can be achieved. This idea is an embodiment of the American Dream that, despite the harsh conditions he was born into, Nas was able to realize. In the following verses, though, Nas seems to be directing his message mainly to Black boys and girls, trying to give them advice and a general sense of empowerment and self-confidence.

      Nas, here, is trying to instill a proper self-consciousness in his Black listeners, the same self-consciousness Du Bois argued was missing from African Americans' self-perspective.

    2. in God we trust

      "In God We Trust" is one of the mottoes of the United States, featured on coins and dollars.

      Here, again, Nas is referring to American mythology and imagery but, in doing so, is speaking specifically to Black boys and girls.

    1. …At sunrise the Admiral again went away in the boat, and landed to hunt the birds he had seen the day before. After a time, Martin Alonso Pinzon came to him with two pieces of cinnamon, and said that a Portuguese, who was one of his crew, had seen an Indian carrying two very large bundles of it; but he had not bartered for it, because of the penalty imposed by the Admiral on anyone who bartered. He further said that this Indian carried some brown things like nutmegs. The master of the Pinta said that he had found the cinnamon trees. The Admiral went to the place, and found that they were not cinnamon trees. The Admiral showed the Indians some specimens of cinnamon and pepper he had brought from Castile, and they knew it, and said, by signs, that there was plenty in the vicinity, pointing to the S.E. He also showed them gold and pearls, on which certain old men said that there was an infinite quantity in a place called Holito, and that the people wore it on their necks, ears, arms, and legs, as well as pearls. He further understood them to say that there were great ships and much merchandise, all to the S.E. He also understood that, far away, there were men with one eye, and others with dogs’ noses who were cannibals, and that when they captured an enemy they beheaded him and drank his blood…

      Here I think what Columbus was trying to say was that the Indians were showing them where they could find gold and the spices they were so interested in and who was guarding them.

    1. “intentional . . . I don’t take a lot of pointless pictures.” Gomez recognizes the power of her social media platform,

      I'm thrilled Gomez understands this. I feel as though more people should remember this, especially celebrities. The stupid things someone posts on social media don't just go away; everything has a meaning, even if it wasn't intended. It was unexpected that this was mentioned in the text, but I'm happy it came up.

    2. figuring out why and how it works or fails to work in achieving its communicative purpose.

      I really enjoy this quote; it shows how you don't have to take everything at face value. I believe it is always good to question and be a little confused. Earlier I was confused and looked a word up to make sure I was understanding it correctly. I kept reading and my confusion was mostly resolved by the reading, but I enjoyed questioning the text.

    3. critical? How does being a critical reader, writer, and thinker differ from being an ordinary reader, writer, and thinker?

      I think being a critical reader, writer, and thinker means not just taking information in or putting words on a page, but attempting to fully understand what you are reading, writing, or thinking and finding ways to apply it.

    4. As you consider your reading and viewing experiences on social media and elsewhere, note that your responses involve some basic critical thinking strategies.

      Whether you notice it or not, you put thought into everything before you hit the post button, so critical thinking strategies apply not only in writing but also when you don't even notice.

    5. Like these soldiers, you carry experiences into the writing classroom that will inform your participation.

      Everyone has their own way of writing, but being able to come together and help one another is the true beauty of writing.

    6. When you post on social media platforms, for instance, your audience is probably anticipated. While you might have followers, you may not know them personally, but you anticipate who they are and how they might react.

      Social media is somewhere people go to post to their followers. Generally you have an idea of who your audience is, so out of habit it naturally influences you not to post certain things.

    1. They have suffered so long from the Rebels that they want to shoulder the musket.

      The idea that freed slaves could carry a fighting spirit toward their opponents could be a blessing or a curse. On one hand, the soldiers could be incredibly effective in their work of hunting and killing their opponents; on the other, they could, in some cases, "fly off the handle" and kill masses of people who may have wronged them but whom they were not assigned to fight.

    1. I see no escape from that ambiguity. But we can at least distinguish the rhetor — each of us, in and out of the academy, saying or writing this or that to produce some effect on some audience — from the rhetorician, the would-be scholar who studies the most effective forms of communication.

      I know this isn't applicable to 410 right now, but I feel like what we did last semester in 500w was practicing being in both roles. Like with our paper 1, we were the rhetorician through analysis and rhetor through writing the paper.

    2. rhetoricians still represent a tiny minority on the academic scene.

      I didn't see when this was published, but it would be interesting to look into the current numbers on this, like stats on rhetoric graduates. I wonder if there is a way to look at SDSU stats.

    3. "Rhetoric is that which creates an informed appetite for the good." (Richard Weaver, 1948)

      • "Rhetoric is rooted in an essential function of language itself, a function that is wholly realistic and continually born anew: the use of language as a symbolic means of inducing cooperation in beings that by nature respond to symbols." (Kenneth Burke, 1950)

      • "Rhetoric is the art of discovering warrantable beliefs and improving those beliefs in shared discourse ... the art of probing what we believe we ought to believe, rather than proving what is true according to abstract methods." (Wayne Booth, 1964)

      • "Rhetoric is a mode of altering reality, not by the direct application of energy to objects, but by the creation of discourse which changes reality through the mediation of thought and action." (Lloyd Bitzer, 1968)

      • "We should not neglect rhetoric's importance, as if it were simply a formal superstructure or technique exterior to the essential activity. Rhetoric is something decisive in society.... [T]here are no politics, there is no society without rhetoric, without the force of rhetoric." (Jacques Derrida, 1990)

      • "Rhetoric is the art, practice, and study of [all] human communication." (Andrea Lunsford, 1995)

      • "Rhetoric appears as the connective tissue peculiar to civil society and to its proper finalities, happiness and political peace hic et nunc." (Marc Fumaroli, 1999)

      To discuss these 8 new definitions: some seem a little more creative/artistic than others. I really like "the use of language as a symbolic means of inducing cooperation in beings that by nature respond to symbols," from Burke. This was probably the point, but I understand how much sentiment toward rhetoric changed when comparing these definitions to how it was described by Locke. I also like Booth's "shared discourse." I feel like shared discourse is just what a science or discipline should be. Communicating and discussing with peers/an audience is an essential part of rhetoric.

      Agree with "...of all human communication." I don't fully understand Marc Fumaroli. Will need to ask about this.


    1. It is not for a single generation alone, numbering three millions—sublime as would be that effort—that we are working. It is for humanity, the wide world over, not only now, but for all coming time, and all future generations:—

      This line in the narrative displays concern not only for the current generation of the time, but also for maintaining freedom for those subjected to slavery.

    2. Reader, are you an Abolitionist? What have you done for the slave?

      This moment in the narrative addresses the readers, holding them accountable to their thoughts on the subject of slavery. Addressing the audience not only keeps the reader engulfed in his narrative, but also makes them consider what they are in fact doing to support the abolition movement.

    3. I was a stranger, and you took me in. I was hungry, and you fed me. Naked was I, and you clothed me. Even a name by which to be known among men, slavery had denied me. You bestowed upon me your own.

      This statement at the beginning of the narrative displays how William was taken in and seen as a person and not an object of possession. He portrays gratitude towards the white man who not only provided him with the means of being seen, but also a name, as many were robbed of during the time of slavery.

    4. But as soon as the subject came to my mind, I resolved on adopting my old name of William, and let Sandford go by the board, for I always hated it. Not because there was anything peculiar in the name; but because it had been forced upon me

      By taking up his own name and no longer using the name that was forced on him, he is taking back something that is his, even sounding it out to get used to it.

    5. He told her that if she would accept his vile proposals, he would take her back with him to St. Louis, and establish her as his housekeeper at his farm. But if she persisted in rejecting them, he would sell her as a field hand on the worst plantation on the river.

      This shows the sexual abuse and violence toward enslaved women, as Mr. Walker tries to intimidate Cynthia by threatening to sell her to a worse plantation if she rejects him.

    6. “Do you not call me a good master?”

      Brown's exchange here highlights how the slaveholders constructed a false narrative to justify their cruelty. The need for moral validation from Brown shows a faltering claim to the moral high ground: he needs Brown to validate that he had been kind, that their relationship transcends mere ownership. But Brown aptly points out that if he were, he "would not sell [him]": an irrefutable fact that he is treated as a transactional piece of property, not as a human being.

    7. Without entering into any farther particulars, suffice it to say that Walker performed his part of the contract, at that time.

      The cruelty slavery inflicted on the helpless who could only look on at the horrors of their oppressors is rivaled only by that suffered by the direct victims. Brown's own torture, where he "foresaw but too well what the result must be," lay in knowing what Walker was going to do to Cynthia and being powerless to stop it. Later, he summarizes the coercion as "Walker performed his part of the contract," and this, disgustingly, is how the oppressors made sexual violence transactional.

    8. I complained to my master of the treatment which I received from Major Freeland; but it made no difference. He cared nothing about it, so long as he received the money for my labor.

      This excerpt reveals the economic basis of the institution of slavery. Brown's owner is not ignorant of the mistreatment occurring but is willfully blind to it for the sake of the bottom line. This line highlights that enslaved individuals had no way to seek justice, because the people who were supposed to protect them had a vested monetary interest in their enslavement.

    9. Though the field was some distance from the house, I could hear every crack of the whip, and every groan and cry of my poor mother.

      This marks an important moment because it sheds light not only on the physical violence of slavery, but also on the psychological violence it inflicted on the people living through it. Brown is not receiving the lashes himself, yet the pain falls harshly on him as well. It is through sound, rather than sight, that Brown brings into view how enslaved children experienced lashings vicariously, through their emotions.

    10. The commander of the boat was William B. Culver. I remained on her during the sailing season, which was the most pleasant time for me that I had ever experienced.

      I think the reason this quote is so important and exemplifies the story is the juxtaposition of emotions felt by the narrator. We know the horrible pain he felt in bondage, so to see an example of him free, and how happy he is now that he is able to control his own destiny, is striking.

    11. This incident shows how it is that slavery makes its victims lying and mean; for which vices it afterwards reproaches them, and uses them as arguments to prove that they deserve no better fate.

      Brown shows that slavery doesn't just control bodies, but reshapes behavior as well. Lying becomes a survival response. By naming these traits as products of oppression, he takes away the reader's ability to judge them as personal failures. This line forces responsibility back onto the system instead of the victims of it.

    12. “This is a note to have you whipped, and says that you have a dollar to pay for it.”

      Violence is treated as routine and bureaucratic. The whipping is no longer an act of rage, but a service to be paid for. Brown shows just how ingrained cruelty is in this system, and that it isn't only limited to the temperament of individuals.

    13. The man who but a few hours before had bound my hands together with a strong cord, read a chapter from the Bible, and then offered up prayer, just as though God sanctioned the act he had just committed upon a poor panting, fugitive slave.

      This line stood out to me because slaveowners often considered themselves religious people, yet they would use that very religion to justify the cruel punishments they inflicted on enslaved people.

    1. There's a mismatch between me and my writing tools. They seem to want something slightly different from what I want. I wonder if anyone else has this feeling? I mean there's plenty of people who are apparently on a life-long quest to find the perfect app, because they still haven't found what they're looking for. What's up with that? Well this article made things a lot clearer for me: Artificial Memory and Orienting Infinity | Kei Kreutler. Kreutler argues we've conflated all memory with computer memory. That's to say we've assumed everything can be stored and retrieved as data. But this misses something crucial, which is that the kind of memory that shapes worlds requires transmission, relationship, and context, and not just storage. And this got me thinking: doesn't this apply to our digital writing tools? They have to store our writing as data, but in doing so they change it in subtle ways we might not even notice, except as the kind of vague unease I've been feeling. Why your note-making tools don’t quite work the way you want them to - and what to do about it. So am I over-thinking it again, or have you too felt a gap between what you want to do and what your writing tools expect you to do?

      reply to u/atomicnotes at https://reddit.com/r/Zettelkasten/comments/1qjrnp8/why_dont_my_notemaking_tools_work_the_way_i_want/

      In older analog offices, the office worker stored things on paper in piles, in folders, and in various locations within their office. Because humans have excellent spatial memory, the worker would have an idea of what they wanted and would know in which pile on their desk or in which filing cabinet it might be filed. Despite what may look like a messy office, most will know exactly where certain papers are "hiding". This overlaps with older indigenous cultures' artificial memory structures like songlines and talking rocks, and with later techniques from ars memoriae like the method of loci or memory palaces. For more on this, cross-reference Thames & Hudson's First Knowledges series, edited by Margo Neale.

      Entirely digital-based methods have erased a lot of these sorts of locational affordances.

    1. The F1 score favors classifiers that have similar precision and recall. This is not always what you want: in some contexts you mostly care about precision, and in other contexts you really care about recall. For example, if you trained a classifier to detect videos that are safe for kids, you would probably prefer a classifier that rejects many good videos (low recall) but keeps only safe ones (high precision), rather than a classifier that has a much higher recall but lets a few really bad videos show up in your product (in such cases, you may even want to add a human pipeline to check the classifier’s video selection). On the other hand, suppose you train a classifier to detect shoplifters in surveillance images: it is probably fine if your classifier only has 30% precision as long as it has 99% recall. Sure, the security guards will get a few false alerts, but almost all shoplifters will get caught. Similarly, medical diagnosis usually requires a high recall to avoid missing anything important. False positives can be ruled out by follow-up medical tests. Unfortunately, you can’t have it both ways: increasing precision reduces recall, and vice versa. This is called the precision/recall trade-off.

      NEED TO INTERNALIZE THIS
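      One way to internalize this is to compute the three metrics by hand from confusion-matrix counts (a minimal sketch in plain Python; the counts below are invented for illustration, chosen to roughly match the passage's shoplifter example of 30% precision with 99% recall):

      ```python
      def precision_recall_f1(tp, fp, fn):
          """Compute precision, recall, and F1 from confusion-matrix counts.

          tp: true positives, fp: false positives, fn: false negatives.
          """
          precision = tp / (tp + fp)  # of everything flagged, how much was right
          recall = tp / (tp + fn)     # of everything real, how much was caught
          f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
          return precision, recall, f1

      # Shoplifter-style detector: flag aggressively -> low precision, high recall.
      p_shop, r_shop, f1_shop = precision_recall_f1(tp=99, fp=231, fn=1)   # 0.30, 0.99

      # Kid-safe-video-style filter: flag conservatively -> high precision, lower recall.
      p_kids, r_kids, f1_kids = precision_recall_f1(tp=50, fp=1, fn=50)   # ~0.98, 0.50
      ```

      Because F1 is a harmonic mean, both classifiers score a middling F1 (roughly 0.46 and 0.66) even though each serves its own context well, which is exactly why the passage warns that favoring balanced precision and recall "is not always what you want."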

    1. A Biological View

      Adolescence is an incredibly unfair time for a child because their body is telling them to act more like an adult, yet they are still treated as a child. This can cause them to act out in ways that may endanger them.

    1. congenital

      Interesting choice of word. My son is a "compulsive" liar, but to use the word "congenital" implies something deeper, more malicious, almost intentional. It's the difference between sociopathy and psychopathy: sociopaths have a triggering event after birth that causes them to lose the ability to feel emotion (this leads some to falsely believe they are somehow more capable of regaining it), whereas psychopaths are born without the ability to feel emotion. They never had it. In the same way, a congenital liar might have been born with a compulsion to lie beyond repair.

    2. With the other woman

      There are a lot of these sentence fragments scattered throughout. The editor in me is screaming, but the poet sees them in a light of beauty. Perhaps the fragments are Nandini attempting to write emotion. I've seen it done before, and it's cool when it happens: the reader becomes a part of what's going on because they're reading things the way the character is experiencing them. It's hard to describe in an annotation, but "One of Star Wars, One of Doom," by Lee K. Abbott does the same type of thing with the chaos of physical danger. This is more a chaos of the mind, if that makes sense.

    1. Hiroshima in flames on the afternoon of August 6. The writing on the painting speaks of encountering “living Hell in this world.”

      This image shows not only the destruction of the city and the mass amount of dead, but also some survivors. They are depicted as suffering and scared, almost as if they are worse off than the dead. It truly looks like a classic depiction of hell.

    1. “Act with kindness but do not expect gratitude.”

      I think that this is a really good way for people to think. You shouldn't act out of kindness and then expect something in return; that isn't kindness, that is a more selfish act. You should be kind because you want to be, not for something in return.

    2. Real knowledge is to know the extent of one’s ignorance.

      I don't know why, but I feel like I've heard a majority of these sayings somewhere in an anime I've watched.

      This one in particular is eye-opening and makes me think. It feels like a direct challenge to pride, and it also connects to learning: admitting what you don't know is the start of actually improving.

    3. But the exams were also democratic in a way: even a scholar from a poor family could take the exam if he could educate himself. And success on the top exam was a ticket to the highest levels of imperial society. Over the centuries, the scholars became an upper class in Chinese society, a gentry based on educational merit rather than merely on birth or wealth.

      I think that this type of civil service exam system made the Chinese government unusually stable. Although it wasn't fully equal, it was more merit-based than most systems of the time. The idea that even a poor scholar could rise to high office through education helps explain why China was so often governed by trained administrators rather than by nobles who inherited their positions.

    1. eLife Assessment

      This study uses a valuable combination of functional magnetic resonance imaging and electroencephalography (EEG) to study brain activity related to prediction errors in relation to both sensorimotor and more complex cognitive functions. It provides incomplete evidence to suggest that prediction error minimisation drives brain activity across both types of processing and that elevated inter-regional functional coupling along a superior-inferior axis is associated with high prediction error, whereas coupling along a posterior-anterior axis is associated with low prediction error. The manuscript will be of interest to neuroscientists working on predictive coding and decision-making, but would benefit from more precise localisation of EEG sources and more rigorous statistical controls.

    2. Reviewer #1 (Public review):

      Summary:

      This study investigates whether prediction error extends beyond lower-order sensory-motor processes to include higher-order cognitive functions. Evidence is drawn from both task-based and resting-state fMRI, with the addition of resting-state EEG-fMRI to examine power spectral correlates. The results partially support the existence of dissociable connectivity patterns: stronger ventral-dorsal connectivity is associated with high prediction error, while posterior-anterior connectivity is linked to low prediction error. Furthermore, spontaneous switching between these connectivity patterns was observed at rest and correlated with subtle intersubject behavioral variability.

      Strengths:

      Studying prediction error from the lens of network connectivity provides new insights into predictive coding frameworks. The combination of various independent datasets to tackle the question adds strength, including two well-powered fMRI task datasets, resting-state fMRI interpreted in relation to behavioral measures, as well as EEG-fMRI.

      Minor Weakness:

      The lack of spatial specificity of sensor-level EEG somewhat limits the inferences that can be drawn about how the fMRI network processes and the EEG power fluctuations relate to each other. While the language no longer suggests a strong overlap of the source of the two signals, several scenarios remain open (e.g., the higher-order fMRI networks being the source of the EEG oscillations, or the networks controlling the EEG oscillations expressed in lower-order cortices, or a third process driving both the observations in fMRI networks and EEG oscillations...), which somewhat weakens the interpretability of this section.

      Comments on revisions:

      My prior recommendations have been mostly addressed.

      Questions remaining about the NBS results:

      The authors write about the NBS cluster: "Visual examination of the cluster roughly points to the same four posterior-anterior and ventral-dorsal modules identified formally in main-text". I think it would be good to add quantification, not just visual inspection. The size of the significant NBS cluster should be reported. What proportion of the edges that passed the uncorrected threshold and entered NBS were part of the NBS cluster? Put simply, I don't think any edges beyond those passing NBS-based correction should be interpreted or used downstream in the manuscript.

      Also, NBS is not typically used by collapsing over effects in two effect directions, but the authors use NBS on the absolute value of Z. I understand the logic of the general manuscript focusing on strength rather than direction, but here I am wondering about the methodological validity. I believe that the editor who is an expert on the methodology may be able to comment on the validity of this approach (as opposed to running two separate NBS analyses for the two directions of effect).

    3. Reviewer #2 (Public review):

      Summary:

      This paper investigates putative networks associated with prediction errors in task-based and resting-state fMRI. It attempts to test the idea that prediction error minimisation extends to abstract cognitive functions, referred to as the global prediction error hypothesis, by establishing a parallel between networks found in task-based fMRI, where prediction errors are elicited in a controlled manner, and those networks that emerge during "resting state".

      Strengths:

      Clearly a lot of work and data went into this paper, including two task-based fMRI experiments and resting-state data for the same participants, as well as a third EEG-fMRI dataset. It is overall well written, with a couple of exceptions on clarity noted below, and the methodology appears overall sound, with a couple of exceptions listed below that require further justification. The paper does a good job of acknowledging its own weaknesses.

      Weaknesses:

      The paper does a good job of acknowledging its greatest weakness, the fact that it relies heavily on reverse inference, but cannot quite resolve it. As the authors put it, "finding the same networks during a prediction error task and during rest does not mean that the networks engagement during rest reflect prediction error processing". Again, the authors acknowledge the speculative nature of their claims in the discussion, but given that this is the key claim and essence of the paper, it is hard to see how the evidence is compelling enough to support it.

      Given how uncontrolled cognition is during "resting-state" experiments, the parallel made with prediction errors elicited during a task designed to that effect is a little difficult to make. How often are people really surprised when their brains are "at rest", likely replaying a previously experienced event or planning future actions under their control? It seems to be more likely a very low prediction error scenario, if at all surprising.

      The quantitative comparison between networks under task and rest was done on a small subset of the ROIs rather than on the full network - why? Noting how small the correlation between task and rest is (r=0.021) and that's only for part of the networks, the evidence is a little tenuous. Running the analysis for the full networks could strengthen the argument.

      Looking at the results in Figure 2C, the four-quadrant description of the networks labelled for low and high PE appears a little simplistic. The authors state that this four-quadrant description omits some ROIs as motivated by prior knowledge. This would benefit from a more comprehensive justification. Which ROIs are excluded and what is the evidence for exclusion?

      The EEG-fMRI analysis claiming 3-6Hz fluctuations for PE is hard to reconcile with the fact that fMRI captures activity that is a lot slower while some PEs are as fast as 150 ms. The discussion acknowledges this but doesn't seem to resolve it - would benefit from a more comprehensive argument.

      Comments on revisions:

      The authors have done a good job of addressing the issues raised during the review process. There is one issue remaining that still requires attention. In R2.4, when referring to "existing knowledge of prominent structural pathways among these quadrants", please cite the relevant literature.

    4. Reviewer #3 (Public review):

      Summary:

      Bogdan et al. present an intriguing investigation into the spontaneous dynamics of prediction error (PE)-related brain states. Using two independent fMRI tasks designed to elicit prediction and prediction error in separate participant samples, alongside both fMRI and EEG data, the authors identify convergent brain network patterns associated with high versus low PE. Notably, they further show that similar patterns can be detected during resting-state fMRI, suggesting that PE-related neural states may recur outside of explicit task demands.

      Strengths:

      The authors use a well-integrated analytic framework that combines multiple prediction tasks and brain imaging modalities. The inclusion of several datasets probing PE under different contexts strengthens the claim of generalizability across tasks and samples. The open sharing of code and data is commendable and will be valuable for future work seeking to build on this framework.

      Weaknesses:

      A central challenge of the manuscript lies in interpreting the functional significance of PE-related brain network states during rest. Demonstrating that a task-defined cognitive state recurs spontaneously is intriguing, but without clear links to behavior, individual traits, or experiential content during rest, it remains difficult to interpret what such spontaneous brain states tell us about the mind and brain. For example, it is unclear whether these states support future inference or learning, reflect offline predictive processing, or instead suggest state reinstatement due to a more general form of neural plasticity and circuit dynamics in the brain. Demonstrating any one of these downstream relationships would be valuable since it has the potential to inform our understanding of cognitive function or more general principles of neural organization.

      I appreciate the authors' position that establishing the existence of such states is a necessary first step, and that future work may clarify their behavioral relevance. However, the current form makes it challenging to assess the conceptual advance of the present work in isolation.

      Relatedly, in my previous review I raised questions about both across- and within-individual variability: for example, whether individuals who exhibit stronger or more distinct PE-related fluctuations at rest also show superior performance on prediction-related tasks (across-individual), or whether momentary increases in PE-network expression during tasks relate to faster or more accurate prediction (within-individual). The authors thoughtfully addressed this suggestion by conducting an individual-differences analysis correlating each participant's fluctuation amplitude with approximately 200 behavioral and trait measures from the HCP dataset.

      The reported findings (a negative association with age and card-sorting performance, alongside a positive association with age-adjusted picture sequence memory) are interesting but difficult to interpret within a coherent functional framework. As presented, these results do not clearly support the idea that spontaneous PE-state fluctuations are related to enhancement in prediction, inference, or broader cognitive function. Instead, they raise the possibility that fluctuation amplitude may reflect more general factors (e.g., age) rather than a functionally meaningful PE-related process.

      Overall, while the methodological contribution is strong, the manuscript would benefit from a clearer articulation of what functional conclusions can or cannot be drawn from the presence of spontaneous PE-related states, as well as a more cautious framing of their potential cognitive significance.

      Further comments:

      I appreciate that the authors took my earlier suggestions seriously and incorporated additional analyses examining behavioral relevance and permutation tests in the revision.

    5. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      The Reviewer structured their review such that their first two recommendations specifically concerned the two major weaknesses they viewed in the initial submission. For clarity and concision, we have copied their recommendations to be placed immediately following their corresponding points on weaknesses.

      Strengths:

      Studying prediction error from the lens of network connectivity provides new insights into predictive coding frameworks. The combination of various independent datasets to tackle the question adds strength, including two well-powered fMRI task datasets, resting-state fMRI interpreted in relation to behavioral measures, as well as EEG-fMRI.

      Weaknesses:

      Major:

      (R1.1) Lack of multiple comparisons correction for edge-wise contrast:

      The analysis of connectivity differences across three levels of prediction error was conducted separately for approximately 22,000 edges (derived from 210 regions), yet no correction for multiple comparisons appears to have been applied. Then, modularity was applied to the top 5% of these edges. I do not believe that this approach is viable without correction. It does not help that a completely separate approach using SVMs was FDR-corrected for 210 regions.

      [Later recommendation] Regarding the first major point: To address the issue of multiple comparisons in the edge-wise connectivity analysis, I recommend using the Network-Based Statistic (NBS; Zalesky et al., 2010). NBS is well-suited for identifying clusters (analogous to modules) of edges that show statistically significant differences across the three prediction error levels, while appropriately correcting for multiple comparisons.

      Thank you for bringing this up. We acknowledge that our modularity analysis does not evaluate statistical significance. Originally, the modularity analysis was meant to provide a connectome-wide summary of the connectivity effects, whereas the classification-based analysis was meant to address the need for statistical significance testing. However, as the reviewer points out, it would be better if significance were tested in a manner more analogous to the reported modules. As they suggest, we updated the Supplemental Materials (SM) to include the results of a Network-Based Statistic analysis (SM p. 1-2):

      “(2.1) Network-Based Statistic

      Here, we evaluate whether PE significantly impacts connectivity at the network level using the Network-Based Statistic (NBS) approach.[1] NBS relied on the same regression data generated for the main-text analysis, whereby a regression is performed examining the effect of PE (Low = –1, Medium = 0, High = +1) on connectivity for each edge. This was done across the connectome, and for each edge, a z-score was computed. For NBS, we thresholded edges to |Z| > 3.0, which yielded one large network cluster, shown in Figure S3. The size of the cluster – i.e., number of edges – was significant (p < .05) per a permutation test using 1,000 random shuffles of the condition data for each participant, as is standard.[1] These results demonstrate that the network-level effects of PE on connectivity are significant. The main-text modularity analysis converts this large cluster into four modules, which are more interpretable and open the door to further analyses”.
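      To make the quoted procedure concrete, here is a toy sketch of the NBS logic (per-edge z-scores, |Z| > 3.0 thresholding, a largest-cluster statistic, and a permutation null built from within-participant condition shuffles). The dimensions, the planted effect, and the simple correlation-based z-scores are our own illustrative stand-ins, not the authors' pipeline:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
n_sub, n_reg = 24, 30                      # far smaller than the 210-region atlas
iu = np.triu_indices(n_reg, k=1)
n_edge = len(iu[0])
cond = np.array([-1.0, 0.0, 1.0])          # Low / Medium / High PE

# Synthetic connectivity (subjects x conditions x edges) with a planted PE
# effect in the first 40 edges; those edges all touch regions 0-1, forming one cluster.
conn = rng.normal(size=(n_sub, 3, n_edge))
conn[:, :, :40] += 0.8 * cond[None, :, None]

def edge_z(conn):
    """Fisher-z of the PE-connectivity correlation, one z per edge."""
    x = np.repeat(cond, n_sub)                       # condition-major stacking
    y = conn.transpose(1, 0, 2).reshape(-1, n_edge)
    xc, yc = x - x.mean(), y - y.mean(axis=0)
    r = (xc @ yc) / np.sqrt((xc**2).sum() * (yc**2).sum(axis=0))
    return np.arctanh(r) * np.sqrt(len(x) - 3)

def largest_cluster(z, thresh=3.0):
    """Edge count of the largest connected component of supra-threshold edges."""
    adj = np.zeros((n_reg, n_reg))
    keep = np.abs(z) > thresh
    adj[iu[0][keep], iu[1][keep]] = 1
    adj += adj.T
    _, labels = connected_components(csr_matrix(adj), directed=False)
    return int(max(adj[np.ix_(labels == c, labels == c)].sum() / 2
                   for c in range(labels.max() + 1)))

observed = largest_cluster(edge_z(conn))

# Null distribution: shuffle condition labels within each participant
# (200 shuffles here for speed; 1,000 in the paper).
null = []
for _ in range(200):
    idx = rng.permuted(np.tile(np.arange(3), (n_sub, 1)), axis=1)
    null.append(largest_cluster(edge_z(conn[np.arange(n_sub)[:, None], idx])))
p = (1 + sum(s >= observed for s in null)) / (1 + len(null))
print(f"largest cluster = {observed} edges, p = {p:.3f}")
```

      The key property illustrated here is that significance attaches to the cluster size as a whole, not to individual edges, which is what sidesteps the edge-wise multiple-comparisons problem the reviewer raised.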

      We updated the Results to mention these findings before describing the modularity analysis (p. 8-9):

      “After demonstrating that PE significantly influences brain-wide connectivity using Network-Based Statistic analysis (Supplemental Materials 2.1), we conducted a modularity analysis to study how specific groups of edges are all sensitive to high/low-PE information.”

      (R1.2) Lack of spatial information in EEG:

      The EEG data were not source-localized, and no connectivity analysis was performed. Instead, power fluctuations were averaged across a predefined set of electrodes based on a single prior study (reference 27), as well as across a broader set of electrodes. While the study correlates these EEG power fluctuations with fMRI network connectivity over time, such temporal correlations do not establish that the EEG oscillations originate from the corresponding network regions. For instance, the observed fronto-central theta power increases could plausibly originate from the dorsal anterior cingulate cortex (dACC), as consistently reported in the literature, rather than from a distributed network. The spatially agnostic nature of the EEG-fMRI correlation approach used here does not support interpretations tied to specific dorsal-ventral or anterior-posterior networks. Nonetheless, such interpretations are made throughout the manuscript, which overextends the conclusions that can be drawn from the data.

      [Later recommendation] Regarding the second major point: I suggest either adopting a source-localized EEG approach to assess electrophysiological connectivity or revising all related sections to avoid implying spatial specificity or direct correspondence with fMRI-derived networks. The current approach, which relies on electrode-level power fluctuations, does not support claims about the spatial origin of EEG signals or their alignment with specific connectivity networks.

      We thank the reviewer for this important point, which allows us to clarify the specific and distinct contributions of each imaging modality in our study. Our primary goal for Study 3 was to leverage the high temporal resolution of EEG to identify the characteristic frequency at which the fMRI-defined global connectivity states fluctuate. The study was not designed to infer the spatial origin of these EEG signals, a task for which fMRI is better suited and which we addressed in Studies 1 and 2.

      As the reviewer points out, fronto-central theta is generally associated with the dACC. We agree with this point entirely. We suspect that there is some process linking dACC activation to the identified network fluctuations – some type of relationship that does not manifest in our dynamic functional connectivity analyses – although this is only a hypothesis and one that is beyond the present scope.

      We updated the Discussion to mention these points and acknowledge the ambiguity regarding the correlation between network fluctuation amplitude (fMRI) and Delta/Theta power (EEG) (p. 24):

      “We specifically interpret the fMRI-EEG correlation as reflecting fluctuation speed because we correlated EEG oscillatory power with the fluctuation amplitude computed from fMRI data. Simply correlating EEG power with the average connectivity or the signed difference between posterior-anterior and ventral-dorsal connectivity yields null results (Supplemental Materials 6), suggesting that this is a very particular association, and viewing it as capturing fluctuation amplitude provides a parsimonious explanation. Yet, this correlation may be interpreted in other ways. For example, resting-state Theta is also a signature of drowsiness,[2] which may correlate with PE processing, but perhaps should be understood as some other mechanism. Additionally, Theta is widely seen as a sign of dorsal anterior cingulate cortex activity,[3] and it is unclear how to reconcile this with our claims about network fluctuations. Nonetheless, as we show with simulations (Supplemental Materials 5), a correlation between slow fMRI network fluctuations and fast EEG Delta/Theta oscillations is also consistent with a common global neural process oscillating rapidly and eliciting both measures.”

      Regarding source-localization, several papers have described known limitations of this strategy for drawing precise anatomical inferences,[4–6] and this seems unnecessary given that our fMRI analyses already provide more robust anatomical precision. We intentionally used EEG in our study for what it measures most robustly: millisecond-level temporal dynamics.

      (R1.2a) Examples of problematic language include:

      Line 134: "detection of network oscillations at fast speeds" - the current EEG approach does not measure networks.

      This is an important issue. We acknowledge that our EEG approach does not directly measure fMRI-defined networks. Our claim is inferential, designed to estimate the temporal dynamics of the large-scale fMRI patterns we identified. The correlation between our fMRI-derived fluctuation amplitude (|PA – VD|) and 3-6 Hz EEG power provides suggestive evidence that the transitions between these network states occur at this frequency, rather than being a direct measurement of network oscillations.

      To support the validity of this inference, we performed two key analyses (now in Supplemental Materials). First, a simulation study provides a proof-of-concept, confirming our method can recover the frequency of a fast underlying oscillator from slow fMRI and fast EEG data. Second, a specificity analysis shows the EEG correlation is unique to our measure of fluctuation amplitude and not to simpler measures like overall connectivity strength. These analyses demonstrate that our interpretation is more plausible than alternative explanations.
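      The recovery logic can be illustrated with a toy simulation (our own sketch, not the one in the Supplemental Materials): a 4 Hz oscillator whose amplitude drifts slowly imprints that amplitude both on windowed EEG band power and, after hemodynamic blurring, on a slow fMRI-like signal, and correlating the two singles out the correct frequency band. All parameters below (sampling rate, TR, noise levels, the SPM-style double-gamma HRF) are assumptions for illustration:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import gamma, pearsonr

rng = np.random.default_rng(2)
fs, tr, n_tr = 250, 2.0, 400          # EEG rate (Hz), fMRI TR (s), volumes
win = int(fs * tr)
t = np.arange(win * n_tr) / fs

# Ground truth: a 4 Hz oscillator whose amplitude drifts from volume to volume.
envelope = np.abs(rng.normal(size=n_tr))
eeg = np.repeat(envelope, win) * np.sin(2 * np.pi * 4 * t) + 0.5 * rng.normal(size=t.size)

# Simulated fMRI fluctuation amplitude: the same envelope blurred by an HRF.
ht = np.arange(0, 32, tr)
hrf = gamma.pdf(ht, 6) - gamma.pdf(ht, 16) / 6.0
hrf /= hrf.sum()
bold_amp = np.convolve(envelope, hrf)[:n_tr] + 0.1 * rng.normal(size=n_tr)

# One EEG power spectrum per fMRI volume.
spectra = []
for i in range(n_tr):
    f, pxx = welch(eeg[i * win:(i + 1) * win], fs=fs, nperseg=win)
    spectra.append(pxx)
spectra = np.array(spectra)

# Which band's HRF-convolved power best tracks the fMRI amplitude?
results = {}
for lo, hi in [(1, 3), (3, 6), (6, 10), (10, 20)]:
    band = spectra[:, (f >= lo) & (f < hi)].mean(axis=1)
    results[(lo, hi)] = pearsonr(np.convolve(band, hrf)[:n_tr], bold_amp)[0]
best = max(results, key=results.get)
print(best, round(results[best], 2))  # the 3-6 Hz band, which contains the 4 Hz oscillator
```

      The point of the sketch is the shape of the inference: neither signal resolves the 4 Hz rhythm directly at the TR timescale, yet the band-resolved correlation still identifies its frequency.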

      Overall, we have revised the manuscript to be more conservative in the language employed, such as presenting alternative explanations to the interpretations put forth based on correlative/observational evidence (e.g., our modifications described above in our response to comment R1.2). In addition, we have made changes throughout the report to state the issues related to reverse inference more explicitly and to better communicate that the evidence is suggestive – please see our numerous changes described in our response to comment R3.1. For the statement that the reviewer specifically mentioned here, we revised it to be more cautious (p. 7):

      “Although such speed outpaces the temporal resolution of fMRI, correlating fluctuations in dynamic connectivity measured from fMRI data with EEG oscillations can provide an estimate of the fluctuations’ speed. This interpretation of a correlation again runs up against issues related to reverse inference but would nonetheless serve as initial suggestive evidence that spontaneous transitions between network states occur rapidly.”

      (R1.2b) Line 148: "whether fluctuations between high- and low-PE networks occur sufficiently fast" - this implies spatial localization to networks that is not supported by the EEG analysis.

      Building on our changes described in our immediately prior response, we adjusted our text here to say our analyses searched for evidence consistent with the idea that the network fluctuations occur quickly rather than searching for decisive evidence favoring this idea (p. 7-8):

      “Finally, we examined rs-fMRI-EEG data to assess whether we find parallels consistent with the high/low-PE network fluctuations occurring at fast timescales suitable for the type of cognitive operations typically targeted by PE theories.”

      (R1.2c) Line 480: "how underlying neural oscillators can produce BOLD and EEG measurements" - no evidence is provided that the same neural sources underlie both modalities.

      As described above, these claims are based on the simulation study demonstrating that this is a possibility, and we have revised the manuscript overall to be clearer that this is our interpretation while providing alternative explanations.

      Reviewer #2 (Public review):

      Strengths:

      Clearly, a lot of work and data went into this paper, including 2 task-based fMRI experiments and the resting state data for the same participants, as well as a third EEG-fMRI dataset. Overall, well written with a couple of exceptions on clarity, as per below, and the methodology appears overall sound, with a couple of exceptions listed below that require further justification. It does a good job of acknowledging its own weakness.

      Weaknesses:

      (R2.1) The paper does a good job of acknowledging its greatest weakness, the fact that it relies heavily on reverse inference, but cannot quite resolve it. As the authors put it, "finding the same networks during a prediction error task and during rest does not mean that the networks' engagement during rest reflects prediction error processing". Again, the authors acknowledge the speculative nature of their claims in the discussion, but given that this is the key claim and essence of the paper, it is hard to see how the evidence is compelling to support that claim.

      We thank the reviewer for this comment. We agree that reverse inference is a fundamental challenge and that our central claim requires a particularly high bar of evidence. While no single analysis resolves this issue, our goal was to build a cumulative case that is compelling by converging on the same conclusion from multiple, independent lines of evidence.

      For our investigation, we initially established a task-general signature of prediction error (PE). By showing the same neural pattern represents PE in different contexts, we constrain the reverse inference, making it less likely that our findings are a task-specific artifact and more likely that they reflect the core, underlying process of PE. Building on this, our most compelling evidence comes from linking task and rest at the individual level. We didn't just find the same general network at rest; we showed that an individual’s unique anatomical pattern of PE-related connectivity during the task specifically predicts their own brain's fluctuation patterns at rest. This highly specific, person-by-person correspondence provides a direct bridge between an individual's task-evoked PE processing and their intrinsic, resting-state dynamics. Furthermore, these resting-state fluctuations correlate specifically with the 3-6 Hz theta rhythm—a well-established neural marker for PE.

      While reverse inference remains a fundamental limitation for many studies on resting-state cognition, the aspects mentioned above, we believe, provide suggestive evidence, favoring our PE interpretation. Nonetheless, we have made changes throughout the manuscript to be more conservative in the language we use to describe our results, to make it clear what claims are based on correlative/observational evidence, and to put forth alternative explanations for the identified effects. Please find our numerous changes detailed in our response to comment R3.1.

      (R2.2) Given how uncontrolled cognition is during "resting-state" experiments, the parallel made with prediction errors elicited during a task designed for that effect is a little difficult to make. How often are people really surprised when their brains are "at rest", likely replaying a previously experienced event or planning future actions under their control? It seems more likely to be a very low prediction error scenario, if at all surprising.

      We (and some others) take a broad interpretation of PE and believe it is often more intuitive to think about PE minimization in terms of uncertainty rather than “surprise”; the word “surprise” usually implies a sudden emotive reaction from the violation of expectations, which is not useful here.

      When planning future actions, each step of the plan is spurred by the uncertainty of what is the appropriate action given the scenario set up by prior steps. Each planned step erases some of that uncertainty. For example, you may be mentally simulating a conversation, what you will say, and what another person will say. Each step of this creates uncertainty of “what is the appropriate response?” Each reasoning step addresses contingencies. While planning, you may also uncover more obvious forms of uncertainty, sparking memory retrieval to complete the plan. A resting-state participant may think to cook a frozen pizza when they arrive home, but be uncertain about whether they have any frozen pizzas left, prompting episodic memory retrieval to address this uncertainty. We argue that every planning step or memory retrieval can be productively understood as being sparked by uncertainty/surprise (PE), and the subsequent cognitive response minimizes this uncertainty.

      We updated the Introduction to include a paragraph near the start providing this explanation (p. 3-4):

      “PE minimization may broadly coordinate brain functions of all sorts, including abstract cognitive functions. This includes the types of cognitive processes at play even in the absence of stimuli (e.g., while daydreaming). While it may seem counterintuitive to associate this type of cognition with PE – a concept often tied to external surprises – it has been proposed that the brain's internal generative model is continuously active.[12–14] Spontaneous thought, such as planning a future event or replaying a memory, is not a passive, low-PE process. Rather, it can be seen as a dynamic cycle of generating and resolving internal uncertainty. While daydreaming, you may be reminded of a past conversation, where you wish you had said something different. This situation contains uncertainty about what would have been the best thing to say. Wondering about what you wish you said can be viewed as resolving this uncertainty, in principle, forming a plan if the same situation ever arises again in the future. Each iteration of the simulated conversation repeatedly sparks and then resolves this type of uncertainty.”

      (R2.3) The quantitative comparison between networks under task and rest was done on a small subset of the ROIs rather than on the full network - why? Given how small the correlation between task and rest is (r = 0.021), and that it holds only for part of the networks, the evidence is a little tenuous. Running the analysis for the full networks could strengthen the argument.

      We thank the reviewer for this opportunity to clarify our method. A single correlation between the full, aggregated networks would be conceptually misaligned with what we aimed to assess. To test for a person-specific anatomical correspondence, it is necessary to examine the link between task and rest at a granular level. We therefore asked whether the specific parts of an individual's network most responsive to PE during the task are the same parts that show the strongest fluctuations at rest. Our analysis, performed iteratively across all 3,432 possible ROI subsets, was designed specifically to answer this question, which would be obscured by an aggregated network measure.

      We appreciate the reviewer's concern about the modest effect size (r = .021). However, this must be contextualized, as the short task scan has very low reliability (.08), which imposes a severe statistical ceiling on any possible task-rest correlation. Finding a highly significant effect (p < .001) in the face of such noisy data, therefore, provides robust evidence for a genuine task-rest correspondence.
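      The ceiling argument follows Spearman's attenuation formula, r_observed = r_true × sqrt(rel_task × rel_rest). A quick numeric sketch (only the 0.08 task reliability comes from the paper; the resting-state reliability of 0.50 is our assumption):

```python
import math

# Spearman's attenuation: r_observed = r_true * sqrt(rel_x * rel_y).
rel_task = 0.08  # split-half reliability of the task contrast (reported in the paper)
rel_rest = 0.50  # reliability of the resting-state measure (our assumption)

for r_true in (0.25, 0.50, 1.00):
    ceiling = r_true * math.sqrt(rel_task * rel_rest)
    print(f"true r = {r_true:.2f} -> observable r <= {ceiling:.3f}")
# true r = 0.25 -> observable r <= 0.050
# true r = 0.50 -> observable r <= 0.100
# true r = 1.00 -> observable r <= 0.200
```

      Under these reliabilities, even a perfect underlying correspondence caps the observable correlation at 0.20, which is the sense in which r = .021 at p < .001 remains informative.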

      We updated the Discussion to discuss this point (p. 22-23):

      “A key finding supporting our interpretation is the significant link between individual differences in task-evoked PE responses and resting-state fluctuations. One might initially view the effect size of this correspondence (r = .021) as modest. However, this interpretation must be contextualized by the considerable measurement noise inherent in short task-fMRI scans; the split-half reliability of the task contrast was only .08. This low reliability imposes a severe statistical ceiling on any possible task-rest correlation. Therefore, detecting a highly significant (p < .001) relationship despite this constraint provides robust evidence for a genuine link. Furthermore, our analytical approach, which iteratively examined thousands of ROI subsets rather than one aggregated network, was intentionally granular. The goal was not simply to correlate two global measures, but to test for a person-specific anatomical correspondence – that is, whether the specific parts of an individual's network most sensitive to PE during the task are the same parts that fluctuate most strongly at rest. An aggregate analysis would obscure this critical spatial specificity. Taken together, this granular analysis provides compelling evidence for an anatomically consistent fingerprint of PE processing that bridges task-evoked activity and spontaneous resting-state dynamics, strengthening our central claim.”

      (R2.4) Looking at the results in Figure 2C, the four-quadrant description of the networks labelled for low and high PE appears a little simplistic. The authors state that this four-quadrant description omits some ROIs as motivated by prior knowledge. This would benefit from a more comprehensive justification. Which ROIs are excluded, and what is the evidence for exclusion?

      Our four-quadrant model is a principled simplification designed to distill the dominant, large-scale connectivity patterns from the complex modularity results. This approach focuses on coherent, well-documented anatomical streams while setting aside a few anatomically distant and disjoint ROIs that were less central to the main modules. This heuristic additionally unlocks more robust and novel analyses.

      The two low-PE posterior-anterior (PA) pathways are grounded in canonical processing streams. (i) The OC-ATL connection mirrors the ventral visual stream (the “what” pathway), which is fundamental for object recognition and is upregulated during the smooth processing of expected stimuli. (ii) The IPL-LPFC connection represents a core axis of the dorsal attention stream and the Fronto-Parietal Control Network (FPCN), reflecting the maintenance of top-down cognitive control when information is predictable; the IPL-LPFC module excludes ROIs in the middle temporal gyrus, which are often associated with the FPCN but are not covered here.

      In contrast, the two high-PE ventral-dorsal (VD) pathways reflect processes for resolving surprise and conflict. (i) The OC-IPL connection is a classic signature of attentional reorienting, where unexpected sensory input (high PE) triggers a necessary shift in attention; the OC-IPL module excludes some ROIs that are anterior to the occipital lobe and enter the fusiform gyrus and inferior temporal lobe. (ii) The ATL-LPFC connection aligns with mechanisms for semantic re-evaluation, engaging prefrontal control regions to update a mental model in the face of incongruent information.

      Beyond its functional/anatomical grounding, this simplification provides powerful methodological and statistical advantages. It establishes a symmetrical framework that makes our dynamic connectivity analyses tractable, such as our “cube” analysis of state transitions, which required overlapping modules. Critically, this model also offers a statistical safeguard. By ensuring each quadrant contributes to both low- and high-PE connectivity patterns, we eliminate confounds like region-specific signal variance or global connectivity. This design choice isolates the phenomenon to the pattern of connectivity itself (posterior-anterior vs. ventral-dorsal), making our interpretation more robust.

      We updated the end of the Study 1A results (p. 10-11):

      “Some ROIs appear in Figure 2C but are excluded from the four targeted quadrants (Figures 2C & 2D) – e.g., posterior inferior temporal lobe and fusiform ROIs are excluded from the OC-IPL module, and middle temporal gyrus ROIs are excluded from the IPL-LPFC modules. These exclusions, in favor of a four-quadrant interpretation, are motivated by existing knowledge of prominent structural pathways among these quadrants. This interpretation is also supported by classifier-based analyses showing connectivity within each quadrant is significantly influenced by PE (Supplemental Materials 2.2), along with analyses of single-region activity showing that these areas also respond to PE independently (Supplemental Materials 3). Hence, we proceeded with further analyses of these quadrants’ connections, which summarize PE’s global brain effects.

      “This four-quadrant setup also imparts analytical benefits. First, this simplified structure may better generalize across PE tasks, and Study 1B would aim to replicate these results with a different design. Second, the four quadrants mean that each ROI contributes to both the posterior-anterior and ventral-dorsal modules, which benefits later analyses and rules out confounds such as PE eliciting increased/decreased connectivity between an ROI and the rest of the brain. An additional, less key benefit is that this setup makes it easier to evaluate whether the same phenomena arise using a different atlas (Supplemental Materials Y).”

      (R2.5) The EEG-fMRI analysis claiming 3-6Hz fluctuations for PE is hard to reconcile with the fact that fMRI captures activity that is a lot slower, while some PEs are as fast as 150 ms. The discussion acknowledges this but doesn't seem to resolve it - would benefit from a more comprehensive argument.

      We thank the reviewer for raising this important point, which allows us to clarify the logic of our multimodal analysis. Our analysis does not claim that the fMRI BOLD signal itself oscillates at 3-6 Hz. Instead, it is based on the principle that the intensity of a fast neural process can be reflected in the magnitude of the slow BOLD response. It’s akin to using a long-exposure photograph to capture a fast-moving object; while the individual movements are blurred, the intensity of the blur in the photo serves as a proxy for the intensity of the underlying motion. In our case, the magnitude of the fMRI network difference (|PA – VD|) acts as the "blur," reflecting the intensity of the rapid fluctuations between states within that time window.

      Following this logic, we correlated this slow-moving fMRI metric with the power of the fast EEG rhythms, which reflects their amplitude. To bridge the different timescales, we averaged the EEG power over each fMRI time window and convolved it with the standard hemodynamic response function (HRF) – a crucial step to align the timing of the neural and metabolic signals. The resulting significant correlation specifically in the 3-6 Hz band demonstrates that when this rhythm is stronger, the fMRI data shows a greater divergence between network states. This allows us to infer the characteristic frequency of the underlying neural fluctuations without directly measuring them at that speed with fMRI, thus reconciling the two timescales.
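      The three steps described here can be sketched compactly. The arrays below are random stand-ins for real data, and the double-gamma HRF uses common SPM-style parameters that may differ from the paper's exact choice:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import gamma, pearsonr

rng = np.random.default_rng(1)
fs, tr, n_tr = 250, 2.0, 300          # EEG rate (Hz), fMRI TR (s), volumes
win = int(fs * tr)
eeg = rng.normal(size=win * n_tr)     # stand-in single-channel EEG
fluct_amp = rng.normal(size=n_tr)     # stand-in |PA - VD| per volume

# 1) Average 3-6 Hz power within each TR-long window.
power = np.empty(n_tr)
for i in range(n_tr):
    f, pxx = welch(eeg[i * win:(i + 1) * win], fs=fs, nperseg=win)
    power[i] = pxx[(f >= 3) & (f <= 6)].mean()

# 2) Convolve the window-wise power with a double-gamma HRF sampled at the TR,
#    aligning the fast neural measure with the sluggish hemodynamic one.
ht = np.arange(0, 32, tr)
hrf = gamma.pdf(ht, 6) - gamma.pdf(ht, 16) / 6.0
hrf /= hrf.sum()
power_hrf = np.convolve(power, hrf)[:n_tr]

# 3) Correlate the HRF-convolved theta power with the fMRI fluctuation amplitude.
r, _ = pearsonr(power_hrf, fluct_amp)
print(f"r = {r:.3f}")                 # near zero here, since the inputs are independent noise
```

      With real EEG and fMRI inputs in place of the noise arrays, a significant positive r specifically in the 3-6 Hz band, and not in control bands, is the pattern the authors report.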

      Reviewer #3 (Public review):

      Bogdan et al. present an intriguing and timely investigation into the intrinsic dynamics of prediction error (PE)-related brain states. The manuscript is grounded in an intuitive and compelling theoretical idea: that the brain alternates between high and low PE states even at rest, potentially reflecting an intrinsic drive toward predictive minimization. The authors employ a creative analytic framework combining different prediction tasks and imaging modalities. They shared open code, which will be valuable for future work.

      (R3.1) Consistency in Theoretical Framing

      The title, abstract, and introduction suggest inconsistent theoretical goals of the study.

      The title suggests that the goal is to test whether there are intrinsic fluctuations in high and low PE states at rest. The abstract and introduction suggest that the goal is to test whether the brain intrinsically minimizes PE and whether this minimization recruits global brain networks. My comments here are that a) these are fundamentally different claims, and b) both are challenging to falsify. For one, task-like recurrence of PE states during rest might reflect the wiring and geometry of the functional organization of the brain emerging from neurobiological constraints or developmental processes (e.g., experience), but showing that this mirroring exists because of the need to minimize PE requires establishing a robust relationship with behavior or showing a causal effect (e.g., that interrupting intrinsic PE state fluctuations affects prediction).

      The global PE hypothesis-"PE minimization is a principle that broadly coordinates brain functions of all sorts, including abstract cognitive functions"-is more suitable for discussion rather than the main claim in the abstract, introduction, and all throughout the paper.

      Given the above, I recommend that the authors clarify and align their core theoretical goals across the title, abstract, introduction, and results. If the focus is on identifying fluctuations that resemble task-defined PE states at rest, the language should reflect that more narrowly, and save broader claims about global PE minimization for the discussion. This hypothesis also needs to be contextualized within prior work. I'd like to see if there is similar evidence in the literature using animal models.

      Thank you for bringing up this issue. We have made changes throughout the paper to address these points. First, we have omitted reference to a “global PE hypothesis” from the Abstract and Introduction, in favor of structuring the Introduction in terms of a falsifiable question (p. 4):

      “We pursued this goal using three studies (Figure 1) that collectively targeted a specific question: Do the task-defined connectivity signatures of high vs. low PE also recur during rest, and if so, how does the brain transition between exhibiting high/low signatures?”

      We made changes later in the Introduction to clarify that the investigation is based on correlative evidence and requires interpretations that may be debated (p. 5-7):

      “Although this does not entirely address the reverse inference dilemma and can only produce correlative evidence, the present research nonetheless investigates these widely speculated upon PE ideas more directly than any prior work.

      Although such speed outpaces the temporal resolution of fMRI, correlating fluctuations in dynamic connectivity measured from fMRI data with EEG oscillations can provide an estimate of the fluctuations’ speed. This interpretation of a correlation again runs up against issues related to reverse inference but would nonetheless serve as initial suggestive evidence that spontaneous transitions between network states occur rapidly.

      Second, we examined the recruitment of these networks during rs-fMRI, and although the problems related to reverse inference are impossible to overcome fully, we engage with this issue by linking rs-fMRI data directly to task-fMRI data of the same participants, which can provide suggestive evidence that the same neural mechanisms are at play in both.”

      We made changes throughout the Results now better describing the results as consistent with a hypothesis rather than demonstrating it (p. 12-19):

      “In other words, we essentially asked whether resting-state participants are sometimes in low PE states and sometimes in high PE states, which would be consistent with spontaneous PE processing in the absence of stimuli.

      These emerging states overlap strikingly with the previous task effects of PE, suggesting that rs-fMRI scans exhibit fluctuations that resemble the signatures of low- and high-PE states. 

      To be clear, this does not entirely assuage concerns about reverse inference, which would require a type of causal manipulation that is difficult (if not impossible) to perform in a resting state scan. Nonetheless, these results provide further evidence consistent with our interpretation that the resting brain spontaneously fluctuates between high/low PE network states.

      These patterns are most consistent with a characteristic timescale near 3–6 Hz for the amplitude of the putative high/low-PE fluctuations. This is notably consistent with established links between PE and Delta/Theta and is further consistent with an interpretation in which these fluctuations relate to PE-related processing during rest.”

      We have also made targeted edits to the Discussion to present the findings in a more cautious way, more clearly state what is our interpretation, and provide alternative explanations (p. 19-26):

      “The present research conducted task-fMRI, rs-fMRI, and rs-fMRI-EEG studies to clarify whether PE elicits global connectivity effects and whether the signatures of PE processing arise spontaneously during rest. This investigation carries implications for how PE minimization may characterize abstract task-general cognitive processes. […] Although there are different ways to interpret this correlation, it is consistent with high/low PE states generally fluctuating at 3-6 Hz during rest. Below, we discuss these three studies’ findings.

      Our rs-fMRI investigation examined whether resting dynamics resemble the task-defined connectivity signatures of high vs. low PE, independent of the type of stimulus encountered. The resting-state analyses indeed found that, even at rest, participants’ brains fluctuated between strong ventral-dorsal connectivity and strong posterior-anterior connectivity, consistent with shifts between states of high and low PE. This conclusion is based on correlative/observational evidence and so may be controversial as it relies on reverse inference.

      These patterns resemble global connectivity signatures seen in resting-state participants, and correlations between fMRI and EEG data yield associations, consistent with participants fluctuating between high-PE (ventral-dorsal) and low-PE (posterior-anterior) states at 3-6 Hz. Although definitively testing these ideas is challenging, given that rs-fMRI is defined by the absence of any causal manipulations, our results provide evidence consistent with PE minimization playing a role beyond stimulus processing.”

      (R3.2) Interpretation of PE-Related Fluctuations at Rest and Its Functional Relevance. It would strengthen the paper to clarify what is meant by "intrinsic" state fluctuations. Intrinsic might mean task-independent, trait-like, or spontaneously generated. Which do the authors mean here? Is the key prediction that these fluctuations will persist in the absence of a prediction task?

      Of the three terms the reviewer mentioned, “spontaneous” and “task-independent” are the most accurate descriptors. We conceptualize these fluctuations as a continuous background process that persists across all facets of cognition, without requiring a task explicitly designed to elicit prediction error – although we, along with other predictive coding papers, would argue that all cognitive tasks are fundamentally rooted in PE mechanisms and thus anything can be seen as a “prediction task” (see our response to comment R2.2 for our changes to the Introduction that provide more intuition for this point). The proposed interactions can be seen as analogous to cortico-basal-thalamic loops, which are engaged across a vast and diverse array of cognitive processes.

      The prior submission only used the word “intrinsic” in the title. We have since revised it to “spontaneous,” which is more specific than “intrinsic,” and we believe clearer for a title than “task-independent” (p. 1): “Spontaneous fluctuations in global connectivity reflect transitions between states of high and low prediction error”

      We have also made tweaks throughout the manuscript to now use “spontaneously” throughout (it now appears 8 times in the paper).

      Regardless of the intrinsic argument, I find it challenging to interpret the results as evidence of PE fluctuations at rest. What the authors show directly is that the degree to which a subset of regions within a PE network discriminates high vs. low PE during task correlates with the magnitude of separation between high and low PE states during rest. While this is an interesting relationship, it does not establish that the resting-state brain spontaneously alternates between high and low PE states, nor that it does so in a functionally meaningful way that is related to behavior. How can we rule out brain dynamics of other processes, such as arousal, that also rise and fall with PE? I understand the authors' intention to address the reverse inference concern by testing whether "a participant's unique connectivity response to PE in the reward-processing task should match their specific patterns of resting-state fluctuation". However, I'm not fully convinced that this analysis establishes the functional role of the identified modules to PE because of the following:

      Theoretically, relating the activities of the identified modules directly to behavior would demonstrate a stronger functional role.

      (R3.2a) Across participants: Do individuals who exhibit stronger or more distinct PE-related fluctuations at rest also perform better on tasks that require prediction or inference? This could be assessed using the HCP prediction task, though if individual variability is limited (e.g., due to ceiling effects), I would suggest exploring a dataset with a prediction task that has greater behavioral variance.

      This is a good idea, but unfortunately difficult to test with our present data. The HCP gambling task used in our study was not designed to measure individual differences in prediction or inference and likely suffers from ceiling effects. Because the task outcomes are predetermined and not linked to participants' choices, there is very little meaningful behavioral variance in performance to correlate with our resting-state fluctuation measure.

      While we agree that exploring a different dataset with a more suitable task would be ideal, we believe this falls outside the scope of the present manuscript. Although these results would be informative, they would ultimately still not be a panacea for the reverse inference issues.

      Or even more broadly, does this variability in resting state PE state fluctuations predict general cognitive abilities like WM and attention (which the HCP dataset also provides)? I appreciate the inclusion of the win-loss control, and I can see the intention to address specificity. This would test whether PE state fluctuations reflect something about general cognition, but also above and beyond these attentional or WM processes that we know are fluctuating.

      This is a helpful suggestion, motivating new analyses: We measured each participant’s resting-state fluctuation amplitude and correlated it with the individual difference measures provided with the HCP data. We computed each participant’s fluctuation amplitude as the average absolute difference between posterior-anterior and ventral-dorsal connectivity; this is the average of the TR-by-TR fMRI amplitude measure from Study 3. We then computed the Spearman correlation between this score and each of the ~200 individual difference measures provided with the HCP dataset (e.g., measures of WM performance, intelligence, or personality), while correcting for multiple hypotheses using the False Discovery Rate approach.[18]

      We found a robust negative association with age, whereby older participants tend to display weaker fluctuations (r = -.16, p < .001). We additionally found a positive association with the age-adjusted score on the picture sequence task (r = .12, p<sub>corrected</sub> = .03) and a negative association with performance on the card sort task (r = -.12, p<sub>corrected</sub> = .046). It is difficult to interpret these associations without being speculative, given that fluctuation amplitude shows a positive association with performance on one task and a negative association on another. We have added these correlation results as Supplemental Materials 8 (SM p. 11):

      “(8) Behavioral differences related to fluctuation amplitude 

      To investigate whether individual differences in the magnitude of resting-state PE-state fluctuations predict general cognitive abilities, we correlated our resting-state fluctuation measure with the cognitive and demographic variables provided in the HCP dataset.

      (8.1) Methods

      For each of the 1,000 participants, we calculated a single fluctuation amplitude score. This score was defined as the average absolute difference between the time-varying posterior-anterior (PA) and ventral-dorsal (VD) connectivity during the resting-state fMRI scan (the average of the TR-by-TR measure used for Study 3). We then computed the Spearman correlation between this score and each of the approximately 200 individual difference measures provided in the HCP dataset. We corrected for multiple comparisons using the False Discovery Rate (FDR) approach.

      (8.2) Results

      The correlations revealed a robust negative association between fluctuation amplitude and age, indicating that older participants tended to display weaker fluctuations (r = -.16, p<sub>corrected</sub> < .001). After correction, two significant correlations with cognitive performance emerged: (i) a positive association with the age-adjusted score on the Picture Sequence Memory Test (r = .12, p<sub>corrected</sub> = .03), (ii) a negative association with performance on the Card Sort Task (r = -.12, p<sub>corrected</sub> = .046). As greater fluctuation amplitude is linked to better performance on one task but worse performance on another, it is unclear how to interpret these findings.”

      We updated the main text Methods to direct readers to this content (p. 39-40):

      “(4.4.3) Links between network fluctuations and behavior

      We considered whether the extent of PE-related network expression states during resting-state is behaviorally relevant. We specifically investigated whether individual differences in the overall magnitude of resting-state fluctuations could predict individual difference measures, provided with the HCP dataset. This yielded a significant association with age, whereby older participants tended to display weaker fluctuations. However, associations with cognitive measures were limited. A full description of these analyses is provided in Supplemental Materials 8.”
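The screening procedure described above (one amplitude score per participant, Spearman correlations against ~200 measures, FDR control) can be sketched as follows. The data are simulated and the Benjamini-Hochberg helper is a generic implementation, not the authors' code.

```python
import numpy as np
from scipy.stats import spearmanr

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg: boolean mask of tests surviving FDR at level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    adjusted = p[order] * m / (np.arange(m) + 1)  # p_(k) * m / k
    passed = adjusted <= q
    mask = np.zeros(m, dtype=bool)
    if passed.any():
        # Reject all hypotheses up to the largest rank that passes the threshold
        k = int(np.max(np.nonzero(passed)[0]))
        mask[order[:k + 1]] = True
    return mask

# Hypothetical data: one amplitude score per participant, 200 behavioral measures
rng = np.random.default_rng(1)
n_sub, n_measures = 1000, 200
amplitude = rng.normal(size=n_sub)            # stand-in for mean |PA - VD|
behavior = rng.normal(size=(n_measures, n_sub))
behavior[0] += 0.3 * amplitude                # plant one true association

pvals = np.array([spearmanr(amplitude, b).pvalue for b in behavior])
survivors = fdr_bh(pvals, q=0.05)             # survivors[0] should be True
```

FDR control is the natural choice here: with ~200 exploratory tests, Bonferroni would be needlessly conservative, while BH bounds the expected proportion of false discoveries among the reported effects.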

      (R3.2b) Within participants: Do momentary increases in PE-network expression during tasks relate to better or faster prediction? In other words, is there evidence that stronger expression of PE-related states is associated with better behavioral outcomes?

      This is a good question that probes the direct behavioral relevance of these network states on a trial-by-trial basis. We agree with the reviewer's intuition; in principle, one would expect a stronger expression of the low-PE network state on trials where a participant correctly and quickly gives a high likelihood rating to a predictable stimulus.

      Following this suggestion, we performed a new analysis in Study 1A to test this. We found that network expression was indeed linked to participants’ likelihood ratings: higher likelihood ratings correspond to stronger posterior-anterior connectivity, whereas lower ratings correspond to stronger ventral-dorsal connectivity (Connectivity-Direction × likelihood, β [standardized] = .28, p = .02). Yet, this is not a strong test of the reviewer’s hypothesis, and various exploratory analyses of response time yield null results (p > .05). We suspect the effect is too subtle for us to have sufficient statistical power. A comparable analysis was not feasible for Study 1B, as its design does not provide an analogous behavioral measure of trial-by-trial prediction success.

      (R3.3) A priori Hypothesis for EEG Frequency Analysis.

      It's unclear how to interpret the finding that fMRI fluctuations in the defined modules correlate with frontal Delta/Theta power, specifically in the 3-6 Hz range. However, in the EEG literature, this frequency band is most commonly associated with low arousal, drowsiness, and mind wandering in resting, awake adults, not uniquely with prediction error processing. An a priori hypothesis is lacking here: what specific frequency band would we expect to track spontaneous PE signals at rest, and why? Without this, it is difficult to separate a PE-based interpretation from more general arousal or vigilance fluctuations.

      This point gets to the heart of the challenge with reverse inference in resting-state fMRI. We agree that an interpretation based on general arousal or drowsiness is a potential alternative that must be considered. However, what makes a simple arousal interpretation challenging is the highly specific nature of our fMRI-EEG association. As shown in our confirmatory analyses (Supplemental Materials 6), the correlation with 3-6 Hz power was found exclusively with the absolute difference between our two PE-related network states (|PA – VD|)—a measure of fluctuation amplitude. We found no significant relationship with the signed difference (a bias toward one state) or the sum (the overall level of connectivity). This specificity presents a puzzle for a simple drowsiness account; it seems less plausible that drowsiness would manifest specifically as the intensity of fluctuation between two complex cognitive networks, rather than as a more straightforward change in overall connectivity. While we cannot definitively rule out contributions from arousal, the specificity of our finding provides stronger evidence for a structured cognitive process, like PE, than for a general, undifferentiated state. 

      We updated the Discussion to make the argument above and also to remind readers that alternative explanations, such as ones based on drowsiness, are possible (p. 24):

      “We specifically interpret the fMRI-EEG correlation as reflecting fluctuation speed because we correlated EEG oscillatory power with the fluctuation amplitude computed from fMRI data. Simply correlating EEG power with the average connectivity or the signed difference between posterior-anterior and ventral-dorsal connectivity yields null results (Supplemental Materials 6), suggesting that this is a very particular association, and viewing it as capturing fluctuation amplitude provides a parsimonious explanation. Yet, this correlation may be interpreted in other ways. For example, resting-state Theta is also a signature of drowsiness,[2] which may correlate with PE processing but may instead reflect some other mechanism.”

      (R3.4) Significance Assessment

      The significance of the correlation above and all other correlation analyses should be assessed through a permutation test rather than a single parametric t-test against zero. There are a few reasons: a) EEG and fMRI time series are autocorrelated, violating the independence assumption of parametric tests;

      b) Standard t-tests can underestimate the true null distribution's variance, because EEG-fMRI correlations often involve shared slow drifts or noise sources, which can yield spurious correlations and inflate false positives unless tested against an appropriate null.

      Building a null distribution that preserves the slow drifts, for example, would help us understand how likely it is for the two time series to be correlated when the slow drifts are still present, and how much better the current correlation is, compared to this more conservative null. You can perform this by phase randomizing one of the two time courses N times (e.g., N=1000), which maintains the autocorrelation structure while breaking any true co-occurrence in patterns between the two time series, and compute a non-parametric p-value. I suggest using this approach in all correlation analyses between two time series.

      This is an important statistical point to clarify, and the suggested analysis is valuable. The reviewer is correct that the raw fMRI and EEG time series are autocorrelated. However, because our statistical approach is a two-level analysis, we reasoned that non-independence at the correlation level would not invalidate the higher-level t-test. The t-test’s assumption of independence applies to the individual participants' coefficients, which are independent across participants. Thus, we believe that our initial approach is broadly appropriate, and its simplicity allows it to be easily communicated.

      Nonetheless, the permutation-testing procedure that the Reviewer describes is an important analysis to run, given that permutation-testing is the gold standard for evaluating statistical significance, and it can confirm that our reasoning above holds. We thus performed the analysis as the reviewer described. For each permutation, we phase-randomized each participant’s fMRI fluctuation amplitude time series: we randomized the Fourier phases of the |PA–VD| series (within run) while retaining the original amplitude spectrum, and inverse transforms yielded real surrogates with the same power spectrum. Each participant’s phase-randomized data was then submitted to the original analysis for each oscillatory power band, generating one mean correlation per band. This was repeated 1,000 times.

      Across the five bands, the grand mean correlation of the null distribution is near zero (M<sub>r</sub> = .0006), and its 97.5<sup>th</sup> percentile critical value is r ≈ .025; this percentile corresponds to the upper end of a 95% confidence interval for a band’s correlation, and the threshold differs minimally across bands (.024 < rs < .026). Our original correlation coefficients for Delta (M<sub>r</sub> = .042) and Theta (M<sub>r</sub> = .041), which our conclusions focused on, remained significant (p ≤ .002). Family-wise error-rate correction can be performed by taking the highest correlation across any band for a given permutation, and the Delta and Theta effects remain significant under this correction as well (p<sub>FWE-corrected</sub> ≤ .003); Reviewer comment R1.4c had previously requested that we employ family-wise error correction.

      These correlations were previously reported in Table 1, and we updated the caption to note what effects remain significant when evaluated using permutation-testing and with family-wise error correction (p. 19):

      “The effects for Delta, Theta, Beta, and Gamma remain significant if significance testing is instead performed using permutation-testing and with family-wise error rate correction (p<sub>corrected</sub> < .05).”

      We updated the Methods to describe the permutation-testing analysis (p. 43):

      “To confirm the significance of our fMRI-EEG correlations with a non-parametric approach, we performed a group-level permutation-test. For each of 1,000 permutations, we phase-randomized the fMRI fluctuation amplitude time series. Specifically, we randomized the Fourier phases of the |PA–VD| series (within run), while retaining the original amplitude spectrum; inverse transforms yielded real surrogates with the same power spectrum. This procedure breaks the true temporal relationship between the fMRI and EEG data while preserving its structure. We then re-computed the mean Spearman correlation for each frequency band using this phase-randomized data. We evaluated significance using a family-wise error correction approach that accounts for us analyzing five oscillatory power bands. We thus create a null distribution composed of the maximum correlation value observed across all frequency bands from each permutation. Our observed correlations were then tested for significance against this distribution of maximums.”
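The phase-randomization surrogate described in this passage can be sketched as follows. The input series is a simulated autocorrelated signal standing in for the |PA–VD| time course, and the function is a generic Fourier-phase randomizer, not the authors' implementation.

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate with the same power spectrum as x but randomized Fourier phases."""
    n = len(x)
    f = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(f))
    fr = np.abs(f) * np.exp(1j * phases)
    fr[0] = f[0]            # keep the DC component, so the mean is preserved
    if n % 2 == 0:
        fr[-1] = f[-1]      # the Nyquist bin must stay real for a real signal
    return np.fft.irfft(fr, n=n)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=512))   # strongly autocorrelated toy series
s = phase_randomize(x, rng)

# Same amplitude spectrum, scrambled temporal alignment
orig_spec = np.abs(np.fft.rfft(x))
surr_spec = np.abs(np.fft.rfft(s))
```

Correlating such surrogates with the untouched EEG series, many times over, yields a null distribution that retains each signal's autocorrelation and slow drifts, which is exactly what the reviewer's concern calls for; taking the maximum correlation across bands per permutation then gives the family-wise corrected null.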

      (R3.5) Analysis choices

      If I'm understanding correctly, the algorithm used to identify modules does so by assigning nodes to communities, but it does not itself restrict what edges can be formed from these modules. This makes me wonder whether the decision to focus only on connections between adjacent modules, rather than considering the full connectivity, was an analytic choice by the authors. If so, could you clarify the rationale? In particular, what justifies assuming that the gradient of PE states should be captured by edges formed only between nearby modules (as shown in Figure 2E and Figure 4), rather than by the full connectivity matrix? If this restriction is instead a by-product of the algorithm, please explain why this outcome is appropriate for detecting a global signature of PE states in both task and rest.

      We discuss this matter in our response to comment R2.(4).

      When assessing the correspondence across task-fMRI and rs-fMRI in section 2.2.2, why was the pattern during task calculated from selecting a pair of bilateral ROIs (resulting in a group of eight ROIs), and the resting state pattern calculated from posterior-anterior/ventral-dorsal fluctuation modules? Doesn't it make more sense to align the two measures? For example, calculating task effects on these same modules during task and rest?

      We thank the reviewer for this question, as it highlights a point in our methods that we could have explained more clearly. The reviewer is correct that the two measures must be aligned, and we can confirm that they were indeed perfectly matched.

      For the analysis in Section 2.2.2, both the task and resting-state measures were calculated on the exact same anatomical substrate for each comparison. The analysis iteratively selected a symmetrical subset of eight ROIs from our four larger quadrants. For each of these 3,432 iterations, we computed the task-fMRI PE effect (the Connectivity Direction × PE interaction) and the resting-state fluctuation amplitude (E[|PA – VD|]) using the identical set of eight ROIs. The goal of this analysis was precisely to test whether the fine-grained anatomical pattern of these effects correlated within an individual across the task and rest states. We will revise the text in Section 2.2.2 to make this direct alignment of the two measures more explicit.

      Recommendations for authors:

      Reviewer #1 (Recommendations for authors):

      (R1.3) Several prior studies have described co-activation or connectivity "templates" that spontaneously alternate during rest and task states, and are linked to behavioral variability. While they are interpreted differently in terms of cognitive function (e.g., in terms of sustained attention: Monica Rosenberg; alertness: Catie Chang), the relationship between these previously reported templates and those identified in the current study warrants discussion. Are the current templates spatially compatible with prior findings while offering new functional interpretations beyond those already proposed in the literature? Or do they represent spatially novel patterns?

      Thank you for this suggestion. Broadly, we do not mean to propose spatially novel patterns but rather focus on how these are repurposed for PE processing. In the Discussion, we link our identified connectivity states to established networks (e.g., the FPCN). We updated this paragraph to mention that these patterns are largely not spatially novel (p. 20):

      “The connectivity patterns put forth are, for the most part, not spatially novel and instead overlap heavily with prior functional and anatomical findings.”

      Regarding the specific networks covered in the prior work by Rosenberg and Chang that the reviewer seems to be referring to,[7,8] this research has emphasized networks anchored heavily in sensorimotor, subcortical–cerebellar, and medial frontal circuits, which mostly do not overlap with the connectivity effects we put forth.

      (R1.4) Additional points:

      (R1.4a) I do not think that the logic for taking the absolute difference of fMRI connectivity is convincing. What happens if the sign of the difference is maintained?

      Thank you for pointing out this area that requires clarification. Our analysis targets the amplitude of the fluctuation between brain states, not the direction. We define high fluctuation amplitude as moments when the brain is strongly in either the PA state (PA > VD) or the VD state (VD > PA). The absolute difference |PA – VD| correctly quantifies this intensity, whereas a signed difference would conflate these two distinct high-amplitude moments. Our simulation study (Supplemental Materials, Section 5) provides the theoretical validation for this logic, showing how this absolute difference measure in slow fMRI data can track the amplitude of a fast underlying neural oscillator.
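A toy simulation illustrates why the absolute difference is the appropriate summary here: if the brain alternates between a PA-dominant and a VD-dominant state, the signed difference averages toward zero, while |PA – VD| stays high. The generative numbers below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trs = 400
# Hypothetical per-TR connectivity estimates: at each TR the brain is either
# PA-dominant (state = +1) or VD-dominant (state = -1), plus measurement noise
state = rng.choice([-1.0, 1.0], size=n_trs)
pa = 0.5 * state + rng.normal(scale=0.1, size=n_trs)
vd = -0.5 * state + rng.normal(scale=0.1, size=n_trs)

signed = pa - vd               # conflates the two high-amplitude states
amplitude = np.abs(pa - vd)    # intensity of being in *either* state

# signed.mean() sits near 0 because the two states cancel, whereas
# amplitude.mean() sits near 1, registering that some state dominates each TR
```

This is the distinction at issue: a signed difference measures a bias toward one state, while the absolute difference measures how strongly the brain occupies any state at all.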

      When the analysis is tested in terms of the signed difference, as suggested by the Reviewer, the association between the fMRI data and EEG power is nonsignificant for each power band (ps<sub>uncorrected</sub> ≥ .47). We updated Supplemental Materials 6 to include these results. Previously, this section included the fluctuation amplitude (fMRI) × EEG power results while controlling for: (i) the signed difference between posterior-anterior and ventral-dorsal connectivity, (ii) the sum of posterior-anterior and ventral-dorsal connectivity, and (iii) the absolute value of the sum of posterior-anterior and ventral-dorsal connectivity. For completeness, we also now report the correlation between each EEG power band and each of those other three measures (SM, p. 9):

      “We additionally tested the relationship between each of those three measures and the five EEG oscillation bands. Across the 15 tests, there were no significant associations (ps<sub>uncorrected</sub> ≥ .04); one uncorrected p-value reached p = .044, although one such value is expected given that there were 15 tests. Thus, the association between EEG oscillations and the fMRI measure is specific to the absolute difference (i.e., amplitude) measure.”

      (R1.4b) Reasoning of focus on frontal and theta band is weak, and described as "typical" (line 359) based on a single study.

      Sorry about this. There is a rich literature on the link between frontal theta and prediction error,[3,9–11] and we updated the Introduction to include more references to this work (p. 18): “The analysis was first done using power averaged across frontal electrodes, as these are the typical focus of PE research on oscillations.[3,9–11]”

      We have also updated the Methods to cite more studies that motivate our electrode choice (p. 41): “The analyses first targeted five midline frontal electrodes (F3, F1, Fz, F2, F4; BioSemi64 layout), given that this frontal row is typically the focus of executive-function PE research on oscillations.[9–11]”

      (R1.4c) No correction appears to have been applied for the association between EEG power and fMRI connectivity. Given that 100 frequency bins were collapsed into 5 canonical bands, a correction for 5 comparisons seems appropriate. Notably, the strongest effects in the delta and theta bands (particularly at fronto-central electrodes) may still survive correction, but this should be explicitly tested and reported.

      Thanks for this suggestion. We updated the Table 1 caption to mention what results survive family-wise error rate correction – as the reviewer suggests, the Delta/Theta effects would survive Bonferroni correction for five tests, although per a later comment suggesting that we evaluate statistical significance with a permutation-testing approach (comment R3.4), we instead report family-wise error correction based on that. The revised caption is as follows (p. 19):

      “The effects for Delta, Theta, Beta, and Gamma remain significant if significance testing is instead performed using permutation-testing and with family-wise error rate correction (p<sub>corrected</sub> < .05).”

      (R1.4d) Line 135. Not sure I understand what you mean by "moods". What is the overall point here?

      The overall argument is that the fluctuations occur rapidly rather than slowly. By slow “moods” we refer to how a participant could enter a high anxiety state of >10 seconds, linked to high PE fluctuations, and then shift into a low anxiety state, linked to low PE fluctuations. We argue that this is not occurring. Regardless, we recognize that referring to lengths of time as short as 10 seconds or so is not a typical use of the word “mood” and is potentially ambiguous, so we have omitted this statement, which was originally on page 6: “Identifying subsecond fluctuations would broaden the relevance of the present results, as they rule out that the PE states derive from various moods.”

      (R1.4e) Line 100. "Few prior PE studies have targeted PE, contrasting the hundreds that have targeted BOLD". I don't understand this sentence. It's presumably about connectivity vs activity?

      Yes, sorry about this typo. The reviewer is correct, and that sentence was meant to mention connectivity. We corrected (p. 5): “Few prior PE studies have targeted connectivity, contrasting the hundreds that have targeted BOLD.”

      (R1.4f) Line 373: "0-0.5Hz" in the caption is probably "0-50Hz".

      Yes, this was another typo, thank you. We have corrected it (p. 19): “… every 0.5 Hz interval from 0-50 Hz.”

      Reviewer #2 (Recommendations for authors):

      (R2.6) (Page 3) When referring to the "limited" hypothesis of local PE, please clarify in what sense is it limited. That statement is unclear.

      Thank you for pointing out this text, which we now see is ambiguous. We originally used "limited" to refer to the hypothesis's constrained scope – namely, that PE is relevant to various low-level operations (e.g., sensory processing or rewards) but the minimization of PE does not guide more abstract cognitive processes. We edited this part of the Introduction to be clearer (p. 3):

      “It is generally agreed that the brain uses PE mechanisms at neuronal or regional levels,[15,16] and this idea has been useful in various low-level functional domains, including early vision [15] and dopaminergic reward processing.[17] Some theorists have further argued that PE propagates through perceptual pathways and can elicit downstream cognitive processes to minimize PE.”

      (R2.7) (Page 5) "Few prior PE have targeted PE"... this statement appears contradictory. Please clarify.

      Sorry about this typo, which we have corrected (p. 5):

      “Few prior PE studies have targeted connectivity, contrasting the hundreds that have targeted BOLD.”

      (R2.8) What happened to the data of the medium PE condition in Study 1A?

      The medium PE condition data were not excluded. We modeled the effect of prediction error on connectivity using a linear regression across the three conditions, coding them as a continuous variable (Low = -1, Medium = 0, High = +1). This approach allowed us to identify brain connections that showed a linear increase or decrease in strength as a function of increasing PE. This linear contrast is a more specific and powerful way to isolate PE-related effects than a High vs. Low contrast. We updated the Results slightly to make this clearer (p. 8-9):

      “In the fMRI data, we compared the three PE conditions’ beta-series functional connectivity, aiming to identify network-level signatures of PE processing, from low to high. […] For the modularity analysis, we first defined a connectome matrix of beta values, wherein each edge’s value was the slope of a regression predicting that edge’s strength from PE (coded as Low = -1, Medium = 0, High = +1; Figure 2A).”
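      The linear contrast described in the quoted passage amounts to an ordinary least-squares slope of each edge's strength over the three condition codes. The function below is an illustrative reconstruction of that computation (the function name and example values are ours, not the authors'):

```python
def pe_slope(betas, codes=(-1, 0, 1)):
    """Least-squares slope of one edge's strength (beta values ordered
    Low, Medium, High PE) regressed on the PE code (-1, 0, +1)."""
    mx = sum(codes) / len(codes)
    my = sum(betas) / len(betas)
    num = sum((x - mx) * (y - my) for x, y in zip(codes, betas))
    den = sum((x - mx) ** 2 for x in codes)
    return num / den

# With symmetric codes, the slope reduces to (High - Low) / 2, so the
# Medium condition anchors the fit rather than being discarded.
```

An edge whose strength rises monotonically from Low to High PE gets a positive slope; a flat or non-monotonic edge gets a slope near zero.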

      (R2.9) (Page 15) The point about how the dots in 6H follow those in 6J better than those in 6I is a little subjective - can the authors provide an objective measure?

      Thank you for pointing out this issue. The visual comparison using Figure 6 was not meant as a formal analysis but rather to provide intuition. However, as the reviewer describes, this is difficult to convey. Our formal analysis is provided in Supplemental Materials 5, where we report correlation coefficients between a very large number of simulated fMRI data points and EEG data points corresponding to different frequencies. We updated this part of the Results to convey this (p. 16-17):

      “Notice how the dots in Figure 6H follow the dots in Figure 6J (3 Hz) better than the dots in Figure 6I (0.5 Hz) or Figure 6K (10 Hz); this visual comparison is intended for illustrative purposes only, and quantitative analyses are provided in Supplemental Materials 5.”
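      The quantitative analysis in Supplemental Materials 5 correlates simulated fMRI values with EEG values at each frequency and asks which frequency tracks the fMRI series best. A minimal sketch of that kind of comparison, using made-up toy data (the numbers and function names are ours, for illustration only):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def best_matching_frequency(fmri, eeg_by_freq):
    """Return the EEG frequency whose series correlates most strongly
    (in absolute value) with the simulated fMRI series."""
    return max(eeg_by_freq, key=lambda f: abs(pearson(fmri, eeg_by_freq[f])))
```

With toy series in which the 3 Hz EEG trace closely follows the fMRI trace while the 0.5 Hz and 10 Hz traces do not, the function picks out 3 Hz, mirroring the visual comparison in Figure 6H-K.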

      References

      (1) Zalesky, A., Fornito, A. & Bullmore, E. T. Network-based statistic: identifying differences in brain networks. NeuroImage 53, 1197–1207 (2010).

      (2) Strijkstra, A. M., Beersma, D. G., Drayer, B., Halbesma, N. & Daan, S. Subjective sleepiness correlates negatively with global alpha (8–12 Hz) and positively with central frontal theta (4–8 Hz) frequencies in the human resting awake electroencephalogram. Neuroscience Letters 340, 17–20 (2003).

      (3) Cavanagh, J. F. & Frank, M. J. Frontal theta as a mechanism for cognitive control. Trends in Cognitive Sciences 18, 414–421 (2014).

      (4) Grech, R. et al. Review on solving the inverse problem in EEG source analysis. Journal of NeuroEngineering and Rehabilitation 5, 25 (2008).

      (5) Palva, J. M. et al. Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures. NeuroImage 173, 632–643 (2018).

      (6) Koles, Z. J. Trends in EEG source localization. Electroencephalography and Clinical Neurophysiology 106, 127–137 (1998).

      (7) Rosenberg, M. D. et al. A neuromarker of sustained attention from whole-brain functional connectivity. Nature Neuroscience 19, 165–171 (2016).

      (8) Goodale, S. E. et al. fMRI-based detection of alertness predicts behavioral response variability. eLife 10, e62376 (2021).

      (9) Cavanagh, J. F. Cortical delta activity reflects reward prediction error and related behavioral adjustments, but at different times. NeuroImage 110, 205–216 (2015).

      (10) Hoy, C. W., Steiner, S. C. & Knight, R. T. Single-trial modeling separates multiple overlapping prediction errors during reward processing in human EEG. Communications Biology 4, 910 (2021).

      (11) Neo, P. S.-H., Shadli, S. M., McNaughton, N. & Sellbom, M. Midfrontal theta reactivity to conflict and error are linked to externalizing and internalizing respectively. Personality Neuroscience 7, e8 (2024).

      (12) Friston, K. J. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience 11, 127–138 (2010).

      (13) Feldman, H. & Friston, K. J. Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience 4, 215 (2010).

      (14) Friston, K. J. et al. Active inference and epistemic value. Cognitive Neuroscience 6, 187–214 (2015).

      (15) Rao, R. P. & Ballard, D. H. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience 2, 79–87 (1999).

      (16) Walsh, K. S., McGovern, D. P., Clark, A. & O’Connell, R. G. Evaluating the neurophysiological evidence for predictive processing as a model of perception. Annals of the New York Academy of Sciences 1464, 242–268 (2020).

      (17) Niv, Y. & Schoenbaum, G. Dialogues on prediction errors. Trends in Cognitive Sciences 12, 265–272 (2008).

      (18) Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological) 57, 289–300 (1995).

    1. As a social science, sociology has much to contribute to the understanding not just of COVID-19, but to various matters related to health, healing, illness, and health care.

      Sociology matters because it helps us understand the institution more broadly. It also foreshadows what medical sociology is by focusing on how society affects health.

    1. KeyBERT is semantic, because it uses embeddings to capture a document’s overall meaning, producing thematic keywords like “malawi”, “reforms”, and “humanitarian” in Table 2. This works well for summaries but can miss specific details, such as the cease-fire line in the India-Pakistan resolution (Table 3), because it favours the general idea.

      disadvantage
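      The annotated passage describes KeyBERT's core scoring step: rank candidate phrases by the cosine similarity of their embeddings to the whole-document embedding, which is why it favours thematically general keywords over specific details. A toy sketch of that idea, using hand-made vectors (the vectors and function names are illustrative only, not real model output or KeyBERT's API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_keywords(doc_vec, candidate_vecs):
    """Rank candidate phrases by cosine similarity of their embedding to the
    document embedding, highest first (the core of KeyBERT-style scoring)."""
    return sorted(candidate_vecs,
                  key=lambda w: cosine(candidate_vecs[w], doc_vec),
                  reverse=True)
```

A candidate whose (toy) embedding points in nearly the same direction as the document embedding ranks first, while a candidate orthogonal to the document's overall theme ranks last, even if it names an important specific detail.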

    1. Reviewer #2 (Public review):

      This study identifies Visham, an asymmetric structure in developing mouse cysts resembling the Drosophila fusome, an organelle crucial for oocyte determination. Using immunofluorescence, electron microscopy, 3D reconstruction, and lineage labeling, the authors show that primordial germ cells (PGCs) and cysts, but not somatic cells, contain an EMA-rich, branching structure that they named Visham, which remains unbranched in male cysts. Visham accumulates in regions enriched in intercellular bridges, forming clusters reminiscent of fusome "rosettes." It is enriched in Golgi and endosomal vesicles and partially overlaps with the ER. During cell division, Visham localizes near centrosomes in interphase and early metaphase, disperses during metaphase, and reassembles at spindle poles during telophase before becoming asymmetric. Microtubule depolymerization disrupts its formation.

      Cyst fragmentation is shown to be non-random, correlating with microtubule gaps. The authors propose that 8-cell (or larger) cysts fragment into 6-cell and 2-cell cysts. Analysis of Pard3 (the mouse ortholog of Par3/Baz) reveals its colocalization with Visham during cyst asymmetry, suggesting that mammalian oocyte polarization depends on a conserved system involving Par genes, cyst formation, and a fusome-like structure.

      Transcriptomic profiling identifies genes linked to pluripotency and the unfolded protein response (UPR) during cyst formation and meiosis, supported by protein-level reporters monitoring Xbp1 splicing and 20S proteasome activity. Visham persists in meiotic germ cells at stage E17.5 and is later transferred to the oocyte at E18.5 along with mitochondria and Golgi vesicles, implicating it in organelle rejuvenation. In Dazl mutants, cysts form, but Visham dynamics, polarity, rejuvenation, and oocyte production are disrupted, highlighting its potential role in germ cell development.

      Overall, this is an interesting and comprehensive study of a conserved structure in the germline cells of both invertebrate and vertebrate species. Investigating these early stages of germ cell development in mice is particularly challenging. Although primarily descriptive, the study represents a remarkable technical achievement. The images are generally convincing, with only a few exceptions.

      Major comments:

      (1) Some titles contain strong terms that do not fully match the conclusions of the corresponding sections.

      (1a) Article title "Mouse germline cysts contain a fusome-like structure that mediates oocyte development":

      The term "mediates" could be misleading, as the functional data on Visham (based on comparing its absence to wild-type) actually reflects either a microtubule defect or a Dazl mutant context. There is no specific loss-of-function of visham only.

      (1b) Result title, "Visham overlaps centrosomes and moves on microtubules":

      The term "moves" implies dynamic behavior, which would require live imaging data that are not described in the article.

      (1c) Result title, "Visham associates with Golgi genes involved in UPR beginning at the onset of cyst formation":

      The presented data show that the presence of Visham in the cyst coincides temporally with the expression and activity of the UPR response; the term "associates" is unclear in this context.

      (1d) Result title, "Visham participates in organelle rejuvenation during meiosis":

      The term "participates" suggests that Visham is required for this process, whereas the conclusion is actually drawn from the Dazl mutant context, not a specific loss-of-function of visham only.

      (2) The authors aim to demonstrate that Visham is a fusome-like structure. I would suggest simply referring to it as a "fusome-like structure" rather than introducing a new term, which may confuse readers and does not necessarily help the authors' goal of showing the conservation of this structure in Drosophila and Xenopus germ cells. Interestingly, in a preprint from the same laboratory describing a similar structure in Xenopus germ cells, the authors refer to it as a "fusome-like structure (FLS)" (Davidian and Spradling, BioRxiv, 2025).

      Comments on revisions:

      The revised manuscript has been clearly improved, and the authors have addressed all of our comments. I would like to point out two minor issues:

      (1) As suggested by the reviewers, the authors now use the term fusome instead of visham. However, they also acknowledge that this structure lacks many components of the Drosophila fusome. It may therefore be more appropriate to refer to it as a "mouse fusome" or as a "fusome-like structure (FLS)," as used in Xenopus.

      (2) I agree with Reviewer 3 that co-localization between EMA and acTubulin on still images does not convincingly demonstrate that fusome vesicles move along microtubules (Figure S2E).

    2. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review)

      Summary

      We thank the reviewer for the constructive and thoughtful evaluation of our work. We appreciate the recognition of the novelty and potential implications of our findings regarding UPR activation and proteasome activity in germ cells.

      (1) The microscopy images look saturated, for example, Figure 1a, b, etc. Is this a normal way to present fluorescent microscopy?

      The apparent saturation was not present in the original images but likely arose from image compression during PDF generation; the EMA granule was still apparent despite this. In the revised submission, we will provide high-resolution TIFF files to ensure accurate representation of fluorescence intensity and will carefully optimize image display settings to avoid any saturation artifacts.

      (2) The authors should ensure that all claims regarding enrichment/lower vs. lower values have indicated statistical tests.

      We fully agree. In the revised version, we will correct any quantitative comparisons where statistical tests were not already indicated, with a clear statement of the statistical tests used, including p-values in figure legends and text.

      (a) In Figure 2f, the authors should indicate which comparison is made for this test. Is it comparing 2 vs. 6 cyst numbers?

      We acknowledge that the description was not sufficiently detailed. The test was not between 2 vs. 6 cyst numbers; rather, it compared the observed outcome (6-cell cysts in 13 of 15 examples) against all possible ways that the 8-cell cysts, or the larger cysts studied, could fragment randomly into two pieces. We will expand the legend and main text to clarify that a binomial test was used to determine that the proportion of cysts producing 6-cell fragments differed very significantly from chance.

      Revised text:

      “A binomial test was used to assess whether the observed frequency of 6-cell cyst products differed from random cyst breakage. Production of 6-cell cysts was strongly preferred (13/15 cysts; ****p < 0.0001).”
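      A binomial test of this kind sums the upper tail of the binomial distribution. The sketch below is a generic illustration; the null probability of 1/4 is our own simplifying assumption (uniform over the unordered size splits of an 8-cell cyst: {1,7}, {2,6}, {3,5}, {4,4}), not necessarily the null model computed in the paper:

```python
from math import comb

def binom_sf(k, n, p):
    """Upper-tail binomial probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

# Hypothetical null for illustration only: if each unordered size split of an
# 8-cell cyst were equally likely, a 6-cell product would occur with
# probability 1/4. Observing 13 of 15 such products is then extremely
# unlikely by chance.
p_value = binom_sf(13, 15, 0.25)
```

Even under this rough null, the upper-tail probability is far below 0.0001, consistent with the reported ****p < 0.0001.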

      (b) Figures 4d and 4e do not have a statistical test indicated.

      We will include the specific statistical test used and report the corresponding p-values directly in the figure legends.

      (3) Because the system is developmentally dynamic, the major conclusions of the work are somewhat unclear. Could the authors be more explicit about these and enumerate them more clearly in the abstract?

      We will revise the abstract to better clarify the findings of this study. We will also replace the term Visham with mouse fusome to reflect its functional and structural analogy to the Drosophila and Xenopus fusomes, making the narrative more coherent and conclusive.

      (4) The references for specific prior literature are mostly missing (lines 184-195, for example).

      We appreciate this observation of a problem that occurred inadvertently when shortening an earlier version.  We will add 3–4 relevant references to appropriately support this section.

      (5) The authors should define all acronyms when they are first used in the text (UPR, EGAD, etc).

      We will ensure that all acronyms are spelled out at first mention (e.g., Unfolded Protein Response (UPR), Endosome and Golgi-Associated Degradation (EGAD)).

      (6) The jumping between topics (EMA, into microtubule fragmentation, polarization proteins, UPR/ERAD/EGAD, GCNA, ER, balbiani body, etc) makes the narrative of the paper very difficult to follow.

      We are not jumping between topics, but following a narrative relevant to the central question of whether female mouse germ cells develop using a fusome. EMA, microtubule fragmentation, polarization proteins, ER, and the Balbiani body are all topics with a known connection to fusomes. This is explained in the general introduction and in relevant subsections. We appreciate this feedback that further explanations of these connections would be helpful. In the revised manuscript, use of the unified term mouse fusome will also help connect the narrative across sections. UPR/ERAD/EGAD are processes that have been studied in the repair and maintenance of somatic cells and in yeast meiosis. We show that the major regulator Xbp1 is found in the fusome, and that the fusome and these rejuvenation pathway genes are expressed and maintained throughout oogenesis, rather than only during limited late stages as suggested in previous literature.

      (7) The heading title "Visham participates in organelle rejuvenation during meiosis" in line 241 is speculative and/or not supported. Drawing upon the extensive, highly rigorous Drosophila literature, it is safe to extrapolate, but the claim about regeneration is not adequately supported.

      We believe this statement is accurate given the broad scope of the term "participates." It is supported by localization of the UPR regulator Xbp1 to the fusome. Xbp1 is the ortholog of Hac1, a key gene mediating UPR-mediated rejuvenation during yeast meiosis. We also showed that rejuvenation pathway genes are expressed throughout most of meiosis (not previously known) and expanded cytological evidence of stage-specific organelle rejuvenation later in meiosis, such as mitochondrial-ER docking, in regions enriched in fusome antigens. However, we recognize the current limitations of this evidence in the mouse, and want to convey this appropriately, without going to what we believe would be an unjustified extreme of saying there is no evidence.

      Reviewer #2 (Public review):

      We thank the reviewer for the comprehensive summary and for highlighting both the technical achievement and biological relevance of our study. We greatly appreciate the thoughtful suggestions that have helped us refine our presentation and terminology.

      (1) Some titles contain strong terms that do not fully match the conclusions of the corresponding sections.

      (1a) Article title “Mouse germline cysts contain a fusome-like structure that mediates oocyte development”

      We will change the statement to: “Mouse germline cysts contain a fusome that supports germline cyst polarity and rejuvenation.”

      (1b) Result title “Visham overlaps centrosomes and moves on microtubules”

      We acknowledge that “moves” implies dynamics. We will include additional supplementary images showing small vesicular components of the mouse fusome on spindle-derived microtubule tracks.

      (1c) Result title “Visham associates with Golgi genes involved in UPR beginning at the onset of cyst formation”

      We will revise this title to: “The mouse fusome associates with the UPR regulatory protein Xbp1 beginning at the onset of cyst formation” to reflect the specific UPR protein that was immunolocalized.

      (1d) Result title “Visham participates in organelle rejuvenation during meiosis”

      We will revise this to: “The mouse fusome persists during organelle rejuvenation in meiosis.”

      (2) The authors aim to demonstrate that Visham is a fusome-like structure. I would suggest simply referring to it as a "fusome-like structure" rather than introducing a new term, which may confuse readers and does not necessarily help the authors' goal of showing the conservation of this structure in Drosophila and Xenopus germ cells. Interestingly, in a preprint from the same laboratory describing a similar structure in Xenopus germ cells, the authors refer to it as a "fusome-like structure (FLS)" (Davidian and Spradling, BioRxiv, 2025).

      We appreciate the reviewer’s insightful comment. To maintain conceptual clarity and align with existing literature, we will refer to the structure as the mouse fusome throughout the manuscript, avoiding introduction of a new term.

      Reviewer #3 (Public review):

      We thank the reviewer for emphasizing the importance of our study and for providing constructive feedback that will help us clarify and strengthen our conclusions.

      (1) Line 86 - the heading for this section is "PGCs contain a Golgi-rich structure known as the EMA granule"

      We agree that the enrichment of Golgi within the EMA granule of PGCs was not shown until the next section. We will revise this heading to:

      “PGCs contain an asymmetric EMA granule.” 

      (2) Line 105-106, how do we know if what's seen by EM corresponds to the EMA1 granule?

      We will clarify that this identification is based on co-localization with Golgi markers (GM130 and GS28) and response to Brefeldin A treatment, which will be included as supplementary data. These findings support that the mouse fusome is Golgi-derived and can therefore be visualized by EM. The Golgi regions in E13.5 cyst cells move close together and associate with ring canals as visualized by EM (Figure 1E), the same as the mouse fusomes identified by EMA.

      (3) Line 106-107-states "Visham co-stained with the Golgi protein Gm130 and the recycling endosomal protein Rab11a1". This is not convincing as there is only one example of each image, and both appear to be distorted.

      Space is at a premium in these figures, but we have no limitation on data documenting this absolutely clear co-localization. We will replace the existing images with high-resolution, noncompressed versions for the final figures to clearly illustrate the co-staining patterns for GM130 and Rab11a1.

      (4) Line 132-133---while visham formation is disrupted when microtubules are disrupted, I am not convinced that visham moves on microtubules as stated in the heading of this section.

      We will include additional supplementary data showing small mouse fusome vesicles aligned along microtubules.

      (5) Line 156 - the heading for this section states that Visham associates with polarity and microtubule genes, including pard3, but only evidence for pard3 is presented.

      We agree and will revise the heading to: “Mouse fusome associates with the polarity protein Pard3.” We are adding data showing association of small fusome vesicles on microtubules.

      (6) Lines 196-210 - it's strange to say that UPR genes depend on DAZ, as they are upregulated in the mutants. I think there are important observations here, but it's unclear what is being concluded.

      UPR genes are not upregulated in Dazl mutants in the sense of increasing; we have never documented them increasing. We show that UPR genes during this time behave like pluripotency genes and normally decline, but in Dazl mutants their decline is slowed. We will rephrase the paragraph to clarify that Dazl mutation partially decouples developmental processes that are normally linked, which alters UPR gene expression relative to cyst development.

      (7) Line 257-259-wave 1 and 2 follicles need to be explained in the introduction, and how these fits with the observations here clarified.

      Follicle waves are too small a focus of the current study to explain in the introduction, but we will request readers to refer to the cited relevant literature (Yin and Spradling, 2025) for further details.

      We sincerely thank all reviewers for their insightful and constructive feedback. We believe that the planned revisions—particularly the refined terminology, improved image quality, clarified statistics, and restructured abstract—will substantially strengthen the manuscript and enhance clarity for readers.

      Reviewer #1 (Recommendations for the authors):

      (1) Figure 1E: need to use some immuno-gold staining to identify the Visham. Just circling an area of cytoplasm that contains ER between germ cell pairs is not enough.

      We appreciate the reviewer’s insistence that the association between the mouse fusome and Golgi be clearly demonstrated. However, the EMA granule is a large structure discovered and defined by light microscopy, and it presents no inherent challenge to documenting its Golgi association by immunofluorescence experiments, which we presented and have now further strengthened as described in the next paragraph. We believe that the suggested EM experiment would add little to the EM we already presented (Figure 1E, E'). Moreover, due to facility limitations, we are currently unable to perform immunogold staining.

      To strengthen previous immunolocalization experiments, we have now included additional immunostaining data showing the clear colocalization of the fusome region with the Golgi markers GM130 and GS28 (Figure S1H). We have also incorporated a new experiment using the Golgi-specific inhibitor Brefeldin A (BFA; see Figure S1I). Treatment of in vitro–cultured gonads with BFA disrupted EMA granule formation, demonstrating that EMA granules not only associate with Golgi, but require Golgi function to be maintained.

      Additionally, in Figure 2, we showed that the fusome overlaps with the peri-centriolar region, a characteristic locus for Golgi due to its movement on microtubules. We also showed that the dynamic behavior of the fusome during the cell cycle parallels Golgi dispersal and reassembly; all these facts provide further strong support for the Golgi association of the EMA granule and fusome.

      (2) Figure 1F: is this image compressed?

      We have now substituted the image in Figure 1F with a better image and have avoided the compression of the image. 

      (3) In the figure legends, are the sample sizes individual animals or individual sections? Please ensure that all figure legends for each figure panel consistently contain the sample size.

      We have now included the number of measurements (N) in every figure legend. Each experiment was performed using samples from at least three different animals, and in most cases from more than three. This information has also been added to the Methods section under Statistics. In addition, N values are now consistently provided for each graph throughout the figures.

      (4) Figure 2b/c: seemly likely based on the snapshot of different stages of cytokinesis that the "newly formed" visham is accurate, but without live imaging, this claim of "newly formed" is putative/speculative. It is OK if it is labeled as "putative" in the figure panel.  

      The behavior of the Drosophila fusome during mitosis was deduced without live imaging (deCuevas et al. 1998). We clarified that the conversion of a single mouse germ cell with one round fusome to an interconnected pair of cells with two round fusomes of greater total volume following mitosis is the basis for deducing that new fusome formation occurs each cell cycle. However, we agree with the reviewer that the phrase "newly formed" in the original label on Figure 2c suggested a specific mechanism of fusome increase that was not intended and this phrase has been removed entirely.  

      (5) Figure 2e/e is extremely difficult to follow. In order to improve the readability of these figure panels, can individual panels with a single stain be shown? The 'gap' between YFP+ sister cells is not immediately obvious in panel e or e" with the current layout. Since this is a key aspect of the author's claim about cleavage of the cyst, it would be best to make this claim more robust by showing more convincing images. In Figure 2E, the staining pattern of EMA needs to be clarified and described more fully in the text.

      We mapped discontinuities in the microtubule connections, not the fusome or YFP. YFP is the lineage marker indicating that the cells of a single cyst are being studied. Consequently, no gap in YFP cytoplasmic expression is expected because only in the last example (Figure 2E'') has fragmentation already occurred (and here there is a YFP gap). The acetylated tubulin gap precedes fragmentation. The mitotic spindle remnants labeled by AcTub link the cells into two groups separated by a gap, which is clearly shown in the data images and in the third column, where only the relevant AcTub from the cyst itself is shown. In response to the reviewer's question about the fusome, which is not directly relevant to fragmentation, we have now provided images of the separate fusome channel and corresponding measurements for all three Figure 2E-E'' cysts in supplementary Figure S4H. We have improved the text regarding this important figure to make it easier to follow, and also added a new example of a 10-cell cyst in S2H (lower panels). We also added movies allowing full 3D study of one of the 8-cell cysts and the new 10-cell cyst. We also suggest that the reviewer examine how the deduced mechanism of fragmentation explains previously published but not fully understood data on cyst fragmentation going back to 1998, as described in the expanded Discussion on this topic.

      (6) It would be best to support the proposed model in Figure 2G (4+4+4) with microscopy images of a 12-cell or 16-cell cyst? Would these 12-cell or 16-cell cysts be too large to technically recover in a section?

      Unfortunately, the reviewer's suggestion that 12- or 16-cell cysts are too large to recover and present convincingly is correct. Because our analysis depends on capturing lineage-labeled cysts specifically at telophase with acetylated-tubulin connections, the likelihood of obtaining the correct stage is very low. In addition, the dense packing of germ cells in the mouse gonad further limits our ability to fully reconstruct all the cells in large cysts, with difficulty increasing as cyst size grows.

      However, as noted, we added a well-resolved 10-cell cyst—the largest size we could confidently analyze—in a 3D video in Supplementary Figure S2H (lower panel), which shows a 6 + 4 breakage pattern.

      (7) We did not find a reference in the text for Figure 2G.

      We have now provided a reference for Figure 2G in the text and in the Discussion section.

      (8) Line 189: ERAD is used as an acronym, but is not defined until the discussion.

      We have now provided the full form of the acronym at its first use in the text.

      (9) Fig 3i/i': the increase of UPR pathway components, increasing expression during zygotene, is interesting to note, but is not commented enough in the text of the paper.

      We have discussed this issue in the Discussion section with specific reference to Figure 3I. Please find the detailed discussion under the heading “Germ cell rejuvenation is highly active during cyst formation.”

      (10) Please quantify DNMT3A expression levels in WT control vs Dazl KO germ cells in Figure 4a.

      We have now quantified DNMT3A expression levels in WT control vs. Dazl KO germ cells and have added the data to Figure 4A.

      (11) Please introduce the rationale behind selecting DazL KO for studying cyst formation (text in line 197). This comes out of nowhere.

      True.  We significantly expanded our discussion of Dazl and citations of previous work, including evidence that it can affect cyst structures like ring canals, in the Introduction.  

      (12) It would be best to stain WT control vs DazL KO oogonia in Figure 4a with 5mC antibodies to support their claim that DNA methylation might be affected in the mutants.

      We respectfully disagree that this additional experiment is necessary within the scope of the current study. At the developmental stage examined (E12.5), germ cells in the Dazl mutant are clearly in an arrested and hypomethylated state, as supported by previous evidence (Haston et al. 2009). This initial experiment was designed to show that, in our hands, Dazl mutants show this known pluripotency delay. However, the effects of Dazl mutation on female germline cyst development as it relates to polarity or the fusome were not studied before, and that is what the paper addresses, building on previous work.

      Because our study does not focus on germ-cell epigenetic modifications but rather on the consequences of Dazl loss on germ cell cyst development, adding 5mC immunostaining would not substantially advance the main conclusions. The existing data and previous published work already provide sufficient background.

      (13) Figure 4c: a very interesting figure, it would be best to quantify developmental pseudotime (perhaps using monocle3 analysis) and compare more rigorously the developmental stage of WT control vs DazL KO.

      Developmental pseudotime, such as through Monocle3 analysis, might sometimes be valuable but involves assumptions that, when possible, are better addressed by direct experimental examination. Our conclusions regarding cyst developmental stage are supported by straightforward evidence to which computational trajectory inference would add little. Specifically, we have performed analyses of germ-cell methylation state, ring canal formation, pluripotency markers, UPR pathway activity (Xbp1 and proteomic assays), Golgi stress, and Pard3, which collectively document the developmental status of the WT and Dazl KO germ cells. These empirical data demonstrate the same developmental pattern reflected in Figure 4c, making the less reliable pseudotime-based computational method superfluous.

      (14) Figure 4d has two panels labeled as "d".

      We have now corrected the labelling of the figure.

      (15) Color coding in 4d, d', d" is confusing; please harmonize some visual presentation here.

      We have now harmonized the visual presentation of all the graphs in Figure 4.

      (16) Fig 4e' is labeled as DazL +/- but is this really a typo?

      Thank you for pointing it out. We have now corrected the typo.

      (17) Figure F': typo labeled as E3.5, which is E13.5?

      Thank you for pointing it out. We have now corrected the typo.

      (18) Figure F': was DazL KO mutant but no WT control.

      The WT control was not provided to avoid redundancy. Please refer to the earlier Figure 3A-B, Figure S3C-D, and Videos S3A and S3B for the WT control at every stage.

      (19) Figure G: unusual choice in punctuation marks for cartoon schematic. No key to guide the reader for color-coded structures would be helpful to have something similar to 4h.

      We have now provided a key to guide readers in Figure 4G.

      (20) The authors use WGA and EMA as interchangeable markers (Figure 5a) without fully explaining why they have switched markers.

      Because it is germ cell specific, we used EMA as a fusome marker up through E13.5, while it is still present. After that point we used WGA, which remains usable but also labels somatic cells. This rationale is explicitly described at the end of the section “Fusome is highly enriched in Golgi and vesicles”, where we state:

      “EMA staining disappears from germ cells at E14.5 (Figure 1I). However, very similar (but non–germ-cell-specific) staining continued with wheat germ agglutinin (WGA) at later stages (Figure 1G, G’; Figure S1G).”

      To ensure this is fully clear to readers, we have now added an additional statement at the start of the text section discussing Figure 5:

      “For the reasons explained previously (see text for Figure 1G), WGA was used as a fusome marker beyond stage E14.5.”

      (21) Figure 5b' is compressed.

      We have now decompressed the image.

      (22) Line 267, Balbiani body is misspelled.  

      We have now corrected the spelling.

      (23) The explanation of why the authors switch focus from DazL KO to DazL +/- is not adequately described. The authors should also explain the phenotype of the DazL +/- animals or reference a paper citing the hets are sterile or subfertile.

      We have now added the explanation of why Dazl KO is used in our Introduction section, where we have described the phenotypes of Dazl homozygous and heterozygous mice.

      (24) Is Figure 5i actually DazL +/-? It is not labeled clearly in the text, the figure legend, or the figure panel. 

      We have now labelled the panel correctly in the figure and in the legend.

      (25) The paper ends abruptly at line 275 with no context or summary.

      The manuscript does not end at line 275; the apparent interruption is due to a page break occurring immediately before the beginning of the Discussion section. We hope the continuation is fully visible in your version of the PDF.

      Reviewer #2 (Recommendations for the authors):

      (1) Line 93: Fig. 1B: DDX4 marks germ cells; do all the red and yellow cells in the NE inset originate from the same PGC? There are only 2 cells marked in yellow among the group of red cells. Is it a z-projection issue? Or do they come from different PGCs?

      This experiment used Vasa staining to identify all germ cells, which are produced by multiple PGCs. Green labeling is a lineage marker derived from a single PGC (owing to the low frequency of tamoxifen-activated labeling). Consequently, the two yellow cells observed in the NE inset of Fig. 1B represent YFP-labeled germ cells (YFP and DDX4 double-positive) that have arisen from a single, lineage-traced PGC. This approach, introduced in 2013, is described in the Methods and represents the field's single largest technical advance in making it possible to analyze mouse germ cell development at single-cell resolution.

      To ensure clarity, we have added a brief explanatory note to the figure legend indicating that yellow cells represent the lineage-traced progeny of a single PGC, while the red staining marks all germ cells.

      (2) Line 96: Figure 1C vs 1C'. The difference between female and male Visham is not obvious, although quantification shows a clear difference. How was the quantification made? Manual or automatic thresholding? Would it be possible to show only the Visham channel?

      We thank the reviewer for pointing out this problem. We have now more clearly described in the text that the female fusome increases in some cells with close attachments to other cells (future oocytes) and decreases in distant nurse cells. It branches due to rosette formation. In males, the fusome remains much like the initial EMA granules present in early germ cells, with only fine, difficult-to-see connections. The quantification shown in Figures 1C and 1C′ was performed manually, based on the presence of either (i) fused, branched EMA-positive fusome structures or (ii) dispersed, punctate EMA granules. This assessment was carried out across multiple E13.5 male and female gonad samples to ensure robustness. To facilitate independent evaluation, we have already provided Supplementary Videos S3B1 and S3B2, which display the EMA-stained E13.5 male and female gonads in three dimensions. These videos allow the structural differences to be examined more clearly than static images.

      In response to the reviewer’s request, we now additionally include the single-channel fusome image in Supplementary Figure S1E′. This presentation highlights the fusome signal alone and further clarifies the morphological differences underlying the quantification.

      (3) L118: Figure 2A, third row = 2-cell cyst? Please specify PCNT in the legend.

      We appreciate the reviewer’s observation. In Figure 2A (third row), the cells were not specifically labeled as a 2-cell cyst; rather, the intention was to illustrate the presence of two distinct centrosomes positioned on a fused fusome structure, a configuration we frequently observe.

      We have now updated the figure legend to explicitly define PCNT.

      (4) L169: Missing reference to S3B and video S3B1?

      We have now included references to S3B1 and S3B2 in the text and in the legend.

      (5) L170: Please describe the graph in the Figure 3D legend.

      We have now described the graph in the legend.

      (6) L171: Would it be possible to have a close-up showing both Pard3 and Visham in a ringlike pattern related to RACGAP (RC) staining? The images are too small.

      It is difficult to capture this relationship perfectly in a two-dimensional picture. The images represent the maximum close-up possible that still includes enough relevant area for the necessary conclusions. We have now provided three additional close-up images exclusively for the ring-canal and Pard3 association in Supplementary Figure S3C for further clarity. However, we also note that the quality of the images permits readers of the PDF to zoom in and visualize them in great detail.

      (7) L181: Wrong reference, should be 3 then 3I.

      Thank you for pointing it out, we have now corrected the reference.

      (8) L199: In Figure S4B, was DNMT3 staining quantified? Red intensity differs globally between images; use the somatic red level as a reference? Note: EMA seems higher in Dazl- vs. WT?

      We have now performed quantification of DNMT3 staining, which is presented in Figure 4A. While the red intensity (DNMT3 or EMA) can appear to differ between images, this variation can result from biological differences between tissues or minor technical variability despite using consistent microscope settings. To account for this, we normalized the staining intensity using the somatic cell signal as an internal reference, ensuring that the quantification reflects genuine differences between WT and Dazl-/- samples rather than global intensity variation.

      (9) L229: Should be "proteasome."

      We have now corrected the spelling error.

      (10) L233: Quantify fragmentation of Gs28? EMA doesn't seem affected. Could you quantify both Gs28 and EMA? Images are too small.

      We thank the reviewer for this suggestion. While the current images are small, they can be examined in detail using zoom to visualize the structures clearly. As noted, EMA staining is not affected (we agree), as the cells are in an arrested state. This arrested state creates stress on the Golgi. The fragmentation of Gs28-labeled Golgi membranes is a classical indicator of Golgi stress, even though the fragmented membranes may remain functionally active. Our results show that Dazl deletion specifically affects the Golgi in germ cells, while the Golgi in neighboring somatic cells appears healthy. To quantify this effect, we have now included manual quantification of Golgi fragmentation in Figure 4F, assessing tissues for the presence of fragmented versus intact Golgi structures. This confirms that Golgi fragmentation is a germ cell–specific phenotype in Dazl– samples, while pre-formed EMA-positive fusomes remain unaffected but are probably in an arrested state.

      (11) L237: Figure 4F graph shows E3.5, not E13.5.

      We have now corrected the typo in the figure.

      (12) L257: Figure 5D: quantify as in 5A? overlap?

      Yes, it is an overlap, shown as two separate images with the ring canal for better clarity. We have now quantified the image and produced a combined graph for fusome and Pard3 in the Figure 5A graph.

      (13) L261: Figure 5E-E': black arrowhead not mentioned in legend.

      We have now mentioned the black arrowhead in the legend.

      (14) L262: Figure 5C: arrowhead not mentioned in legend. Figure 5F: oocyte appears separated from nurse cells compared to 5C.

      Yes, that may happen as cysts undergo fragmentation; what matters is that all cells are lineage labelled and hence are members of a single cyst derived from one PGC.

      (15) L263: Figure 5G has no legend reference; nurse cells are not outlined as in 5C.

      We have now outlined the nurse cells and added a reference to the graph in the legend.

      (16) L279: "The fusome and Visham and both..." should be replaced with "Both fusome and Visham...".

      We have now replaced the term Visham with fusome, as suggested by the reviewers and the editor, and we have updated the statement to correct the grammatical error.

      (17) L1127: Video S3B1: It is unclear what to focus on.

      We have now added a rectangle and an arrow to highlight the area of focus.

      (18) L1128: Video "S3B1" should be "S3B2."

      We have now corrected the legend.

      (19) Finally: curiosity question: have the authors tried to use known markers of the Drosophila fusome in mice, such as Spectrin or other markers described in Lighthouse, Buszczak and Spradling, Dev Bio, 2008? And conversely, do EMA and WGA label the fusome in Drosophila?

      Yes, we and others used the most specific markers of the Drosophila fusome, such as alpha-spectrin, adducin-like Hts, tropomodulin, etc., to search for fusomes in vertebrate species. This was unsuccessful in clarifying the situation, because Hts and alpha-spectrin in Drosophila and other insects generate a protein skeleton that stabilizes the fusome and is easily stained, but this structure is simply not conserved in vertebrates. The polarity behavior of the fusome, its core developmental property, is conserved, however. The mammalian fusome still acquires and maintains cyst polarity, and goes even farther: it reflects both initial cyst formation and cyst cleavage, before marking oocyte vs nurse cell development in the smaller cysts. The inner microtubule-rich portion of the fusome, its Par proteins, and many ER-related and lysosomal fusome proteins are mostly conserved, but their ability to mark the fusome alone varies with time and context (only some of the examples are shown in Figure 3I'). Nearly all of the proteins identified in Lighthouse et al. 2008 are expressed. These proteins may be involved in rejuvenation as studied here. We modified the first section of the Discussion to explicitly compare mouse, Xenopus, and Drosophila fusomes, which was not possible before this work.

      Reviewer #3 (Recommendations for the authors):

      The authors should either revise the conclusions or add additional evidence to support their claims. In addition, minor corrections are listed below.

      We have added additional evidence as noted in the responses above, and revised some claims that were stated inaccurately. In addition, we have attempted to clarify the evidence we do present, so that its full significance is more easily grasped by readers.

      (1) Lines 20-21 are unclear - the cyst doesn't get sent into meiosis, each oocyte does.

      Research is showing that it is more complicated than that. All cyst cells enter "pre-meiotic S phase", and most cell cycles are conventionally considered to start after the previous M phase, i.e., in G1 or S, not in the next prophase, an ancient view limited just to meiosis. Absent this old tradition from meiotic cytology, pre-meiotic S would simply be called meiotic S, as some workers on meiosis do. In addition, in different species, nurse cells diverge from meiosis on different schedules, including many much later in the meiotic cycle. Two cyst cells in Drosophila fully enter meiosis by all criteria: the oocyte and one nurse cell that only exits in late zygotene. In Xenopus and mouse, scRNA-seq shows that many cyst cells enter meiosis up to leptotene and zygotene, including nurse cells that specifically downregulate meiotic genes during this time, possibly to assist their nurse cell functions, while others remain in meiosis even longer (Davidian and Spradling, 2025; Niu and Spradling, 2022). Eventually, only the oocytes within each fragmented mouse cyst complete meiosis.

      (2) Many places in the manuscript abbreviations are never defined or not defined the first time they are used (but the second or third time): Line 23-ER, Line 29-UPR, Line 33-PGC (not defined until line 45), Line 79-EGAD.

      We have defined full acronyms now upon their first occurrence.

      (3) Line 5 should be the pachytene substage of meiosis I.

      We have now updated the statement to “In the pachytene stage of meiosis I…”

      (4) Line 59-61 - this statement needs a reference(s).

      These statements are a continuation from the references cited in the previous statements. However, for further clarity we have again cited the relevant reference here (Niu and Spradling, 2022).

      (5) Line 80 - should it be oocyte proteome quality control?

      We have now updated the statement to “Oocyte proteome quality control begins early”.

      (6) Line 87 - in this case, EMA does not stand for epithelial membrane antigen (AI will call it that, but it is not correct). I believe it originally was the abbrev for (Em)bryonic (a)ntigen, though some papers call it (e)mbryonic (m)ouse (a)ntigen. And the reference here is Hahnel and Eddy, 1986, but in the reference list is a different paper, 1987 (both refer to EMA-1).

      We have now corrected the expanded form of the acronym EMA-1 and have corrected the citation.

      (7) Line 176 - RNA seq.

      We have now updated the statement to “We performed single cell RNA sequencing (scRNA seq) of mouse gonad”.

      (8) Line 181 - Figure 4E and 4I should be 3E and 3I.

      We have now updated the figure reference in the text to the correct one.

      (9) Line 183 - missing period.

      Added.

    1. eLife Assessment

      This paper develops a fundamental theory that explains how the brain can hold in working memory not only the identity but also the order of presented stimuli. Previous theories did not explain the ability of people to immediately recall the correct order of the stimulus presentation. The authors present compelling evidence that this can be achieved through synaptic augmentation, an experimentally observed phenomenon with a time scale of tens of seconds.

    2. Reviewer #1 (Public review):

      Summary:

      The issue of how the brain can maintain serial order of presented items in working memory is a major unsolved question in cognitive neuroscience. It has been proposed that this serial order maintenance could be achieved thanks to periodic reactivations of different presented items at different phases of an oscillation, but the mechanisms by which this could be achieved by brain networks, as well as the mechanisms of read-out, are still unclear. In an influential 2008 paper, the authors have proposed a mechanism by which a recurrent network of neurons could maintain multiple items in working memory, thanks to `population spikes' of populations of neurons encoding for the different items, occurring at alternating times. These population spikes occur in a specific regime of the network and are a result of synaptic facilitation, an experimentally observed type of synaptic short-term dynamics with time scales of order hundreds of ms.

      In the present manuscript, the authors extend their model to include another type of experimentally observed short-term synaptic plasticity termed synaptic augmentation, that operates on longer time scales on the order of 10s. They show that while a network without augmentation loses information about serial order, augmentation provides a mechanism by which this order can be maintained in memory thanks to a temporal gradient of synaptic efficacies. The order can then be read out using a read-out network whose synapses are also endowed with synaptic augmentation. Interestingly, the read-out speed can be regulated using background inputs.

      Strengths:

      This is an elegant solution to the problem of serial order maintenance, that only relies on experimentally observed features of synapses. The model is consistent with a number of experimental observations in humans and monkeys. The paper will be of interest to the broad readership of eLife and I believe it will have a strong impact on the field.

      Comments on revisions:

      I am happy with how the authors have addressed my comments, and believe the paper can be published in its present form.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      (1) The network they propose is extremely simple. This simplicity has pros and cons: on the one hand, it is nice to see the basic phenomenon exposed in the simplest possible setting. On the other hand, it would also be reassuring to check that the mechanism is robust when implemented in a more realistic setting, using, for instance, a network of spiking neurons similar to the one they used in the 2008 paper. The more noisy and heterogeneous the setting, the better.

      The choice of a minimal model to illustrate our hypothesis is deliberate. Our main goal was to suggest a physiologically-grounded mechanism to rapidly encode temporally-structured information (i.e., sequences of stimuli) in Working Memory, where none was available before. Indeed, as discussed in the manuscript, previous proposals were unsatisfactory in several respects. In view of our main goal, we believe that a spiking implementation is beyond the scope of the present work.

      We would like to note that the mechanism originally proposed in Mongillo et al. (2008) has been repeatedly implemented, by many different groups, in various spiking network models with different levels of biological realism (see, e.g., Lundqvist et al. (2016), for an especially ‘detailed’ implementation) and, in all cases, the relevant dynamics has been observed. We take this as an indication of ‘robustness’; the relevant network dynamics doesn’t critically depend on many implementation details and, importantly, this dynamics is qualitatively captured by a simple rate model (see, e.g., Mi et al. (2017)).

      In the present work, we make a relatively ‘minor’ (from a dynamical point of view) extension of the original model, i.e., we just add augmentation. Accordingly, we are fairly confident that a set of parameters for the augmentation dynamics can be found such that the spiking network behaves, qualitatively, like the rate model. A meaningful study, in our opinion, would then require extensively testing the (large) parameter space (different models of augmentation?) to see how the network behavior compares with the relevant experimental observations (which ones? Behavioral? Physiological?). As said above, we believe that this is beyond the scope of the present work.

      This being said, we definitely agree with the reviewer that not presenting a spiking implementation is a limitation of the present work. We have clearly acknowledged this limitation here, by adding the following paragraph to the Discussion.

      “To illustrate our theory in a simple setting, we used a minimal model network that neglects many physiological details. This, however, constitutes a limitation of the present study. It would be reassuring to see that the mechanism we propose here is robust enough to reliably operate also in spiking networks, in the presence of heterogeneity in both single-cell and synaptic properties. While we are fairly confident that this is the case, a spiking implementation of our model is beyond the scope of the present study and will be addressed in the future. Also, because of the simplicity of the model network, a comparison between the model behavior and the electrophysiological observations cannot be completely direct. Nevertheless the model qualitatively accounts for a diverse set of experimental data”.

      (2) One major issue with the population spike scenario is that (to my knowledge) there is no evidence that these highly synchronized events occur in delay periods of working memory experiments. It seems that highly synchronized population spikes would imply (a) a strong regularity of spike trains of neurons, at odds with what is typically observed in vivo (b) high synchronization of neurons encoding for the same item (and also of different items in situations where multiple items have to be held in working memory), also at odds with in vivo recordings that typically indicate weak synchronization at best. It would be nice if the authors at least mention this issue, and speculate on what could possibly bridge the gap between their highly regular and synchronized network, and brain networks that seem to lie at the opposite extreme (highly irregular and weakly synchronized). Of course, if they can demonstrate using a spiking network simulation that they can bridge the gap, even better.

      Direct experimental evidence (in monkeys) in support of the existence of highly synchronized events -- to be identified with the ‘population spikes’ of our model -- during the delay period of a memory task is available in the literature, i.e., Panichello et al. (2024). We provide a short discussion of the results of Panichello et al. (2024) and how these results directly relate to our model. We also provide a short discussion of the results of Liebe et al. (2025), which, again, are fully consistent with our model.

      We note that there is no fundamental contradiction between highly synchronized events in ‘small’ neural populations (e.g., a cell assembly) on one hand, and temporally irregular (i.e., Poisson-like) spiking at the single-neuron level and weakly synchronized activity at the network level, on the other hand. This was already illustrated in our original publication, i.e., Mongillo et al. (2008) (see, in particular, Fig. S2). We further note that the mechanism we propose to encode temporal order -- a temporal gradient in the synaptic efficacies brought about by synaptic augmentation -- would also work if the memory of the items is maintained by ‘tonic’ persistent activity (i.e., without highly synchronized events), provided this activity occurs at suitably low rates such as to prevent the saturation of the synaptic augmentation.

      We have added the following two paragraphs to the Discussion.

      “More direct support to this interpretation comes from recent electrophysiological studies [Panichello et al., 2024, Liebe et al., 2025]. By recording large neuronal populations (∼ 300) simultaneously in the prefrontal cortex of monkeys performing a WM task, [Panichello et al., 2024] found that, during the maintenance period, the decoding of the actively held item from neural activity was ’intermittent’; that is, decoding was only possible during short epochs (∼ 100ms) interleaved with epochs (also ∼ 100ms) where decoding was at chance level. The inability to decode resulted from a loss of selectivity at the population level, with a return of the single-neuron firing rates to their spontaneous (pre-stimulus) activity levels. The transitions between these two activity states (decodable/not-decodable) were coordinated across large populations of neurons in PFC. By recording single-neuron activity in the medial temporal lobe of humans performing a sequential multi-item WM task, [Liebe et al., 2025] found that during maintenance, neurons coding for a given item tended to fire at a specific phase of the underlying theta rhythm, again suggesting that the corresponding neuronal populations reactivate briefly and sequentially. In summary, these experimental results suggest that active memory maintenance relies on brief reactivations of the neural representations of the items, which we identify with the population spikes in our model, and that these reactivations occur sequentially in time, as predicted by our theory”.

      “We note that the proposed mechanism would still work if the items were maintained by tonically-enhanced firing rates, instead of population spikes, provided that those firing rates were suitably low. However, obtaining low firing rates in model networks of persistent activity is quite difficult”.

      Reviewer #2 (Public review):

      The study relates to the well-known computational theory for working memory, which suggests short-term synaptic facilitation is required to maintain working memory, but doesn't rely on persistent spiking. This previous theory appears similar to the proposed theory, except for the change from facilitation to augmentation. A more detailed explanation of why the authors use augmentation instead of facilitation in this paper is warranted: is the facilitation too short to explain the whole process of WM? Can the theory with synaptic facilitation also explain the immediate storage of novel sequences in WM?

      In the model, synaptic dynamics displays both short-term facilitation and augmentation (and short-term depression). Indeed, synaptic facilitation, alone, would be too short-lived to encode novel sequences. This is illustrated in Fig. 1B.

      We provide a discussion of this important point, by adding the following paragraph to the Results section.

      “If augmentation was the only form of synaptic plasticity present in the network, the encoding of an item in WM would require long presentation times, or alternatively high firing rates upon presentation, precisely because K_A is small. Instead, rapid encoding is made possible by the presence of the short-term facilitation, which builds up significantly faster than augmentation, as U >> K_A. For the same reason, however, the level of facilitation rapidly reaches the steady state; therefore, short-term facilitation alone is unable to encode temporal order (see Fig. 1B). Thus, our model requires the existence of transitory synaptic enhancement on at least two time scales, such that longer decays are accompanied by slower build-ups. Intriguingly, this pattern is experimentally observed [Fisher et al., 1997]”.

      In Figure 1, the authors mention that synaptic augmentation leads to an increased firing rate even after stimulus presentation. It would be good to determine, perhaps, what the lowest threshold is to see the encoding of a WM task, and whether that is biologically plausible.

      We believe that this comment is related to the above point. The reviewer is correct; augmentation alone would require fairly long stimulus presentations to encode an item in WM. ‘Fast’ encoding, indeed, is guaranteed by the presence of short-term facilitation. This important point is emphasized; see above.

      In the middle panel of Figure 4, after 15-16 sec, when the neuronal population prioritizes with the second retro-cue, although the second retro-cue item's synaptic spike dominates, why is the augmentation for the first retro-cue item higher than the second-cue augmentation until the 20 sec?

      This is because of the slow build-up and decay of the augmentation. When the second item is prioritized, and the corresponding neuronal population re-activates, its augmentation level starts to increase. At the same time, as the first item is now de-prioritized and the corresponding neuronal population is now silent, its augmentation level starts to decrease. Because of the ‘slowness’ of both processes (i.e., augmentation build-up and decay), it takes about 5 seconds for the augmentation level of the second item to overcome the augmentation level of the first item.
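      The crossover lag described above can be illustrated with a toy simulation. This is only a sketch with hypothetical parameters (a single augmentation variable per item, build-up rate `k` and decay time constant `tau_A` chosen for illustration), not the model's actual equations:

      ```python
      # Toy sketch: item 1's population is active until t_switch, then item 2's
      # population is active. Each augmentation variable builds up slowly while
      # its population fires and decays slowly (time constant tau_A) while silent.
      def simulate(t_switch=15.0, t_end=30.0, dt=0.01, tau_A=10.0, k=0.1):
          a1 = a2 = 0.0
          t = 0.0
          crossover = None  # time after the switch at which a2 first exceeds a1
          while t < t_end:
              r1 = 1.0 if t < t_switch else 0.0  # item 1 active, then silent
              r2 = 0.0 if t < t_switch else 1.0  # item 2 silent, then active
              a1 += dt * (-a1 / tau_A + k * r1 * (1.0 - a1))
              a2 += dt * (-a2 / tau_A + k * r2 * (1.0 - a2))
              t += dt
              if crossover is None and t > t_switch and a2 > a1:
                  crossover = t - t_switch
          return crossover

      lag = simulate()
      # Because both build-up and decay are slow, the second item's augmentation
      # overtakes the first only several seconds after the switch.
      ```

      With these illustrative parameters the crossover occurs roughly 4-5 seconds after re-prioritization, qualitatively matching the delay seen in the figure.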

      We note that the slow time scales of the augmentation dynamics, consistently with experimental observations, are necessary for our mechanism to work; see above.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) Line 46 identify -> identity.

      (2) Line 207 scale -> scales.

      Fixed. Thank you.

      (3) Lines 222-224 what about behavioral time-scale plasticity? This type of plasticity can apparently be induced very quickly.

      We have removed the corresponding paragraph.

      (4) Line 231 identification of `gamma bursts' with population spikes: These two phenomena seem to be very different - one can be weakly synchronized and can be consistent with highly irregular activity, while it is not clear whether the other can (see major issue 2). Also, it seems that population spikes occur at frequencies that are an order of magnitude lower than gamma.

      We have rewritten the corresponding paragraph and we rely now on more direct electrophysiological evidence (i.e., on the simultaneous recording of large neuronal populations) to identify putative population spikes; see above.

      Reviewer #2 (Recommendations for the authors):

      (1) On page 7, the behavioral study of Rose et al. (2016) is quite important for readers to understand the 'low-activity regime', and to fully appreciate Figure 4, it would be beneficial to explain that study in greater detail.

      We have added a panel to Fig. 4, and accompanying text in the caption, to better illustrate the main task events in the experiment of Rose et al. (2016).

      (2) Line 17: "wrong order", but wrong timing matters too

      Definitely, depending on the task. Specifically, in our example, timing is immaterial.

      (3) Line 33-34: "special training", what is considered special? One could argue that the number of trials needed to learn, depending on the TI timing, is special, depending on the task.

      We have removed the sentence as apparently it was confusing. We simply meant that ‘naive’ human subjects can perform the task (e.g., serial recall); that is, they didn’t undergo any kind of practice that can be construed as ‘training’.

      (4) Line 40-41: but timing is also part of working memory processing. Perhaps it can be merged with the next sentence.

      We have merged the two sentences.

      (5) Line 53: Is the implication here that what happens in the synapses is what drives WM, and not just that the neurons stay persistently on?

      Yes. The idea is that information can be maintained in the synaptic facilitation level, without enhanced spiking activity. Reading-out and refreshing the memory contents, however, requires neuronal activity. We explain this in some detail in the next paragraph (i.e., lines 60-65 in the revised submission).

      (6) Line 102: could a lack of excitatory activity be explained by inhibitory signaling? It appears the inhibitory component is quite understated here.

      Here we are just defining A-bar; according to Eq. (6), if r_a is 0 (i.e., no synaptic activity, for whatever reason), then A_a will converge to A-bar after a time much longer than \tau_A (i.e., a long period). We have rephrased the sentence to improve clarity.

      (7) Line 158-172: please consider revising this paragraph for a more general audience.

      We have rewritten this paragraph to improve clarity. For the same purpose, we have also slightly modified Fig. 3.

      (8) Line 227: it would seem this is due to a singular inhibitory group making the model highly dependent on the excitatory groups.

We are not sure that we understand this comment. Here, we are just saying that if the item-coding populations don't reactivate during the maintenance period (i.e., activity-silent regime), then the augmentation gradient cannot build up. If, on the other hand, the item-coding populations are constantly active at high rates during the maintenance period (i.e., persistent-activity regime), then the augmentation levels will rapidly saturate and, again, there will be no augmentation gradient. This is independent of how ‘silence’ or ‘activity’ of the item-coding populations is determined by the interplay of excitation and inhibition.

      (9) Line 284: this would certainly be an interesting take, but it isn't clear that the model proved this type of decoupling of the temporal aspect of the recall.

      This is an ‘educated’ speculation, based on the model and on a specific interpretation of some experimental results, as discussed in the paper and, in particular, in the last paragraph of the Discussion. We believe that the phrasing of the paragraph makes clear that this is, indeed, a speculation.

    1. big companies like Starbucks are providing training to their employees to reduce implicit bias and the state of California has introduced legislation to combat implicit bias.

I knew of implicit bias before reading this article, but I was surprised by how important it is to many people: enough that California has even introduced legislation for it and some companies train their employees on it. It does seem like a big issue, and although I am surprised, I support the introduction of more training about it.

    2. You probably have the impression that line B is longer than line A, but in reality, both lines are equally long. What happens is that you are influenced by the arrows at the end of the lines even though you do not pay attention to the arrows or might even have the conscious goal not to be influenced by the arrows.

This shows a clear and easy-to-follow example of implicit bias, demonstrating that it is universal and truly does affect us whether we want it to or not. If the reader had any doubts, this example helps prove the article's point.

    1. What is a multi-carrier shipping software?

I like all the headers, and for the content they cover they should be H2s, but they are stuffed with keywords.

Every title is "multi-carrier shipping", which sends a lot of signals to search and contributes to the cannibalisation.

    2. Multi-Carrier Shipping Software: All you need to know if you are an eCommerce expert

      H1 needs to be updated to include the phrase mini guide

      GFS Mini Guide to Multi-Carrier Shipping Software

Underneath the H1 we can keep "All you need to know", etc., but ensure it has no HTML header code.

    1. But managing multiple different carriers is not easy – which is where Managed Multi-Carrier Services come in.

Can this change to "which is where GFS services come in", or similar, to remove the keyword stuffing?

    1. Author response:

      The following is the authors’ response to the previous reviews

      eLife Assessment

This valuable study combines a computational language model, i.e., HM-LSTM, and temporal response function (TRF) modeling to quantify the neural encoding of hierarchical linguistic information in speech, and addresses how hearing impairment affects neural encoding of speech. The analysis has been significantly improved during the revision but remains somewhat incomplete: the TRF analysis should be more clearly described and controlled. The study is of potential interest to audiologists and researchers who are interested in the neural encoding of speech.

We thank the editors for the updated assessment. In the revised manuscript, we have added a more detailed description of the TRF analysis on p. of the revised manuscript. We have also updated Figure 1 to better visualize the analysis pipeline. Additionally, we have included a supplementary video to illustrate the architecture of the HM-LSTM model, the ridge regression methods using the model-derived features, and the mTRF analysis using the acoustic envelope and the binary rate models.

      Public Reviews:

      Reviewer #1 (Public review):

      About R squared in the plots:

      The authors have used a z-scored R squared in the main ridge regression plots. While this may be interpretable, it seems non-standard and overly complicated. The authors could use a simple Pearson r to be most direct and informative (and in line with similar work, including Goldstein et al. 2022 which they mentioned). This way the sign of the relationships is preserved.

      We did not use Pearson’s r as in Goldstein et al. (2022) because our analysis did not involve a train-test split, which was a key aspect of their approach. Specifically, Goldstein et al. (2022) divided their data into training and testing sets, trained a ridge regression model on the training set, and then used the trained model to predict neural responses on the test set. They calculated Pearson’s r to assess the correlation between the predicted and observed neural responses, making the correlation coefficient (r) their primary measure of model performance. In contrast, our analysis focused on computing the model fitting performance (R²) of the ridge regression model for each sensor and time point for each subject. At the group level, we conducted one-sample t-tests with spatiotemporal cluster-based correction on the R² values to identify sensors and time windows where R² values were significantly greater than baseline. We established the baseline by normalizing the R² values using Fisher z-transformation across sensors within each subject. We have added this explanation on p.13 of the revised manuscript.
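The normalization and group test described here could be sketched as follows, using random placeholder data. The exact ordering of the Fisher transform and the within-subject z-scoring in the authors' pipeline is an assumption on our part, and cluster-based correction is omitted:

```python
import numpy as np
from scipy import stats

def normalize_r2(r2):
    """Normalize one subject's R^2 map (sensors x time points) so the group
    test can compare against a zero baseline: Fisher z-transform, then
    z-score within subject (ordering assumed, not confirmed by the text)."""
    z = np.arctanh(np.clip(r2, 0.0, 0.999))   # Fisher z-transform
    return (z - z.mean()) / z.std()           # center and scale within subject

rng = np.random.default_rng(0)
r2 = rng.uniform(0.0, 0.3, size=(5, 64, 9))   # 5 subjects x 64 sensors x 9 time points
norm = np.stack([normalize_r2(s) for s in r2])

# Group-level one-sample t-test at each sensor/time point
# (spatiotemporal cluster-based correction omitted in this sketch)
t, p = stats.ttest_1samp(norm, popmean=0.0, axis=0)
print(t.shape)  # (64, 9)
```

Because each subject's map is centered at zero, the one-sample t-test identifies sensor/time points whose normalized R² is reliably above the subject-wise mean, which is the baseline logic described above.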

      About the new TRF analysis:

      The new TRF analysis is a necessary addition and much appreciated. However, it is missing the results for the acoustic regressors, which should be there analogous to the HM-LSTM ridge analysis. The authors should also specify which software they have utilized to conduct the new TRF analysis. It also seems that the linguistic predictors/regressors have been newly constructed in a way more consistent with previous literature (instead of using the HM-LSTM features); these specifics should also be included in the manuscript (did it come from Montreal Forced Aligner, etc.?). Now that the original HM-LSTM can be compared to a more standard TRF analysis, it is apparent that the results are similar.

We used the Python package Eelbrain (https://eelbrain.readthedocs.io/en/r0.39/auto_examples/temporal-response-functions/trf_intro.html) to conduct the multivariate temporal response function (mTRF) analyses. As we previously explained in our response to R3, we did not apply mTRF to the acoustic features due to the high dimensionality of the input. Specifically, our acoustic representation consists of a 130-dimensional vector sampled every 10 ms throughout the speech stimuli (comprising a 129-dimensional spectrogram and a 1-dimensional amplitude envelope). This made the resulting 130-dimensional TRF estimates difficult to interpret. A similar constraint applied to the hidden-layer activations from our HM-LSTM model for the five linguistic features: even after dimensionality reduction via PCA, each still resulted in 150-dimensional vectors. To address this, we instead used binary predictors marking the offset of each linguistic unit (phoneme, syllable, word, phrase, sentence). Since our speech stimuli were computer-synthesized, the phoneme and syllable boundaries were automatically generated. The word boundaries were manually annotated by a native Mandarin speaker, as in Li et al. (2022). The phrase boundaries were automatically annotated by the Stanford parser and manually checked by a native Mandarin speaker. These rate models are represented as five distinct binary time series, each aligned with the timing of the corresponding linguistic unit, making them well-suited for mTRF analysis. Although the TRF results from the 1-dimensional rate predictors and the ridge regression results from the high-dimensional HM-LSTM-derived features are similar, they encode different things: the rate regressors encode only the timing of linguistic unit boundaries, while the model-derived features encode the representational content of the linguistic input. Therefore, we do not consider the mTRF analyses to be analogous to the ridge regression analyses. Rather, the two sets of results complement each other, each providing informative evidence on the neural tracking of linguistic structures at different levels for the attended and unattended speech.
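A minimal sketch of how such binary rate predictors can be constructed, sampled at the 10 ms resolution used for the stimuli. The boundary times below are hypothetical; in the paper they come from automatic synthesis alignment, manual annotation, and the Stanford parser:

```python
import numpy as np

SFREQ = 100  # Hz (10 ms bins), matching the stimulus sampling described above

def boundary_regressor(boundary_times_s, duration_s, sfreq=SFREQ):
    """Binary 'rate model': 1 at each linguistic-unit boundary, 0 elsewhere.
    One such time series is built per level (phoneme, syllable, word,
    phrase, sentence) and fed to the mTRF estimation."""
    x = np.zeros(int(round(duration_s * sfreq)))
    idx = np.round(np.asarray(boundary_times_s) * sfreq).astype(int)
    x[idx[idx < len(x)]] = 1.0
    return x

# Hypothetical word boundaries (seconds) for one short utterance
word_boundaries = [0.32, 0.61, 1.05, 1.48]
x_word = boundary_regressor(word_boundaries, duration_s=2.0)
print(int(x_word.sum()))  # 4 boundary impulses in a 200-sample series
```

Because each predictor is a one-dimensional impulse train, the resulting TRFs remain directly interpretable, which is the motivation given above for replacing the high-dimensional features.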

Since the reviewer's request also concerns TRF results for the continuous acoustic features, we have added an mTRF analysis in which we fitted the one-dimensional speech envelope to the EEG. We extracted the envelope at 10 ms intervals for both attended and unattended speech and computed mTRFs independently for each subject and sensor using a basis of 50 ms Hamming windows spanning –100 ms to 300 ms relative to envelope onset. The results showed that in hearing-impaired participants, attended speech elicited a significant cluster in the bilateral temporal regions from 270 to 300 ms post-onset (t = 2.40, p = 0.01, Cohen’s d = 0.63). Unattended speech elicited an early cluster in right temporal and occipital regions from –100 ms to –80 ms (t = 3.07, p = 0.001, d = 0.83). Normal-hearing participants showed significant envelope tracking in the left temporal region at 280–300 ms after envelope onset (t = 2.37, p = 0.037, d = 0.48), with no significant cluster for unattended speech. These results further suggest that hearing-impaired listeners may have difficulty suppressing unattended streams. We have added the new envelope TRF results to Figure S3, to the “mTRF results for attended and unattended speech” section on p.7, and to the “mTRF analysis” section in Material and Methods of the revised manuscript.
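The envelope extraction at 10 ms intervals could be sketched as below. The Hilbert-transform approach is one common choice and an assumption on our part, since the manuscript does not specify the extraction method:

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_envelope_10ms(audio, fs):
    """Extract a broadband amplitude envelope and bin it to 10 ms frames.
    Uses the analytic-signal magnitude (one standard method; the paper's
    exact procedure may differ)."""
    env = np.abs(hilbert(audio))            # instantaneous amplitude
    hop = int(fs * 0.010)                   # samples per 10 ms bin
    n = len(env) // hop
    return env[: n * hop].reshape(n, hop).mean(axis=1)  # mean per 10 ms bin

fs = 16000
t = np.arange(fs)                                        # 1 s of audio
audio = np.sin(2 * np.pi * 440 * t / fs) * np.linspace(0, 1, fs)  # ramped tone
env = amplitude_envelope_10ms(audio, fs)
print(len(env))  # 100 envelope samples for 1 s of audio
```

The resulting one-dimensional series, one sample per 10 ms, is the kind of predictor that can then be fitted to the EEG with the Hamming-window mTRF basis described above.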

      The authors' wording about this suggests that these new regressors have a nonzero sample at each linguistic event's offset, not onset. This should also be clarified. As the authors know, the onset would be more standard, and using the offset has implications for understanding the timing of the TRFs, as a phoneme has a different duration than a word, which has a different duration from a sentence, etc.

      In our rate‐model mTRF analyses, we initially labelled linguistic boundaries as “offsets” because our ridge‐regression with HM-LSTM features was aligned to sentence offsets rather than onsets. However, since each offset coincides with the next unit’s onset—and our regressors simply mark these transition points as 1—the “offset” and “onset” models yield identical mTRFs. To avoid confusion, we have relabeled “offset” as “boundary” in Figure S2.

      As discussed in our prior responses, this design was based on the structure of our input to the HM-LSTM model, where each input consists of a pair of sentences encoded in phonemes, such as “t a_1 n əŋ_2 f ei_1 <sep> zh ə_4 sh iii_4 f ei_1 j ii_1” (“It can fly <sep> This is an airplane”). The two sentences are separated by a special <sep> token, and the model’s objective is to determine whether the second sentence follows the first, similar to a next-sentence prediction task. Since the model processes both sentences in full before making a prediction, the neural activations of interest should correspond to the point at which the entire sentence has been processed by humans. To enable a fair comparison between the model’s internal representations and brain responses, we aligned our neural analyses with the sentence offsets, capturing the time window after the sentence has been fully perceived by the participant. Thus, we extracted epochs from -100 to +300 ms relative to each sentence offset, consistent with our model-informed design.

      We understand that phonemes, syllables, words, phrases, and sentences differ in their durations. However, the five hidden activity vectors extracted from the model are designed to capture the representations of these five linguistic levels across the entire sentence. Specifically, for a sentence pair such as “It can fly <sep> This is an airplane,” the first 2048-dimensional vector represents all the phonemes in the two sentences (“t a_1 n əŋ_2 f ei_1 <sep> zh ə_4 sh iii_4 f ei_1 j ii_1”), the second vector captures all the syllables (“ta_1 nəŋ_2 fei_1 <sep> zhə_4 shiii_4 fei_1jii_1”), the third vector represents all the words, the fourth vector captures the phrases, and the fifth vector represents the sentence-level meaning. In our dataset, input pairs consist of adjacent sentences from the stimuli (e.g., Sentence 1 and Sentence 2, Sentence 2 and Sentence 3, and so on), and for each pair, the model generates five 2048-dimensional vectors, each corresponding to a specific linguistic level. To identify the neural correlates of these model-derived features—each intended to represent the full linguistic level across a complete sentence—we focused on the EEG signal surrounding the completion of the second sentence rather than on incremental processing. Accordingly, we extracted epochs from -100 ms to +300 ms relative to the offset of the second sentence and performed ridge regression analyses using the five model features (reduced to 150 dimensions via PCA) at every 50 ms across the epoch. We have added this clarification on p.12 of the revised manuscript.
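The per-sensor, per-time-point ridge fit described above can be sketched as follows, with random placeholders standing in for the real features and EEG. The dimensions follow the text (143 sentences × 4 conditions, 150 PCA components, 9 time points from -100 to +300 ms in 50 ms steps); the ridge penalty is an arbitrary choice here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_epochs, n_sensors, n_lags = 572, 64, 9   # 143 sentences x 4 conditions

# Placeholder HM-LSTM features for one linguistic level: one 2048-dim vector
# per sentence epoch, reduced to 150 dimensions via PCA as described above.
feats = PCA(n_components=150).fit_transform(rng.standard_normal((n_epochs, 2048)))

# Placeholder EEG epochs: epochs x sensors x time points (-100..+300 ms, 50 ms steps)
eeg = rng.standard_normal((n_epochs, n_sensors, n_lags))

# Fit a ridge model per sensor and time point; record in-sample R^2
# (model-fitting performance, with no train/test split, as in the text).
r2 = np.zeros((n_sensors, n_lags))
for s in range(n_sensors):
    for k in range(n_lags):
        model = Ridge(alpha=1.0).fit(feats, eeg[:, s, k])
        r2[s, k] = model.score(feats, eeg[:, s, k])
print(r2.shape)  # (64, 9)
```

Each subject thus yields one R² map of sensors × time points per linguistic level, which is the input to the group-level statistics described elsewhere in the response.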

      About offsets:

      TRFs can still be interpretable using the offset timings though; however, the main original analysis seems to be utilizing the offset times in a different, more confusing way. The authors still seem to be saying that only the peri-offset time of the EEG was analyzed at all, meaning the vast majority of the EEG trial durations do not factor into the main HM-LSTM response results whatsoever. The way the authors describe this does not seem to be present in any other literature, including the papers that they cite. Therefore, much more clarification on this issue is needed. If the authors mean that the regressors are simply time-locked to the EEG by aligning their offsets (rather than their onsets, because they have varying onsets or some such experimental design complexity), then this would be fine. But it does not seem to be what the authors want to say. This may be a miscommunication about the methods, or the authors may have actually only analyzed a small portion of the data. Either way, this should be clarified to be able to be interpretable.

We hope that our response in RE4, along with the supplementary video, has helped clarify this issue. We acknowledge that prior studies have not used EEG data surrounding sentence offsets to examine neural responses at the phoneme or syllable levels. However, this is largely due to the lack of a model that represents all linguistic levels across an entire sentence. There is abundant work comparing model predictors with neural data time-locked to offsets, because offsets mark the point at which participants have already processed the relevant information (Brennan, 2016; Brennan et al., 2016; Gwilliams et al., 2024, 2025). Similarly, in our model–brain alignment study, our goal is to identify neural correlates for each model-derived feature. If we correlated model activity with EEG data aligned to sentence onsets, we would be examining linguistic representations at all levels (from phoneme to sentence) of the whole sentence at a time when participants have not yet heard the sentence. Although this limits our analysis to a subset of the data (143 sentences × 400 ms windows × 4 conditions), it targets the exact moment when full-sentence representations emerge against background speech, allowing us to map each model-derived feature onto its neural signature. We have added this clarification on p.12 of the revised manuscript.

      Reviewer #2 (Public review):

This study presents a valuable finding on the neural encoding of speech in listeners with normal hearing and hearing impairment, uncovering marked differences in how attention to different levels of speech information is allocated, especially when having to selectively attend to one speaker while ignoring an irrelevant speaker. The results overall support the claims of the authors, although a more explicit behavioural task to demonstrate successful attention allocation would have strengthened the study. Importantly, the use of more "temporally continuous" analysis frameworks could have provided a better methodology to assess the entire time course of neural activity during speech listening. Despite these limitations, this interesting work will be useful to the hearing impairment and speech processing research community. The study compares speech-in-quiet vs. multi-talker scenarios, allowing a within-participant assessment of the impact that the addition of a competing talker has on the neural tracking of speech. Moreover, the inclusion of a population with hearing loss is useful to disentangle the effects of attention orienting and hearing ability. The diagnosis of high-frequency hearing loss was done as part of the experimental procedure by professional audiologists, providing tight control over the main contrast of interest for the experiment. The sample size was large, allowing meaningful comparisons between the two populations.

We thank you very much for your appreciation of our research. We have now added a more detailed description of the mTRF analyses on p.13-14 of the revised manuscript.

An HM-LSTM model was employed to jointly extract speech features spanning from the stimulus acoustics to word-level and phrase-level information, represented by embeddings extracted at successive layers of the model. The model was specifically expanded to include lower-level acoustic and phonetic information, reaching a good representation of all intermediate levels of speech. Despite conveniently extracting all features jointly, the HM-LSTM model processes linguistic input sentence-by-sentence, and therefore only allows the corresponding EEG data to be assessed at sentence offset. If I understood correctly, while the sentence information extracted with the HM-LSTM reflects the entire sentence - in terms of its acoustic, phonetic and more abstract linguistic features - it only gives a condensed final representation of the sentence. As such, feature extraction with the HM-LSTM is not compatible with a continuous temporal mapping onto the EEG signal, and this is the main reason behind the authors' decision to fit a regression at nine separate time points surrounding sentence offsets.

      Yes, you are correct. As explained in RE4, the model generates five hidden-layer activity vectors, each intended to represent all the phonemes, syllables, words, phrases within the entire sentence (“a condensed final representation”). This is the primary reason we extract EEG data surrounding the sentence offsets—this time point reflects when the full sentence has been processed by the human brain. We assume that even at this stage, residual neural responses corresponding to each linguistic level are still present and can be meaningfully analyzed.

      While valid and previously used in the literature, this methodology, in the particular context of this experiment, might be obscuring important attentional effects impacted by hearing-loss. By fitting a regression only around sentence-final speech representations, the method might be overlooking the more "online" speech processing dynamics, and only assessing the permanence of information at different speech levels at sentence offset. In other words, the acoustic attentional bias between Attended and Unattended speech might exist even in hearing-impaired participants but, due to a lower encoding or permanence of acoustic information in this population, it might only emerge when using methodologies with a higher temporal resolution, such as Temporal Response Functions (TRFs). If a univariate TRF fit simply on the continuous speech envelope did not show any attentional bias (different trial lengths should not be a problem for fitting TRFs), I would be entirely convinced of the result. For now, I am unsure on how to interpret this finding.

We agree, and we have added the mTRF results using the rate models for the 5 linguistic levels in the prior revision. Each rate model aligns with the boundaries of the linguistic units at one level. As explained in RE3, the rate regressors encode the timing of linguistic unit boundaries, while the model-derived features encode the representational content of the linguistic input. The mTRF results showed patterns similar to those observed using features from our HM-LSTM model with ridge regression (see Figure S2). These results complement each other, each providing informative evidence on the neural tracking of linguistic structures at different levels for the attended and unattended speech.

We have also added TRF results fitting the envelope of attended and unattended speech, sampled every 10 ms, to the whole 10-minute EEG data. Our results showed that in hearing-impaired participants, attended speech elicited a significant cluster in the bilateral temporal regions from 270 to 300 ms post-onset (t = 2.40, p = 0.01, Cohen’s d = 0.63). Unattended speech elicited an early cluster in right temporal and occipital regions from –100 ms to –80 ms (t = 3.07, p = 0.001, d = 0.83). Normal-hearing participants showed significant envelope tracking in the left temporal region at 280–300 ms after envelope onset (t = 2.37, p = 0.037, d = 0.48), with no significant cluster for unattended speech. These results further suggest that hearing-impaired listeners may have difficulty suppressing unattended streams. We have added the new envelope TRF results to Figure S3, to the “mTRF results for attended and unattended speech” section on p.7, and to the “mTRF analysis” section in Material and Methods of the revised manuscript.

Despite my doubts on the appropriateness of condensed speech representations and single-point regression for acoustic features in particular, the current methodology allows the authors to explore their research questions, and the results support their conclusions. This work presents an interesting finding on the limits of attentional bias in a cocktail-party scenario, suggesting that fundamentally different neural attentional filters are employed by listeners with high-frequency hearing loss, even in terms of the tracking of speech acoustics. Moreover, the rich dataset collected by the authors is a great contribution to open science and will offer opportunities for re-analysis.

      We sincerely thank you again for your encouraging comments regarding the impact of our study.

      Reviewer #3 (Public review):

      Summary:

      The authors aimed to investigate how the brain processes different linguistic units (from phonemes to sentences) in challenging listening conditions, such as multi-talker environments, and how this processing differs between individuals with normal hearing and those with hearing impairments. Using a hierarchical language model and EEG data, they sought to understand the neural underpinnings of speech comprehension at various temporal scales and identify specific challenges that hearing-impaired listeners face in noisy settings.

      Strengths:

      Overall, the combination of computational modeling, detailed EEG analysis, and comprehensive experimental design thoroughly investigates the neural mechanisms underlying speech comprehension in complex auditory environments. The use of a hierarchical language model (HM-LSTM) offers a data-driven approach to dissect and analyze linguistic information at multiple temporal scales (phoneme, syllable, word, phrase, and sentence). This model allows for a comprehensive neural encoding examination of how different levels of linguistic processing are represented in the brain. The study includes both single-talker and multi-talker conditions, as well as participants with normal hearing and those with hearing impairments. This design provides a robust framework for comparing neural processing across different listening scenarios and groups.

      Weaknesses:

      The analyses heavily rely on one specific computational model, which limits the robustness of the findings. The use of a single DNN-based hierarchical model to represent linguistic information, while innovative, may not capture the full range of neural coding present in different populations. A low-accuracy regression model-fit does not necessarily indicate the absence of neural coding for a specific type of information. The DNN model represents information in a manner constrained by its architecture and training objectives, which might fit one population better than another without proving the non-existence of such information in the other group. It is also not entirely clear if the DNN model used in this study effectively serves the authors' goal of capturing different linguistic information at various layers. More quantitative metrics on acoustic/linguistic-related downstream tasks, such as speaker identification and phoneme/syllable/word recognition based on these intermediate layers, can better characterize the capacity of the DNN model.

We agree that, before aligning model representations with neural data, it is essential to confirm that the model encodes linguistic information at multiple hierarchical levels. This is the purpose of our validation analysis: we evaluated the model's representations across five layers using a test set of 20 four-syllable sentences in which every syllable shares the same vowel, e.g., “mā ma mà mǎ” (mother scolds horse), “shū shu shǔ shù” (uncle counts numbers; see Table S1). We hypothesized that, for same-vowel sentences, the activity in the phoneme and syllable layers would be more similar than in the other layers. The results confirmed our hypothesis: hidden-layer activity for same-vowel sentences exhibited much more similar distributions at the phoneme and syllable levels compared to those at the word, phrase, and sentence levels. Figure 3C displays the scatter plot of the model activity at the five linguistic levels for each of the 20 four-syllable sentences, after dimensionality reduction using multidimensional scaling (MDS). We used color-coding to represent the activity of the five hidden layers after dimensionality reduction. Each dot on the plot corresponds to one test sentence. Only phonemes are labeled because each syllable in our test sentences contains the same vowel (see Table S1). The plot reveals that model representations at the phoneme and syllable levels are more dispersed for each sentence, while representations at the higher linguistic levels (word, phrase, and sentence) are more centralized. Additionally, similar phonemes tend to cluster together across the phoneme and syllable layers, indicating that the model captures a greater amount of information at these levels when the phonemes within the sentences are similar.
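The MDS projection behind Figure 3C can be sketched as follows. The hidden-layer activity here is a random placeholder standing in for the real HM-LSTM vectors, and the layout of the arrays (20 sentences × 5 levels × 2048 units) follows the description above:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_sentences, n_levels, dim = 20, 5, 2048   # 20 test sentences, 5 hidden layers

# Placeholder hidden-layer activity (the real vectors come from the HM-LSTM)
acts = rng.standard_normal((n_sentences, n_levels, dim))

# Project each level's 20 x 2048 activity matrix down to 2-D with MDS,
# yielding one scatter of 20 points per linguistic level, as in Fig. 3C
coords = np.stack([
    MDS(n_components=2, random_state=0).fit_transform(acts[:, lvl, :])
    for lvl in range(n_levels)
])
print(coords.shape)  # (5, 20, 2)
```

On the real activations, the dispersion of each level's 20 points (e.g., mean distance from their centroid) gives a simple quantitative counterpart to the visual dispersion/centralization contrast described above.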

Apart from the DNN model, we also included the rate models, which simply mark a 1 at each unit boundary across the 5 levels. We performed mTRF analyses with these rate models and found patterns similar to our ridge-regression results with the DNN (see Figure S2). This provides further evidence that the model reliably captures information across all five hierarchical levels.

      Since EEG measures underlying neural activity in near real-time, it is expected that lower-level acoustic information, which is relatively transient, such as phonemes and syllables, would be distributed throughout the time course of the entire sentence. It is not evident if this limited time window effectively captures the neural responses to the entire sentence, especially for lower-level linguistic features. A more comprehensive analysis covering the entire time course of the sentence, or at least a longer temporal window, would provide a clearer understanding of how different linguistic units are processed over time.

We agree that lower-level linguistic features may be distributed throughout the whole sentence; however, using the entire sentence duration was not feasible, as the sentences in the stimuli vary in length, making statistical analysis challenging. Additionally, since the stimuli consist of continuous speech, extending the time window would risk including linguistic units from subsequent sentences. This would introduce ambiguity as to whether the EEG responses correspond to the current or the following sentence. Moreover, our model activity represents a “condensed final representation” at the five linguistic levels for the whole sentence, rather than an incremental representation unfolding during the sentence. We think the -100 to 300 ms time window relative to each sentence offset targets the exact moment when the full sentence has been comprehended and a “condensed final representation” of the whole sentence across the five linguistic levels has been formed in the brain. We have added this clarification on p.13 of the revised manuscript.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Here are some specifics and clarifications of my public review:

      Initially I was interpreting the R squared as a continuous measure of predicted EEG relative to actual EEG, based on an encoding model, but this does not appear to be correct. Thank you for pointing out that the y axis is z-scored R squared in your main ridge regression plots. However, I am not sure why/how you chose to represent this that way. It seems to me that a simple Pearson r would be most informative here (and in line with similar work, including Goldstein et al. 2022 that you mentioned). That way you preserve the sign of the relationships between the regressors and the EEG. With R squared, we have a different interpretation, which is maybe also ok, but I also don't see the point of z-scoring R squared. Another possibility is that when you say "z-transformed" you are referring to the Fisher transformation; is that the case? In the plots you say "normalized", so that sounds like a z-score, but this needs to be clarified; as I say, a simple Pearson r would probably be best.

      We did not use Pearson’s r, as in Goldstein et al. (2022), because our analysis did not involve a train-test split, which was central to their approach. In their study, the data were divided into training and testing sets, and a ridge regression model was trained on the training set. They then used the trained model to predict neural responses on the held-out test set, and calculated Pearson’s r to assess the correlation between the predicted and observed neural responses. As a result, their final metric of model performance was the correlation coefficient (r). In contrast, our analysis is more aligned with standard temporal response function (TRF) approaches. We did not perform a train-test split; instead, we computed the model fitting performance (R²) of the ridge regression model at each sensor and time point for each subject. At the group level, we conducted one-sample t-tests with spatiotemporal cluster-based correction on the R² values to determine which sensors and time windows showed significantly greater R² values than baseline. To establish a baseline, we z-scored the R² values across sensors and time points, effectively centering the distribution around zero. This normalization allowed us to interpret deviations from the mean R² as meaningful increases in model performance and provided a suitable baseline for the statistical tests. We have added this clarification on p.13 of the revised manuscript.
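      A minimal sketch of this normalization and group-level test (purely illustrative: the array shapes and random data are invented, NumPy/SciPy are assumed tools, and the spatiotemporal cluster-based correction is omitted):

```python
import numpy as np
from scipy import stats

# Illustrative R^2 values: subjects x sensors x lags (shapes are made up)
rng = np.random.default_rng(0)
r2 = rng.random((20, 64, 9))

# z-score R^2 across all sensors and lags within each subject,
# centering each subject's distribution of R^2 around zero
flat = r2.reshape(r2.shape[0], -1)
z = (flat - flat.mean(axis=1, keepdims=True)) / flat.std(axis=1, keepdims=True)
z = z.reshape(r2.shape)

# group-level one-sample t-test against zero at each sensor/lag
# (cluster-based correction omitted in this sketch)
t_vals, p_vals = stats.ttest_1samp(z, popmean=0.0, axis=0)
print(t_vals.shape)  # (64, 9)
```

      After this z-scoring, values above zero can be read as sensors and lags where the model fits better than that subject's average fit, which is the baseline the statistical tests are run against.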

      Thank you for doing the TRF analysis, but where are the acoustic TRFs, analogous to the acoustic results for your HM-LSTM ridge analyses? And what tools did you use to do the TRF analysis? If it is something like the mTRF MATLAB toolbox, then it is also using ridge regression, as you have already done in your original analysis, correct? If so, then it is pretty much the same as your original analysis, just with more dense timepoints, correct? This is what I meant by referring to TRFs originally, because what you have basically done originally was to make a 9-point TRF (and then the plots and analyses are contrasts of pairs of those), with lags between -100 and 300 ms relative to the temporal alignment between the regressors and the EEG, I think (more on this below).

      Also with the new TRF analysis, you say that the regressors/predictors had "a value of 1 at each unit boundary offset". So this means you re-made these predictors to be discrete as I and reviewer 3 were mentioning before (rather than using the HM-LSTM model layer(s)), and also, that you put each phoneme/word/etc. marker at its offset, rather than its onset? I'm also confused as to why you would do this rather than the onset, but I suppose it doesn't change the interpretation very much, just that the TRFs are slid over by a small amount.

      We used the Python package Eelbrain (https://eelbrain.readthedocs.io/en/r0.39/auto_examples/temporal-response-functions/trf_intro.html) to conduct the multivariate temporal response function (mTRF) analyses. As we previously explained in our response to Reviewer 3, we did not apply mTRF to the acoustic features due to the high dimensionality of the input. Specifically, our acoustic representation consists of a 130-dimensional vector sampled every 10 ms throughout the speech stimuli (a 129-dimensional spectrogram plus a 1-dimensional amplitude envelope), which would render the resulting 130 TRF weight time courses uninterpretable. However, we have now added TRF results for the 1-dimensional envelope of the attended and unattended speech, sampled every 10 ms.

      A similar constraint applied to the hidden-layer activations from our HM-LSTM model for the five linguistic features: even after dimensionality reduction via PCA, each level still yields a 150-dimensional vector, likewise preventing its use in mTRF analyses. To address this, we instead used binary predictors marking the offset of each linguistic unit (phoneme, syllable, word, phrase, sentence). These rate models are represented as five distinct binary time series, each aligned with the timing of the corresponding linguistic unit, making them well-suited for mTRF analysis. It is important to note that these rate predictors differ from the HM-LSTM-derived features: they encode only the timing of linguistic unit boundaries, not the content or representational structure of the linguistic input. Therefore, we do not consider the mTRF analyses to be equivalent to the ridge regression analyses based on HM-LSTM features.
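      Such a rate predictor is simply a binary impulse train. A minimal sketch (the sampling rate, helper function, and boundary times below are invented for illustration, not taken from the actual pipeline):

```python
import numpy as np

def rate_predictor(boundary_times_s, duration_s, fs=100):
    """Binary time series with a 1 at the sample nearest each unit boundary."""
    n = int(round(duration_s * fs))
    x = np.zeros(n)
    idx = np.round(np.asarray(boundary_times_s) * fs).astype(int)
    x[idx[idx < n]] = 1.0  # mark each boundary; ignore any past the end
    return x

# e.g., hypothetical word boundaries (in seconds) within 2 s of speech
word_pred = rate_predictor([0.31, 0.74, 1.22, 1.80], duration_s=2.0)
print(word_pred.sum())  # 4.0
```

      One such series per linguistic level (phoneme, syllable, word, phrase, sentence) can then be supplied jointly to an mTRF estimator such as Eelbrain's.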

      Regarding onset vs. offset: as explained in RE4, we labelled them “offsets” because our ridge regression with HM-LSTM features was aligned to sentence offsets rather than onsets (see RE4 and RE15 below for the rationale for using sentence offsets). However, since each unit's offset coincides with the next unit's onset (and the rate models simply mark these transition points as 1), the “offset” and “onset” models yield identical mTRFs. To avoid confusion, we have relabeled “offset” as “boundary” in Figure S2.

      I'm still confused about offsets generally. Does this maybe mean that the EEG, and each predictor, are all aligned by aligning their endpoints, which are usually/always the ends of sentences? So e.g. all the phoneme activity in the phoneme regressor actually corresponds to those phonemes of the stimuli in the EEG time, but those regressors and EEG do not have a common starting time (one trial to the next maybe?), so they have to be aligned with their ends instead?

      We chose to use sentence offsets rather than onsets based on the structure of the input to our HM-LSTM model, where each input consists of a pair of sentences encoded in phonemes, such as “t a_1 n əŋ_2 f ei_1 <sep> zh ə_4 sh iii_4 f ei_1 j ii_1” (“It can fly <sep> This is an airplane”). The two sentences are separated by a special <sep> token, and the model's objective is to determine whether the second sentence follows the first, similar to a next-sentence prediction task. Since the model processes both sentences in full before making a prediction, the neural activations of interest should correspond to the point at which the entire sentence has been processed. To enable a fair comparison between the model's internal representations and brain responses, we aligned our neural analyses with the sentence offsets, capturing the time window after the sentence has been fully perceived by the participant. Thus, we extracted epochs from -100 to +300 ms relative to each sentence offset, consistent with our model-informed design. If we aligned model activity with EEG data time-locked to sentence onsets, we would be examining linguistic representations at all levels (from phoneme to sentence) of the whole sentence at a time when participants have not yet heard the sentence. By contrast, aligning to sentence offsets ensures that participants have constructed a full-sentence representation.

      We understand that it may be confusing why the regressors at each level are not aligned to the offsets of their own units in the data. The hidden-layer activations of the HM-LSTM model corresponding to the five linguistic levels (phoneme, syllable, word, phrase, sentence) are consistently 150-dimensional vectors after PCA reduction. As a result, for each input sentence pair, the model produces five distinct hidden-layer activations, each capturing the representational content associated with one linguistic level for the whole sentence. We believe our -100 to 300 ms time window relative to sentence offset reflects a meaningful period during which the brain integrates and comprehends information across multiple linguistic levels.

      Being "time-locked to the offset of each sentence at nine latencies" is not something I can really find in any of the references that you mentioned, regarding the offset aspect of this method. Can you point me more specifically to what you are trying to reference with that, or further explain? You said that "predicting EEG signals around the offset of each sentence" is "a method commonly employed in the literature", but the example you gave of Goldstein 2022 is using onsets of words, which is indeed much more in line with what I would expect (not offsets of sentences).

      You are correct that Goldstein (2022) aligned model predictions to onsets rather than offsets; however, many studies in the literature align model predictions with unit offsets, typically because offsets mark the point at which participants have already processed the relevant information (Brennan, 2016; Brennan et al., 2016; Gwilliams et al., 2024, 2025). Similarly, in our study, we aim to identify neural correlates for each model-derived feature. If we correlate model activity with EEG data aligned to sentence onsets, we would be examining linguistic representations at all levels (from phoneme to sentence) of the whole sentence at a time when participants have not yet heard the sentence. By contrast, aligning to sentence offsets ensures that participants have constructed a full-sentence representation. Although this limits our analysis to a subset of the data (143 sentences × 400 ms windows × 4 conditions), it targets the exact moment when full-sentence representations emerge against background speech, allowing us to map each model-derived feature onto its neural signature. We have added this clarification on p.12 of the revised manuscript.

      This new sentence does not make sense to me: "The regressors are aligned to sentence offsets because all our regressors are taken from the hidden layer of our HM-LSTM model, which generates vector representations corresponding to the five linguistic levels of the entire sentence".

      Thank you for the suggestion. We hope our responses in RE4, 15 and 16, along with our supplementary video, have now clarified the issue. We have deleted the sentence and provided a more detailed explanation on p.12 of the revised manuscript: The regressors are aligned to sentence offsets because our goal is to identify neural correlates for each model-derived feature of a whole sentence. If we align model activity with EEG data time-locked to sentence onsets, we would be probing neural responses to linguistic levels (from phoneme to sentence) of the whole sentence at a time when participants have not yet processed the sentence. By contrast, aligning to sentence offsets ensures that participants have constructed a full-sentence representation. Although this limits our analysis to a subset of the data (143 sentences × 2 sections × 400 ms windows), it targets the exact moment when full-sentence representations emerge against background speech, allowing us to map each model-derived feature onto its neural signature. We understand that phonemes, syllables, words, phrases, and sentences differ in their durations. However, the five hidden activity vectors extracted from the model are designed to capture the representations of these five linguistic levels across the entire sentence. Specifically, for a sentence pair such as “It can fly <sep> This is an airplane,” the first 2048-dimensional vector represents all the phonemes in the two sentences (“t a_1 n əŋ_2 f ei_1 <sep> zh ə_4 sh iii_4 f ei_1 j ii_1”), the second vector captures all the syllables (“ta_1 nəŋ_2 fei_1 <sep> zhə_4 shiii_4 fei_1jii_1”), the third vector represents all the words, the fourth vector captures the phrases, and the fifth vector represents the sentence-level meaning.
In our dataset, input pairs consist of adjacent sentences from the stimuli (e.g., Sentence 1 and Sentence 2, Sentence 2 and Sentence 3, and so on), and for each pair, the model generates five 2048-dimensional vectors, each corresponding to a specific linguistic level. To identify the neural correlates of these model-derived features, each of which is intended to represent a full linguistic level across a complete sentence, we focused on the EEG signal surrounding the completion of the second sentence rather than on incremental processing. Accordingly, we extracted epochs from -100 ms to +300 ms relative to the offset of the second sentence and performed ridge regression analyses using the five model features (reduced to 150 dimensions via PCA) at every 50 ms across the epoch.
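      The per-lag regression described here might be sketched as follows (scikit-learn is an assumption, and all data and dimensions are invented for illustration: 286 sentence epochs, 2048-dimensional model vectors reduced to 150 PCA components, 64 sensors, 9 lags):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
features = rng.standard_normal((286, 2048))  # one model vector per sentence pair
eeg = rng.standard_normal((286, 64, 9))      # epochs x sensors x 50 ms lags

# reduce the model features to 150 dimensions, as in the analysis
X = PCA(n_components=150).fit_transform(features)

# fit one ridge model per sensor and lag; keep the in-sample R^2
r2 = np.zeros((64, 9))
for s in range(64):
    for lag in range(9):
        fit = Ridge(alpha=1.0).fit(X, eeg[:, s, lag])
        r2[s, lag] = fit.score(X, eeg[:, s, lag])
print(r2.shape)  # (64, 9)
```

      The resulting sensor-by-lag R² map is what is then z-scored and submitted to the group-level statistics.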

      More on the issue of sentence offsets: In response to reviewer 3's question about -100 - 300 ms around sentence offset, you said "Using the entire sentence duration was not feasible, as the sentences in the stimuli vary in length, making statistical analysis challenging. Additionally, since the stimuli consist of continuous speech, extending the time window would risk including linguistic units from subsequent sentence." This does not make sense to me, so can you elaborate? It sounds like you are actually saying that you only analyzed 400 ms of each trial, but that cannot be what you mean.

      Yes, we analyzed only the 400 ms window surrounding each sentence offset. Although this represents just a subset of our data (143 sentences × 400 ms × 4 conditions), it precisely captures when full-sentence representations emerge against background speech. Because our model produces a single, condensed representation for each linguistic level over the entire sentence—rather than incrementally—we think it is more appropriate to align to the period surrounding sentence offsets. Additionally, extending the window (e.g. to 2 seconds) would risk overlapping adjacent sentences, since sentence lengths vary. Our focus is on the exact period when integrated, level-specific information for each sentence has formed in the brain, and our results already demonstrate different response patterns to different linguistic levels for the two listener groups within this interval. We have added this clarification on p.13 of the revised manuscript.

      In your mTRF analysis, you are now saying that the discrete predictors have "a value of 1" at each of the "boundary offsets", and those TRFs look very similar to your original plots. It sounds to me like you should not be referring to time zero in your original ridge analysis as "sentence offset". If what you mean is that sentence offset time is merely how you aligned the regressors and EEG in time, then your time zero still has a standard, typical TRF interpretation. It is just the point in time, or lag, at which the regressor(s) and EEG are aligned. So activity before zero is "predictive" and activity after zero is "reactive", to think of it crudely. So also in the text, when you say things like "50-150 ms after the sentence offsets", I think this is not really what you mean. I think you are referring to the lags of 50 - 150 ms, relative to the alignment of the regressor and the EEG.

      Thank you very much for the explanation. We agree that, in our ridge-regression time course, pre-zero lags index “predictive” processing and post-zero lags index “reactive” processing. Unlike TRF analysis, we applied ridge regression to our high-dimensional model features at nine discrete lags around the sentence offset. At each lag, we tested whether the regression score exceeded a baseline defined as the mean regression score across all lags. For example, finding a significantly higher regression score between 50 and 150 ms suggests that our regressor reliably predicted EEG activity in that time window. So here time zero refers to the precise moment of the sentence offset, not merely the point at which the regressor and the EEG were aligned.

      I look forward to discussing how much of my interpretation here makes sense or doesn't, both with the authors and reviewers.

      Thank you very much for this very constructive feedback, and we hope that we have addressed all your questions.

    1. R0:

      Reviewer #1: This manuscript addresses antimicrobial resistance in Ecuador through a One Health lens, focusing on governance, infrastructure, and equity. The topic is highly relevant to PLOS Global Public Health, particularly given the emphasis on health systems, intersectoral governance, and equity in low and middle income country contexts. The study makes a valuable contribution to regional and global discussions on AMR governance. Some points need to be addressed:

      1. While the conclusions are generally consistent with the qualitative findings, some claims, particularly those related to macro level political shifts, austerity policies, and governance deterioration, would benefit from clearer and more explicit linkage to the empirical data presented. In several instances, the discussion moves toward a normative or interpretive tone that appears to draw as much from secondary literature as from the study’s primary data. Strengthening signposting between interview findings, document analysis, and specific conclusions would improve analytical clarity.

      2. The manuscript would benefit from more explicit clarification that the study is a qualitative governance and policy analysis rather than an epidemiological assessment of antimicrobial resistance trends. Readers may otherwise expect microbiological or quantitative AMR indicators, which are outside the scope of this work but not always clearly distinguished in the framing.

      3. The Data Availability Statement indicates that all relevant data are included within the manuscript and that additional information is available upon reasonable request. However, this does not fully meet PLOS data policy requirements. The primary qualitative data underlying the findings, such as anonymized interview transcripts, coded data excerpts, or NVivo codebooks, are not publicly available as supplementary files or deposited in a repository. If there are ethical or confidentiality constraints that prevent public sharing of these materials, these restrictions should be clearly specified in the Data Availability Statement. Alternatively, the authors are encouraged to share de-identified qualitative data, coding frameworks, or analytic matrices as Supporting Information to enhance transparency and reproducibility.

      Minor suggestions:

      a) Consider minor language and stylistic revisions throughout the manuscript to improve clarity and flow, particularly in the Introduction and Discussion sections.

      b) Ensure consistent terminology when referring to governance structures, committees, and surveillance systems.

      c) Some tables (e.g., interview results) could benefit from brief interpretive summaries to guide readers unfamiliar with the Ecuadorian institutional context.

      The equity analysis is a strong component of the manuscript; however, explicitly distinguishing between findings derived from interview data versus document analysis would further strengthen this section.

      Reviewer #2: Overview: This study examines national approaches to addressing antimicrobial resistance (AMR) in Ecuador from a One Health (OH) perspective, with emphasis on governance, public policy, health infrastructure, and equity. The authors use a qualitative design combining document review, scientific literature analysis, and semi-structured interviews with key informants representing multiple OH sectors. The manuscript offers a useful overview of the challenges Ecuador faces in implementing an OH approach to AMR prevention. However, many of the broader claims are not sufficiently supported by the evidence currently presented. In particular, findings from the document analysis, the central component of the study, are not reported in a clear or substantive way, making it difficult to assess how the conclusions were derived. Strengthening the presentation of document-analysis results, clarifying how these findings were integrated with interview data, and improving the organization and flow of the manuscript would substantially increase its rigor and impact. With these revisions, the paper has the potential to become a valuable contribution to the literature on AMR and One Health in Ecuador.

      Major revisions

      • The Introduction would benefit from a brief description of Ecuador's National Plan for the Prevention and Control of AMR (2019–2023), including its overarching goals, structure, key components/strategic axes, and intended governance/implementation approach. This context is necessary for readers to understand what was constrained in implementation and to interpret the claims made in the Discussion and Conclusions.

      • The Methods section needs substantial revision to clearly describe how the qualitative research was conducted and analyzed. I recommend aligning the reporting with SRQR (Standards for Reporting Qualitative Research) and citing: O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Academic Medicine. 2014;89(9):1245-1251. Please consider including an SRQR checklist as Supplementary Information to improve transparency and reproducibility of the qualitative analysis.

      • The manuscript currently provides limited explicit reporting of findings from the document analysis, despite this being a central component of the study. Please present clearer, more detailed results from the document analysis (e.g., what patterns/themes emerged, concrete examples), and explain how these findings were integrated with (or triangulated against) the semi-structured interview data.

      • As written, the Results and Discussion sections are difficult to follow. Consider restructuring the manuscript around the four analytical themes/framework domains used in the study: 1. Intersectoral governance analysis; 2. Situational analysis; 3. Transitions toward One Health; 4. Equity analysis using a GBA+ lens. Using these as consistent subheadings throughout would strengthen coherence and readability.

      • The Discussion does not yet fully unpack what the findings mean, nor does it adequately situate them in relation to experiences from other countries (Latin America, LMIC settings, and high-income settings where implementation has been more effective or similarly constrained). Additionally, the manuscript states that it proposes a "context-specific action framework," but this framework is not clearly presented or easy to locate. If this is a key contribution ("So what? What now?"), please make it explicit.

      • Several conclusions currently extend beyond what is clearly supported by the Results section. Please ensure the Conclusions are tightly grounded in the reported evidence (from both document analysis and interviews), or revise/soften claims where direct supporting data are not presented.

      Minor revisions

      Introduction
      • Line 46: Please briefly define selective pressure and explain how it contributes to the emergence and spread of antimicrobial-resistant microorganisms.
      • Line 59: Before discussing constraints, it would help to briefly describe the Ecuadorian National Plan for the Prevention and Control of AMR (2019–2023), for example, its overarching goals, structure, key components/strategic axes, and intended governance/implementation approach. This context will help readers understand what specific aspects were constrained.
      • Line 72: Grammar: “reduced” instead of “reduces.”
      • Lines 75–78: These statements read as interpretive claims; please clarify whether they are based on cited literature or derived from your data. If they are claims about broader context, references are needed.
      • Line 75: Consider starting a new paragraph around here to introduce the National Plan/Committee context more clearly before transitioning into limitations.

      Methods
      • Line 106: Please briefly define semi-structured interviews and include a reference for the approach.
      • Lines 106–108: The study objective is already stated earlier; consider removing repeated objective language here to streamline the Methods.
      • Recommend adding a clearly labeled Ethics subsection (IRB approval/waiver, consent procedures, confidentiality protections).
      • Lines 150–151, 157–158, 160–161: These appear related and could be consolidated into one coherent paragraph to improve flow.
      • Table 1: Please provide more detail on the “affiliated agencies”/“agencies” included. For example, within the Ministry of Health, does this include INSPI or other specific bodies? Consider organizing the table using headings aligned with your interview sampling frame (e.g., human health, animal health, environment, academia, civil society) to match the manuscript text.
      • Line 173: Please define the acronym GBA+ at first use.
      • The Methods section would benefit from clearer subsections. Suggested structure: 1. Study design and setting; 2. Sampling and participants (sampling strategy, eligibility criteria, recruitment, number approached/interviewed; how you determined sampling adequacy/saturation); 3. Data sources and data collection (document analysis: document types, inclusion criteria, extraction approach; interviews: interview guide development, interviewer training/positionality if relevant, interview mode, audio recording, transcription/translation, any iterative changes to guides); 4. Data management (storage/security, de-identification/anonymization, coding workflow); 5. Data analysis (analytic approach for documents and interviews; how themes were developed; triangulation across methods; reflexivity/rigor strategies such as audit trail, double coding, member checking if used).
      • Consider including (as Supplementary Information) an SRQR checklist to improve transparency.

      Results
      • The Results section would be clearer if organized explicitly around your four analytical themes/framework domains: 1. Intersectoral governance analysis; 2. Situational analysis; 3. Transitions toward One Health; 4. Equity analysis using a GBA+ lens. Consider using these as subheadings and presenting findings under each.
      • There is currently little explicit reporting of what was found from the document analysis. Please include concrete results from that component (e.g., what patterns/gaps were identified, and specific examples).
      • Lines 207–212: This reads like interpretation more appropriate for the Discussion (and would likely need supporting references if it’s a broader claim). Consider moving it.
      • Lines 223–226: These statements also appear interpretive and would fit better in the Discussion.
      • Line 228: “Barriers and facilitators” are introduced here but not clearly set up earlier. If identifying barriers/facilitators is a central objective, please introduce it in the Introduction/Aims and ensure consistent framing throughout.
      • Lines 234–241; 243–249: These sections read like discussion/interpretation rather than results. Consider revising to focus on what participants/documents explicitly reported (with evidence) and move broader implications to the Discussion.
      • Consider adding a small number of representative verbatim quotes from the semi-structured interviews to support each major theme. Including 1–2 quotes per theme (with anonymized participant identifiers/roles) would strengthen credibility and transparency and is standard for reporting semi-structured interview findings. If space is limited, quotes can be placed in a table or supplement.

      Discussion
      • Consider organizing the Discussion using the same four analytical themes as the Results to improve coherence and readability.
      • The Discussion would benefit from deeper comparison with related work from Latin America and other LMIC settings, as well as contrasting with experiences in high-income settings where national AMR plans may have been implemented more effectively. This would strengthen interpretation and generalizability.
      • The Introduction indicates that a “context-specific action framework” is proposed; however, this is not easy to locate in the current manuscript. Please clearly identify where the framework is presented (potentially Lines 320–327?) and consider adding a figure/table or a clearly labeled subsection so readers can easily find and understand it.

      Conclusion
      • Overall, the conclusions are plausible, but some claims appear stronger than what is currently supported by the Results section, especially without clearly presented document-analysis findings.
      • For example, the statement about deterioration in governance capacities, information system interoperability, laboratory infrastructure, and budget allocations would be strengthened by explicit evidence from the document analysis and/or interviews. If budget shifts were assessed, please report what sources were used and what changes were observed; if not directly assessed, consider softening the language or clarifying that it reflects stakeholder perceptions rather than documented budgetary evidence.

    1. Reviewer #1 (Public review):

      Summary:

      Sullivan and colleagues examined the modulation of reflexive visuomotor responses during collaboration between pairs of participants performing a joint reaching movement to a target. In their experiments, the players jointly controlled a cursor that they had to move towards narrow or wide targets. In each experimental block, each participant had a different type of target they had to move the joint cursor to. During the experiment, the authors used lateral perturbation of the cursor to test participants' fast feedback responses to the different target types. The authors suggest participants integrate the target type and related cost of their partner into their own movements, which suggests that visuomotor gains are affected by the partner's task.

      Strengths:

      The topic of the manuscript is very interesting, and the authors are using well-established methodology to test their hypothesis. They combine experimental studies with optimal control models to further support their work. Overall, the manuscript is very timely and shows important findings - that the feedback responses reflect both our and our partner's tasks.

      Weaknesses:

      However, in the current version of the manuscript, I believe the results could also be interpreted differently, which suggests that the authors should provide further support for their hypothesis and conclusions.

      Major Comments:

      (1) Results of the relevant conditions:

      In addition to the authors' explanation regarding the results, it is also possible that the results represent a simple modulation of the reflexive response to a scaled version of cursor movement. That is, when the cursor is partially controlled by a partner, which also contributes to reducing movement error, it can also be interpreted by the sensorimotor system as a scaling of hand-to-cursor movement. In this case, the reflexes are modulated according to a scaling factor (how much do I need to move to bring the cursor to the target). I believe that a single-agent simulation of an OFC model with a scaling factor in the lateral direction can generate the same predictions as those presented by the authors in this study. In other words, maybe the controller has learned about the nature of the perturbation in each specific context, that in some conditions I need to control strongly, whereas in others I do not (without having any model of the partner). I suggest that the authors demonstrate how they can distinguish their interpretation of the results from other explanations.

      (2) The effect of the partner target:

      The authors presented both self and partner targets together. While the effect of each target type, presented separately, is known, it is unclear how presenting both simultaneously affects individual response. That is, does a small target with a background of the wide target affect the reflexive response in the case of a single participant moving? The results of Experiment 2, comparing the case of partner- and self-relevant targets versus partner-irrelevant and self-relevant targets, may suggest that the system acted based on the relevant target, regardless of the presence and instructions regarding the self-target.

      (3) Experiment instructions:

      It is unclear what the general instructions were for the participants and whether the instructions provided set the proposed weighted cost, which could be altered with different instructions.

(4) Some work has shown that the gain of visuomotor feedback responses reflects the time to target and that this is updated online after a perturbation (Cesonis & Franklin, 2020, eNeuro; Cesonis & Franklin, 2021, NBDT; also related to Crevecoeur et al., 2013, J Neurophysiol). These models would predict different feedback gains depending on the distance remaining to the target for the participant and the time available to correct for the jump, which is directly affected by the small or large targets. Could this time to target, rather than a model of the partner, explain the results? I don't believe that this is the case, but the authors should try to rule out other interpretations. This is maybe a minor point, but perhaps more important is the location (and time remaining) of each participant at the time of the jump. It appears from the figures that this might be affected by the condition (given the change in movement lengths; see Figure 3B & C). If this is the case, then could some of the feedback gain be related to these parameters and not to the model of the partner, as suggested? Some evidence to rule this out would be a good addition to the paper, for example the distance of each partner at the time of the perturbation. In addition, please analyze the synchrony of the two partners' movements.

    1. eLife Assessment

This important study addresses a classic debate in visual processing, using a strong method applied to a rare clinical population to evaluate hierarchical models of visual object perception. The paper finds only partial support for the hierarchical model: as expected, neural responses in ventral visual cortex show increased representational selectivity for faces along the posterior-anterior axis, but the onsets of the signals do not show a temporal hierarchy, indicating more parallel processing. The iEEG dataset is impressive, but the evidence for lack of temporal hierarchy is incomplete: essential quality checks need to be performed, and statistical analyses adapted to ensure that the data and analyses would be able to reveal temporal hierarchy if it were present in the data.

    2. Reviewer #2 (Public review):

      Summary:

      This very ambitious project addresses one of the core questions in visual processing related to the underlying anatomical and functional architecture. Using a large sample of rare and high-quality EEG recordings in humans, the authors assess whether face-selectivity is organised along a posterior-anterior gradient, with selectivity and timing increasing from posterior to anterior regions. The evidence suggests that it is the case for selectivity, but the data are more mixed about the temporal organisation, which the authors use to conclude that the classic temporal hierarchy described in textbooks might be questioned, at least when it comes to face processing.

      Strengths:

      A huge amount of work went into collecting this highly valuable dataset of rare intracranial EEG recordings in humans. The data alone are valuable, assuming they are shared in an easily accessible and documented format. Currently, the OSF repository linked in the article is empty, so no assessment of the data can be made. The topic is important, and a key question in the field is addressed. The EEG methodology is strong, relying on a well-established and high SNR SSVEP method. The method is particularly well-suited to clinical populations, leading to interpretable data in a few minutes of recordings. The authors have attempted to quantify the data in many different ways and provided various estimates of selectivity and timing, with matching measures of uncertainty. Non-parametric confidence intervals and comparisons are provided. Collectively, the various analyses and rich illustrations provide superficially convincing evidence in favour of the conclusions.

      Weaknesses:

      (1) The work was not pre-registered, and there is no sample size justification, whether for participants or trials/sequences. So a statistical reviewer should assess the sensitivity of the analyses to different approaches.

      (2) Frequentist NHST is used to claim lack of effects, which is inappropriate, see for instance:

      Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, P values, confidence intervals, and power: A guide to misinterpretations. European Journal of Epidemiology, 31(4), 337-350. https://doi.org/10.1007/s10654-016-0149-3

      Rouder, J. N., Morey, R. D., Verhagen, J., Province, J. M., & Wagenmakers, E.-J. (2016). Is There a Free Lunch in Inference? Topics in Cognitive Science, 8(3), 520-547. https://doi.org/10.1111/tops.12214

      (3) In the frequentist realm, demonstrating similar effects between groups requires equivalence testing, with bounds (minimum effect sizes of interest) that should be pre-registered:

      Campbell, H., & Gustafson, P. (2024). The Bayes factor, HDI-ROPE, and frequentist equivalence tests can all be reverse engineered-Almost exactly-From one another: Reply to Linde et al. (2021). Psychological Methods, 29(3), 613-623. https://doi.org/10.1037/met0000507

      Riesthuis, P. (2024). Simulation-Based Power Analyses for the Smallest Effect Size of Interest: A Confidence-Interval Approach for Minimum-Effect and Equivalence Testing. Advances in Methods and Practices in Psychological Science, 7(2), 25152459241240722. https://doi.org/10.1177/25152459241240722

      (4) The lack of consideration for sample sizes, the lack of pre-registration, and the lack of a method to support the null (a cornerstone of this project to demonstrate equivalence onsets between areas), suggest that the work is exploratory. This is a strength: we need rich datasets to explore, test tools and generate new hypotheses. I strongly recommend embracing the exploration philosophy, and removing all inferential statistics: instead, provide even more detailed graphical representations (include onset distributions) and share the data immediately with all the pre-processing and analysis code.

(5) Even if the work were pre-registered, it would be very difficult to calculate p-values conditional on all the uncertainty around the number of participants, the number of contacts and the number of trials, as they are random variables, and sampling distributions of key inferences should be integrated over these unknown sources of variability. The difficulty of calculating/interpreting p-values that are conditional on so many pre-processing stages and sources of uncertainty is traditionally swept under the rug, but nevertheless well documented:

      Kruschke, J.K. (2013) Bayesian estimation supersedes the t test. J Exp Psychol Gen, 142, 573-603. https://pubmed.ncbi.nlm.nih.gov/22774788/

Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779-804. https://doi.org/10.3758/BF03194105

      (6) Currently, there is no convincing evidence in the article to clearly support the main claims.

      Bootstrap confidence intervals were used to provide measures of uncertainty. However, the bootstrapping did not take the structure of the data into account, collapsing across important dependencies in that nested structure: participants > hemispheres > contacts > conditions > trials.

      Ignoring data dependencies and the uncertainty from trials could lead to a distorted CI. Sampling contacts with replacement is inappropriate because it breaks the structure of the data, mixing degrees of freedom across different levels of analysis. The key rule of the bootstrap is to follow the data acquisition process, and therefore, sampling participants with replacement should come first. In a hierarchical bootstrap, the process can be repeated at nested levels, so that for each resampled participant, then contacts are resampled (if treated as a random variable), then trials/sequences are resampled, keeping paired measurements together (hemispheres, and typically contacts in a standard EEG experiment with fixed montage). The same hierarchical resampling should be applied to all measurements and inferences to capture all sources of variability. Selectivity and timing should be quantified at each contact after resampling of trials/sequences before integrating across hemispheres and participants using appropriate and justified summary measures.
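The resampling scheme described above (participants first, then contacts within each resampled participant, then trials) can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline; the nested-dict data layout and all names are assumptions for the sake of the example:

```python
import random

def hierarchical_bootstrap(data, n_boot=1000, seed=0):
    """Hierarchical bootstrap CI for a grand mean.

    `data` maps participant -> contact -> list of per-trial values.
    Resampling follows the data acquisition hierarchy: participants are
    resampled with replacement first, then contacts within each resampled
    participant, then trials within each resampled contact.
    """
    rng = random.Random(seed)
    participants = list(data.keys())
    boot_means = []
    for _ in range(n_boot):
        # Level 1: resample participants with replacement.
        p_sample = [rng.choice(participants) for _ in participants]
        contact_means = []
        for p in p_sample:
            # Level 2: resample contacts within this participant.
            contacts = list(data[p].keys())
            c_sample = [rng.choice(contacts) for _ in contacts]
            for c in c_sample:
                # Level 3: resample trials, then summarise per contact.
                trials = data[p][c]
                t_sample = [rng.choice(trials) for _ in trials]
                contact_means.append(sum(t_sample) / len(t_sample))
        boot_means.append(sum(contact_means) / len(contact_means))
    boot_means.sort()
    return boot_means[int(0.025 * n_boot)], boot_means[int(0.975 * n_boot)]
```

The key point of the sketch is that each bootstrap replicate propagates uncertainty from every level of the hierarchy, instead of pooling contacts across participants as if they were independent observations.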

The authors already recognise part of the problem, as they provide within-participant analyses. This is a very good step, inasmuch as it addresses the issue of mixing up degrees of freedom across levels, but unfortunately these analyses are plagued by small sample sizes, making claims about the lack of differences even more problematic (the classic 'lack of evidence equals evidence of absence' fallacy). In addition, there seem to be discrepancies between the mean and CI in some cases: 15 [-20, 20]; 8 [-24, 24].

      (7) Three other issues related to onsets:

      (a) FDR correction typically doesn't allow localisation claims, similarly to cluster inferences:

      Winkler, A. M., Taylor, P. A., Nichols, T. E., & Rorden, C. (2024). False Discovery Rate and Localizing Power (No. arXiv:2401.03554). arXiv. https://doi.org/10.48550/arXiv.2401.03554

      Rousselet, G. A. (2025). Using cluster-based permutation tests to estimate MEG/EEG onsets: How bad is it? European Journal of Neuroscience, 61(1), e16618. https://doi.org/10.1111/ejn.16618

(b) Percentile bootstrap confidence intervals are inaccurate when applied to means. Alternatively, use a bootstrap-t method, or use the percentile bootstrap in conjunction with a robust measure of central tendency, such as a trimmed mean.

      Rousselet, G. A., Pernet, C. R., & Wilcox, R. R. (2021). The Percentile Bootstrap: A Primer With Step-by-Step Instructions in R. Advances in Methods and Practices in Psychological Science, 4(1), 2515245920911881. https://doi.org/10.1177/2515245920911881
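As a toy illustration of the robust measure of central tendency suggested in point (b), a trimmed mean discards a fixed fraction of the most extreme values before averaging. This is a minimal sketch (the 20% trim level is just a common default, not a prescription):

```python
def trimmed_mean(values, trim=0.2):
    """Mean after discarding the lowest and highest `trim` fraction of values."""
    xs = sorted(values)
    k = int(len(xs) * trim)  # number of points trimmed from each tail
    kept = xs[k:len(xs) - k]
    return sum(kept) / len(kept)
```

Because the extreme tails are dropped, a single outlying onset estimate has far less influence on the summary than it would on an ordinary mean, which is what makes the percentile bootstrap better behaved around this estimator.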

      (c) Defining onsets based on an arbitrary "at least 30 ms" rule is not recommended:

      Piai, V., Dahlslätt, K., & Maris, E. (2015). Statistically comparing EEG/MEG waveforms through successive significant univariate tests: How bad can it be? Psychophysiology, 52(3), 440-443. https://doi.org/10.1111/psyp.12335

      (8) Figure 5 and matching analyses: There are much better tools than correlations to estimate connectivity and directionality. See for instance:

      Ince, R. A. A., Giordano, B. L., Kayser, C., Rousselet, G. A., Gross, J., & Schyns, P. G. (2017). A statistical framework for neuroimaging data analysis based on mutual information estimated via a Gaussian copula. Human Brain Mapping, 38(3), 1541-1573. https://doi.org/10.1002/hbm.23471

      (9) Pearson correlation is sensitive to other features of the data than an association, and is maximally sensitive to linear associations. Interpretation is difficult without seeing matching scatterplots and getting confirmation from alternative robust methods.

    1. Reviewer #3 (Public review):

      Summary:

      S. Keeley & collaborators propose a computational approach to infer time-varying latent variables directly from calcium traces (for instance, obtained with 2p imaging) without the need for deconvolving the traces into spike trains in a preliminary, independent step. Their approach rests on 1 of 3 families of latent models: GPFA, HMM and dynamical systems - which they augment with an observation model that maps latent variables to fluorescence traces. They validate their approach on simulated and real data, showing that the approach improves latent variable inference and model fitting, compared to more traditional approaches (although not directly compared with the 2-step one; see below). They provide a GitHub repository with code to fit their models (which I have not tested).

      Strengths:

      The approach is sound and well-motivated. The authors are specialists in latent variable models. The manuscript is succinct, well-written, and the figures are clear. I particularly liked the diversity of latent models considered, in particular latent models with continuous (GPFA) vs. discrete (HMM) dynamics, which are useful for characterizing different types of neural computations. The validation on both simulated and real data is convincing.

      Weaknesses:

      The main weakness that I see is that the approach is tested only on a single real dataset (odor response dataset). The other model fits are obtained from simulated data. While the results are convincing, it would be useful to see the approach tested on other datasets, for instance, datasets with different brain areas, different behavioral conditions, or different calcium indicators. This would help assess the generality of the approach and its robustness to different experimental conditions.

      The other points below mostly pertain to clarifications and possible extensions of the approach, and to simple model recovery experiments that would help quantify the advantage of the proposed approach over more traditional ones.

I have a question related to interpretability and diagnosis of model fits. One advantage of the two-step approach: (1) deconvolution => (2) latent variable inference, is that one can inspect the quality of the deconvolution step independently from the latent variable inference step. In the proposed approach, it seems more difficult to diagnose potential problems with model fitting. For instance, if the inferred latent variables are not interpretable, how can one determine whether this is due to a poor choice of latent model (e.g., an HMM with too few states), or a poor fit of the observation model (e.g., wrong parameters for the calcium dynamics)? Are there any diagnostic tools that could help identify potential problems with model fitting?

      Could the authors comment on whether their approach allows for instance to compare different forms of latent models (e.g., HMM vs. GPFA) in terms of model evidence, cross-validated log-likelihood or other model comparison metrics? This would be useful to quantitatively determine which type of latent dynamics is more appropriate for a given dataset.

      The HMM part reveals a pretty large number of states, with one state being interpretable (evoked response). Shouldn't we expect a simpler scenario, with 2 states? I know this is a difficult question that is more general and common with HMM approaches, but it would be useful to discuss this point. For instance, would a hierarchical HMM (with a smaller number of "super-states") be more appropriate here?

While it certainly makes sense that models accounting for the full transformation of latent => spikes => fluorescence data should outperform the two-step (1) deconvolution => (2) latent variable inference approach, the amount of improvement is not clear. A direct comparison (e.g., with parameter and model recovery metrics) between the two approaches on simulated data would be useful to quantify the advantage of the proposed approach over more traditional ones.

      It would be useful to discuss the possible extension of the approach to other types of data that are related to neural activity but have different observation models, e.g., voltage imaging, or neuromodulator sensors (e.g., GRAB-NE, dLight, etc). Do the authors see any specific challenges that would arise in these cases and that would need to be addressed in the future (other than changing the Poisson spiking part)?

    1. eLife Assessment

Insects can act as vectors of plant diseases, hence the study of insect-pathogen interactions is relevant for agriculture. This important study identifies in Diaphorina citri a dopamine receptor responsive to 'Candidatus Liberibacter asiaticus' infection, demonstrates direct regulation of this receptor by a microRNA, and integrates dopamine signaling into an established insect reproductive hormone framework. Multiple complementary experimental approaches convincingly support the findings, but key conclusions rely on correlative data and the mechanistic evidence for the proposed linear signaling cascade is incomplete. This work will be of interest for insect physiology and vector-pathogen biology, and more broadly for citrus agriculture.

    2. Reviewer #1 (Public review):

      I read this paper with great interest based on my experience in insect sciences. I have some minor comments (and recommendations) that I believe the authors should address.

      (1) The paper has an original biological question that is overly broad and mechanistically ambitious. The central biological question, namely how CLas infection enhances fecundity of Diaphorina citri via dopamine signaling, is clearly stated and well motivated by previous literature. However, my advice to the authors is that, while the general question is clear, the manuscript attempts to answer multiple mechanistic layers simultaneously. As a result, I feel that the biological narrative becomes diffuse, especially in later sections where DA, miRNA regulation, AKH signaling, and JH signaling are all proposed as parts of a single linear cascade. In summary, my key concern is that the paper often moves from correlation to causal hierarchy without fully disentangling whether these pathways act sequentially, in parallel, or redundantly. A more explicitly framed primary hypothesis (e.g., "DA-DcDop2 is necessary and sufficient for CLas-induced fecundity") may improve conceptual clarity.

      (2) On the novelty of the data, I feel they are moderately novel, with substantial confirmatory components. If I am correct, the novel contributions include the identification of DcDop2 as the DA receptor responsive to CLas infection in D. citri, the discovery that miR-31a directly targets DcDop2, which is supported by luciferase assays and RIP, and thirdly, the integration of dopamine signaling into the already-described CLas-AKH-JH-fecundity framework. My advice to the authors is to focus more on the manuscript's novelty, which lies more in pathway integration than in discovering fundamentally new biological phenomena. This is appropriate for a mechanistic paper, but should be framed as an extension of existing models rather than a paradigm shift.

(3) On the conclusions, I recommend that the authors soften their statements a little. I feel that there are some overstated or insufficiently supported claims. For instance, the assertion that CLas "hijacks" the DA-DcDop2-miR-31a-AKH-JH cascade implies direct pathogen manipulation, but no CLas-derived effector or mechanism is identified. Also, the model suggests a linear signaling hierarchy, but the data largely show correlation and partial dependency rather than strict epistasis. Third, the term "mutualistic interaction" may be too strong, as host fitness costs outside fecundity (e.g., longevity, immunity) are not evaluated. In conclusion, I confirm that the data support a functional association, but mechanistic causality and evolutionary interpretation are somewhat overstated.

    1. Reviewer #2 (Public review):

      Summary:

      The authors evaluate whether commonly used LLMs (ChatGPT, Claude and Gemini) can reconstruct signalling networks and predict effects of network perturbations, and propose a pipeline for benchmarking future models. Across three phenotypes (hypertrophy, fibroblast signalling, and mechanosignalling), LLMs capture upstream ligand-receptor interactions and conserved crosstalk but fail to recover downstream transcriptional programmes. Logic-based simulations show that LLM-derived networks underperform compared to manually curated models. The authors also propose that their pipeline can be used for benchmarking future models aimed at reconstructing signalling networks.

Strengths:

      The authors compare the outcomes from three LLMs with three manually curated and validated models. Additionally, they have investigated gene network reconstruction in the context of three distinct phenotypes. Using logic-based modelling, the authors assessed how LLM-derived networks predict perturbation effects, providing functional validation beyond network overlap.

      Weaknesses:

      The authors have used legacy models for all three LLMs, and the study would benefit from testing the current versions of the LLMs (ChatGPT 5.2, Claude 4.5 and Gemini 2.5). Additional metrics such as node coverage, node invention, direction accuracy and sign accuracy would be useful to make robust comparisons across models.

    1. that conversations about international migration and refugee policies might benefit from historical context regarding migration patterns, ethnic cultures, nativisms, and restrictions that have shaped the peopling of nations across the globe.

I found this particular portion to be surprising, not because of what is being said, but because of its continued relevancy. Despite the article being dated a little over 10 years old, the topic is hugely relevant, especially within our own community at this time. I think this alone speaks to the entire point of the article.

    2. reminding our neighbors that everything has a history.

I like this sentence because I do feel strongly about educating those around me. It is extremely important not only to educate yourself but to educate those around you and who you surround yourself with, because it is often easy to forget that everything and everyone does have a history, and the history of people is relevant.

    3. What matters is less the propriety of the topic than the acuity of the question.

I agree with this statement. I do believe that meaningful questions, which both sharpen your thinking and require a deeper level of understanding, are more important than comfort and "appropriateness". Especially with what's going on in Minneapolis right now, I think it's even more important to start asking questions that may make you uncomfortable but will help you improve your knowledge on certain topics and develop a better understanding of what is happening in our communities.

Don’t communicate with other people. If another person is detected during the exam, the student will automatically get a zero on Quizzes/Exams (zero can’t be dropped)

      I live with my family. What if I told them not to disturb me while I am taking the quiz, but they do it anyway? They will keep yelling until I answer. Am I allowed to tell them to leave me alone, or would I get a zero no matter what?

    1. basically, well-meaning liberal white people—are part of the problem in struggling for social justice.

It is an incredibly important position to know when to step up and when to step back. Step up in the face of oppression to point out the wrong, but step back so as to not overpower the voices of the oppressed.

But forces of oppression can be difficult to detect when you benefit from them (we call this a privilege hazard later in the book).

      Important

    3. result of many unnamed colleagues and friends who may or may not have considered themselves feminists.

      The casting of many ripples. Positions of low power but high influence are important to cast these ripples.

    4. major systems of oppression are interlocking

      Burn it all down.

      But realistically, we cannot. So what is the next best option? We are fighting the good fight now, but how many of us and how long will it take?

    1. Bukowskiesque

I had never heard of Charles Bukowski, which is probably shocking, but I think this is a perfect way of putting it. According to my quick Google search, he often depicted the lives of poor Americans and the depravity of urban living.

    2. It's shady and concentrated, a small staircase of sorts under cover of the weather, suitable for furtive transactions and exchanges.

I get the vibe from this. I had never heard the word furtive, but I understand that this describes a crowd of people I feel like I see on a lot of corners in Seattle.

    1. Although some texts may have been written years ago, they live in the present. This expression means that when you analyze a literary text such as a story, play, poem, or novel, you use a form of the present tense in your discussion. Narration in the story may be in the past tense—the narrator tells the story as though it has already happened—but your discussion of the literary work is done in the present tense. Characters do this or say that. The leaves fall or the wind is howling, even though in the text, the leaves fell and the wind was howling. Your discussion nevertheless remains in the present tense. Also, when discussing the author in relation to the literary text, use the present tense, even if the author is no longer living or wrote the text in the past.

      You must speak in the present tense when talking about literary works

Popular Now in News: 1. Trump nixes tariff threat in push for U.S. control of Greenland; 2. More than 50 dump truck loads of dirt were removed from his yard. Now, he has to put them back; 3. 'Canada lives because of the United States,' Trump says while jabbing Carney; 4. Trump reiterates desire to acquire Greenland, criticizes Europe, Carney, Biden in Davos speech; 5. U.S. officials say a Toronto man posed as a pilot for years, but not to fly the planes

      Bad practice: Each item in this list is presented without additional contextual information explaining why it is ranked or “popular.” Screen reader users may have difficulty understanding the purpose and ordering of this content.

      CEID100

    1. "incremental loading" (only moving new/changed data)

CDC is often associated with real-time streaming, but it is also used in batch loading to determine which pieces of data have changed and should be moved.
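The batch flavour of this idea can be sketched with a stored high-water mark: each run picks up only rows modified since the previous run, then persists the new mark. A minimal illustration; the row layout and all names here are hypothetical:

```python
def incremental_load(source_rows, last_watermark):
    """Select only rows modified since the previous batch run.

    `source_rows` is a list of dicts with a `modified_at` field (here a
    simple integer version/timestamp). Returns the changed rows plus the
    new watermark to persist for the next run.
    """
    changed = [r for r in source_rows if r["modified_at"] > last_watermark]
    new_watermark = max((r["modified_at"] for r in changed), default=last_watermark)
    return changed, new_watermark
```

In a real pipeline the watermark would live in a metadata table, but the core logic is the same: the batch job never rescans rows it has already loaded.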

    1. converting all dates to YYYY-MM-DD or all currencies to USD).

Makes sense in terms of tools that align the structure of data - kind of like Excel, just at insane scale
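The two standardisations quoted above (dates to YYYY-MM-DD, currencies to USD) are easy to sketch. This is a toy illustration only: the format list and exchange rates are made-up examples, not real pipeline configuration:

```python
from datetime import datetime

# Illustrative input formats and example rates; real pipelines cover many more.
KNOWN_FORMATS = ["%m/%d/%Y", "%d-%m-%Y", "%Y-%m-%d"]
RATES_TO_USD = {"USD": 1.0, "EUR": 1.08, "GBP": 1.27}

def normalize_date(raw):
    """Parse a date string in any known format and emit ISO YYYY-MM-DD."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")

def to_usd(amount, currency):
    """Convert an amount into USD using a fixed example rate table."""
    return round(amount * RATES_TO_USD[currency], 2)
```

The point is the same as in the quoted passage: downstream consumers see one canonical representation, regardless of how each source system formatted the value.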

    1. 'Twas right, said they, such birds to slay, That bring the fog and mist.

      I wonder what kind of birds these must be if they are "to slay" but I do think when they say "that bring fog and mist" it refers to when birds fly away from storms

    1. In sum, our findings suggest a major qualification of the authority transfer hypothesis (H5). The highest levels of politicization in the national electoral arena are not produced by the accumulated effects of authority transfers; rather they result from conflicts on membership. As we can see in the British case, these conflicts are not settled with a country's accession to the EU, they can be resuscitated at later stages of the integration process. Moreover, as the controversies on Turkey's EU membership in the mid-2000s demonstrate, membership conflicts can also be triggered by another country seeking membership in the EU. In this case, it is not national sovereignty in the first place, but national and European identity that is the cause of controversies

CONCLUDE THAT it is not the cumulative effect of handing over power that makes countries worried; it is simply membership writ large? Basically, conflicts over "authority transfers" are NOT put to bed once those transfers occur -> hesitancy about membership is ALWAYS a concern. ??

      Moreover, membership of OTHER states can spark politicisation (i.e., Turkey) QUESTION -> was this concern about economy, Islamophobia, concern about human rights?

    2. The importance of membership conflicts becomes fully apparent in the other politicized elections. In the two United Kingdom elections in 1974, it was domestic conflicts on British European Community membership. In the Swiss elections in the 1990s, Swiss membership of the EU and the country's bilateral treaties with the EU were the key issues in the campaigns. In Germany in 2005, the relatively high politicization of Europe was mostly due to conflicts over Turkey's EU membership. The Austrian election in 2002 does not represent a ‘post-Maastricht’ effect either. In this case, high politicization resulted from debates about both eastern enlargement and dissatisfaction with the EU caused by the sanctions imposed on the country as a response to the inclusion of the radical right FPÖ in government in 2000.

      UK example cont.

      UK hesitancy / politicisation dates back to the 1970s -> always hesitant to fully commit -> election issue in 1974, etc.

Whereas for OTHER, CONTINENTAL countries (Germany, France, Austria) -> only becomes politicised after the 2000s (too far away to avoid being categorised in the "Maastricht effect", but quite late overall).

    3. In our data, the ‘radical right hypothesis’ (H6) finds only partial support. There is evidence for it in Switzerland (and to some extent, in Austria and France); however, we also observed very high levels of politicization if there is a conflict between major mainstream parties on European issues (see Green-Pedersen, 2012). In both constellations, politicization is most likely if conflicts on European integration are framed in cultural terms by political parties; thus, the ‘cultural shift hypothesis’ (H7) is supported.

Which of the two politicisation CAUSES turned out to be true? External agitation by the far right for electoral gain, or debate / escalation between governing and opposition parties?

      Turns out a bit of both -> Has become partially politicised by Far Right preaching, but also from back and forth between governing parties / opposition.

What they DO find / conclude is the "cultural shift hypothesis" -> MEANING that regardless of WHO is politicising EU integration (major or minor parties, mainstream or radical) -> it is often done by FRAMING INTEGRATION IN CULTURAL TERMS (i.e., erasure of identity, incompatibility, etc.)

    4. As a result, we get a nuanced picture of the politicization of Europe in the electoral arena. We found remarkable differences in the level, timing and patterns of politicization across countries which reflect different national histories with regard to European integration, and different positions in the integration process. This holds in particular for the United Kingdom and Switzerland, where EU membership was, for decades, a major issue of domestic political conflict and polarization. In these countries, European issues are fully integrated into the agenda of domestic politics, and in times when Europe is on the national political agenda, we observe very high levels of politicization

OKAY -> so methodologically able to look at politicisation in these member states over a LONG PERIOD OF TIME (unlike other studies -> rectifies gap in the literature). Finds that politicisation in Member States (again, western EU) VARIES WIDELY AND IS DEPENDENT ON HISTORICAL CIRCUMSTANCES OF THAT COUNTRY -> especially the position of that country in the integration process.

In the UK and Switzerland, politicisation is FULL -> both were never fully integrated, therefore it WAS OFTEN AN EXPLICIT NATIONAL ELECTION ISSUE IN THOSE COUNTRIES. When it was "on the political agenda" at a given election time, there was (obviously) a high degree of politicisation -> but not necessarily true for other EU countries. Like, an integration issue might pop up in Germany (need to surrender more control) and it DOESN'T get politicised.

      Germany, France, and Austria all have LOWER THAN AVERAGE levels of politicisation for issue of EU integration -> more taken for granted -> more BIPARTISAN SUPPORT. ALTHOUGH IT HAS RECENTLY BECOME MORE POLITICISED (especially in 2020s I would imagine -> AfD, RN, etc.)

    1. (5.19) I initially was a little confused by this passage, as Confucius, when faced with positive examples of leadership, continuously asks what the person has done to be considered authoritative. But, looking back at earlier passages like (4.16), where Confucius says he has yet to meet an authoritative person who has given their full strength to authoritative conduct, (5.8), where Confucius acknowledges that even his most trustworthy of students cannot really be considered authoritative, and (6.7), where Confucius quantifies ren as a state that lasts for a certain period of time, it seems like authoritative conduct may be less of a thing that a person achieves once and then holds, and more like a goal that comes in moments. Doing authoritative things does not make one an authoritative person; it just means that in that moment, they were able to achieve authoritative conduct. Ren is something to aspire to, a goal that we can sometimes achieve and always should strive for, but it is unrealistic to expect it to be a permanent state.

    2. (9.3) This is a really interesting passage to me, as, throughout almost the entirety of these 10 books, Confucius has been adamant about the importance of tradition and keeping to tradition, especially when it comes to ritual propriety. Here, however, he praises the use of the silk cap rather than the traditional hemp cap in ritual propriety, due to the frugality of the silk. What's most striking about it is how little attention is brought to this difference, as it is treated as just another one of Confucius' teachings. But an explanation for this is perhaps seen at the end of this passage, as Confucius says that he kowtows upon entering a hall, observing tradition of ritual even though it isn't societally accepted anymore. So, here, Confucius, without explicitly stating it, shows that there is some flexibility when it comes to tradition, and that things must be judged on a case-by-case basis.

    3. (1.14) In this passage, Confucius uses the analogy of eating in order to convey the way in which the junzi has a love of learning. Confucius writes that exemplary persons eat not to be full nor for comfort, then switches to the topic of love of learning, illustrating that what he actually is teaching is not about eating, but instead, about the way an exemplary person should learn. Instead of learning for the simple purpose of gaining more knowledge or of trying to be educated and making their lives more comfortable, a truly exemplary person learns purely because they enjoy it, just like a person eating not to be full, but for the sensation and enjoyment of the taste. This does make me wonder if Confucius believes that there is any case where it is acceptable to learn simply for the sake of knowing more and becoming more well-rounded. Does this stop the learner from being an exemplary person? Or is it just that in that instance, they are still learning, and there is still much good to be gained from that, but it simply does not count as doing it just for the love of learning?

    4. (8.9) I believe that this quote (and overarching theme of this book) is saying that the ordinary person can be encouraged to follow what is considered the “right path” in morality, but it is not possible for them to fully understand it. People can still do what is morally right (or imitate good and moral behaviors) even without understanding the deeper reasons behind it that make it morally right. This quote by itself reads almost elitist, so I am curious what Confucius’s intentions were with wording it this way: does he truly believe that there are only some elite people who can understand morals while there are (a potentially underestimated number of) others who are clueless? If Confucius believes this to be the case, then does he believe that there are certain groups of people who need to be controlled for their own benefit? I personally think that, even if it is not ideal, it is good enough if people know what is morally right and wrong and act accordingly, even without true understanding. The system still works, and people can still cluelessly be morally good people. I am curious what people think is at stake here.

    5. (6.3) Throughout the Analects, Confucius emphasizes the importance of a love for learning, such as in this passage where he laments the death of Yan Hui, one of his disciples who was unique in his possession of this trait. Confucius identifies a love of learning as one of the most important virtues to cultivate, as it ensures that one never becomes complacent with their current state and continues to display humility and to pursue self-improvement. He seems to have a strict policy for what this entails, as he describes Yan Hui's love of learning as involving never taking out his anger on others or repeating a mistake. I am surprised that Confucius reports having but one disciple who fit his definition of loving learning, given that the virtue is so central to his philosophy.

    6. (4.1-4.3) Confucius opens by discussing "authoritative conduct" not as a trait that can be possessed, but as a community you choose to identify with. He argues that anyone who doesn't "seek to dwell among authoritative people" can't really be called wise. I understand this as an argument that character is actually a result of one's environment, rather than one's efforts, because the proper environment gives the individual a standard to strive towards. This theme of surrounding yourself with individuals who push you to be better is seen throughout the text as Confucius discusses what it means to be a son/daughter, leader, friend, and teacher. Ultimately, a core tenet of Confucius's writing is the importance of learning from and leading by example. Being surrounded by the exemplary provides a constant feedback loop of imitation, correction, and shame. This process is what Confucius believes makes one "exemplary" in his philosophy. Ultimately, the self-motivated struggle to improve is fueled by shame of failure, but this shame is only possible if one is surrounded by individuals who are wiser and more exemplary.

    7. (3.4) A connecting theme that I noticed throughout the books was the idea of prioritizing mindfulness and intention in ritual over the bells and whistles that make it impressive. This idea strikes me for two reasons. First, when I typically think of a government ceremony, I think of extravagant rituals and gaudy decorations. However, Confucius points out how this is not really a display of ritual propriety because focusing on the spectacle detracts from the intention of the ceremony. Second, I think it's interesting that this is a concept that we still wrestle with today. I immediately thought of the common phrase, "it's the thought that counts". Confucius is arguing that the optics of ritual are not as important as the deep presence it takes to make the ritual sincere. This notion is a striking contrast to the ancient China that I am familiar with, but I see how valuable this humility is in Confucius' philosophy regarding leadership.

    8. (3.7) Confucius argues here that a person's greatness is not necessarily defined just by success, but by following rituals. In the example of the archery ceremony, the competitor who is an exemplary person tries their utmost best, respects their fellow competitors, and takes seriously all other components/ceremonies of the competition. Confucius argues for a model of putting oneself fully into whatever they do and doing it the right way.

    9. This passage focuses on obeying and honoring one's parents. Even though it is not explicitly stated that the father's morals/ways were perfect, or even that they upheld Confucian values, the importance of respect for a person's father goes beyond the banalities of daily life. I agree that parents should be respected but wonder to what extent that appreciation can exist. Is a parent's duty simply to bring a child into the world? What if the father did wrong by his child? I am also wondering: why a three-year mourning period? I wonder how much of an innate obligation we have to our parents.

    1. List several characteristics of your own cultural background that may be different from the cultural background of some others on your campus.
           I used to live in Arizona for 9 years, but I was born in Pittsburgh. I lived mainly in a Spanish culture, never had to wear heavy clothing, and I visited California, where I saw a lot more cultural difference, mainly Asian and Spanish cultures. This may be different from another student at CCAC who was raised entirely in Pittsburgh.
      
    2. Write a description of someone who is of a different race from yourself but who may not be different ethnically.
                    Someone who is of a different race from me, so in this case I will choose Chinese, but who was born in America. So, that is an example of someone who is of a different race, but similar ethnically.
      
    3. List as many types of diversity as you can think of.
                Diversity in trigonometry, Diversity in angles, Diversity in rates of change, Diversity in topology, Diversity in Geometry, and Diversity in algebra.
      
                    Each one is a branch of the same tree, but each is a different form of it. Which metaphorically describes the entire article perfectly, humans can be different, but they are part of the main tree of humanity.
      
    1. Water also became an abiding symbol of what we speak of clinically as the guilt of the survivor.

      This stood out to me because of how it references guilt. This shift of focus to the survivors and what they had to deal with mentally is a huge part of the effect of the event. We focus a lot on the physical effects, but the mental effects and guilt were just as impactful.

    2. Many were already injured, and few survived. The rivers became choked with bodies—carrying them out to sea and then, when the tide turned, bringing some of them back again

      We have seen pictures of bodies that were left on land but I felt that this picture of bodies being carried out to sea provides a new perspective. I think it shows the lasting effects of this event because it says that the bodies would resurface as the tide turned.

    1. I think that’s the thing I would recommend for the [col-lege students] to do is to encourage the high school students to speak up, because they can have an idea that the university student has never heard or thought of.

      It's not just college students who need to encourage the younger generation to speak up and be more confident. Overall, everybody needs to encourage the younger generation to speak and be confident. College students can be influential since they were once in their shoes not too long ago, but the younger generation needs encouragement from more than just college students.

    2. Responding to calls for the field to smooth the transition between high school and college writing and promote college access for minoritized students

      The transition from high school style writing and college style writing is huge!! High school teaches you that college won't be much different, but I wish the difference were taught more. The reason they could not teach this is the possibility of scaring future students off, especially minority students.

    1. All I had heard about them was how poor they were, so that it had become impossible for me to see them as anything else but poor.

      First impressions of people, especially vocal impressions, are important when meeting new people, especially at a young age. They stick with you until you actually get to know the person and understand where they come from. If you truly believe something, then it will be hard for you to change your mind, as you'll find it impossible, just as she did.

    1. New analyses comparing pre-crisis and post-crisis elections would allow us to estimate more precisely how much change there has been in the level of Europeanization of national elections

      In future, need to precisely analyse data / voting records / etc. from PRE-crisis elections and POST-crisis ones.

      Again, same methodology in regards to individual preferences aligning w/ party manifesto material about EU integration DOES NOT APPLY to intervened-in countries -> that is, conclusions are very different -> party's discussion of EU material / integration did NOT influence vote in intervened-in countries, presumably regardless of how each voter's beliefs aligned w/ that given party

      Guess another way of putting it is that when parties ANNOUNCE loudly their EU integration stance, voters are less likely to vote for them if that stance is COUNTER to what voters believe, EVEN IF THOSE VOTERS ARE POLITICALLY / CLASS ALIGNED WITH THAT PARTY IN ALL OTHER RESPECTS -> if they announce it and it DOES align w/ voters, presumably make it more likely voters will also vote FOR that party, again even if they might disagree with that party on L/R political issues.

      Seems to imply, then, that EU is not just infiltrating itself into national elections (Europeanisation) but that it is MOST important issue for many (most?) voters.

    2. Even when we control for the perceived ideological distance to parties, there is a higher probability that citizens vote for parties that hold similar views on European integration.

      Seems simple / obvious, but important that they bring up people often vote for parties (national) that have SAME STANCE ON EU as them -> when, remember, for a long time EU membership was not seen as a left/right issue -> therefore demonstrates EITHER (I think, and this is a QUESTION) that: 1. National elections have become more Europeanised because voters vote for parties w/ EU platforms that align with their interests 2. Europeanisation has become a PARTISAN L/R issue and parties have adopted pro/anti EU stance accordingly -> therefore whether or not national party includes EU stance is irrelevant, because voters would vote for that party based on domestic L/R issues anyway. The party has just taken up Europeanisation because it has become polarised / feels right that a rightist party would be Eurosceptic. idk

    3. In contrast, in countries in which the government and their European representatives have a more limited influence on European policymaking, mostly due to the size of the country, national elections are considered less consequential in European terms and citizens will have more incentives to pay attention to other issues.

      Basically countries w/ GREATER influence in EU see their NATIONAL VOTE as contributing (marginally) to EU-wide policy, even if not in EP election -> National elections, then, are Europeanised because voters have "INTERNALISED" IDEA THAT THEY WILL INDIRECTLY INFLUENCE EU w/ NATIONAL VOTE (esp. Germany)

      Marginal / smaller countries have LESS Europeanised national elections -> see themselves as not influencing anything beyond their own borders

      QUESTION -> probably why UK issue was so severe in the end -> was a DOMINANT country in EU and so voters...well...but weren't they not thinking of how their vote would influence EP/Europe? idk.

    4. was that EU attitudes varied more among voters than among parties—which held more similar and favorable EU views—, implying that the EU was a “sleeping giant” that could be woken up if parties decided to politicize the issue.

      pre-existing scholarly analysis of EU issue as a "sleeping giant" -> i.e., an issue that is ignored by parties but which the public holds much interest in -> could be exploited electorally if woken up

    5. The electoral success of parties with a rhetoric of distrust toward the EU in national and regional elections suggests that European issues have an increased impact on the vote beyond the European Parliament elections and that, consequently, national elections are becoming more Europeanized (

      Again, literally arguing the opposite of Hix and Cunningham -> National elections are becoming MORE Europeanised -> not emergent EU issues, but rather EU as a whole becoming more important in nat context - Says austerity measures followed Eurozone crisis -> led to people suddenly becoming aware of what EU did for them. - Anti-EU parties are evidence of this (success on national level)

    1. As a long and violent abuse of power, is generally the Means of calling the right of it in question (and in Matters too which might never have been thought of, had not the Sufferers been aggravated into the inquiry) and as the King of England hath undertaken in his OWN RIGHT, to support the Parliament in what he calls THEIRS, and as the good people of this country are grievously oppressed by the combination, they have an undoubted privilege to inquire into the pretensions of both, and equally to reject the usurpation of either

      In the first line, Paine says that ideas that go against tradition usually come with the idea of resistance at first. But as time goes on, people start to question authority when it starts to become abusive. He says that both the British King and Parliament have been acting unfairly toward the American people. (Paine)

    1. You can’t make and hoard capital without stealing Land first.

      What if we understood sustainability not as just an environmental practice, but more of a political and economical one?

    1. Some social media sites only allow reciprocal connections, like being “friends” on Facebook Some social media sites offer one-way connections, like following someone on Twitter or subscribing to a YouTube channel. There are, of course, many variations and nuances besides what we mentioned above, but we wanted to get you started thinking about some different options.

      There are many social media connection pathways out there that we don’t always perceive as dangerous, but they can be. For example, even one-way connections like following someone on Instagram or subscribing to a YouTube channel can expose personal information or allow strangers to influence our opinions without us realizing it.

    2. With that in mind, you can look at a social media site and think about what pieces of information could be available and what actions could be possible. Then for these you can consider whether they are: low friction (easy) high friction (possible, but not easy) disallowed (not possible in any way)

      I found the discussion of affordances and friction especially thought-provoking, because design choices are not neutral—they actively guide user behavior. Features like infinite scroll reduce friction in a way that benefits engagement metrics, but from an ethical perspective (especially care ethics or virtue ethics), they can undermine users’ ability to rest, reflect, or disengage. This makes me think that “frictionless design” is not always ethically better, and sometimes intentional friction can actually support more responsible and humane use of social media.

    3. One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      Infinite scrolling removes the "stopping point" from the interface, changing not the functionality itself, but people's behavioral rhythm and self-control costs. It makes continuing to consume content the default option, thus treating attention as an extractable resource, which is a classic example of "design as governance."

    4. One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      I think infinite scroll is a classic example of “friction-reducing design,” but its impact is actually a bit scary. In the past, with formats like search results, you had to click “next page” after finishing one page. While this action was a bit cumbersome, it provided a pause point, reminding you, “Should I stop now?” Infinite scroll completely removes that barrier. You just keep scrolling down, and content automatically loads, making you completely unaware of how long you've been scrolling.

      I think this is also why scrolling through social media is so addictive: it's not because we genuinely want to look for that long, but because the design eliminates every opportunity to stop.

    5. Sometimes designers add friction to sites intentionally. For example, ads in mobile games make the “x” you need to press incredibly small and hard to press to make it harder to leave their ad:

      I find this example particularly relatable because I frequently encounter this issue myself when playing mobile games: the “X” button for closing ads is designed to be super tiny, making it incredibly difficult to tap. Sometimes you accidentally click into the ad page instead. On the surface, this seems like a minor design detail, but it's actually a deliberate tactic to increase friction, making it harder for users to leave the ad. For advertisers and platforms, this keeps users engaged longer and even generates accidental clicks, boosting revenue. But from the user's perspective, this design is downright annoying since it exploits our attention and clumsy interactions to “force” us into unwanted actions. I believe this goes beyond ordinary design; it's manipulative design.

    6. One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      From the perspective of the social media companies, I can see why they'd add the infinite scroll to their apps. It keeps the users from leaving the app and allows them to engage with more content, watch more ads, etc. But as a user I find the infinite scroll to be incredibly harmful, especially to children and mentally ill people. When you're stuck in a scrolling-trance, it can be hard to stop, and before you know it you've spent the entirety of your day scrolling on TikTok. One can become addicted to their phone, and although the health effects social media has on people aren't that well studied, it's easy to tell that long-term use of one's phone can negatively impact their health.

    1. between these two women: the one fair, young, pale, the other just as fair, but older, fiercer; the one a daughter, the other a mother; the one sweet, ignorant, passive, the other both artful and active; the one a sort of angel the other an undeniable witch.

      It depends on what storyline you are following. The Grimms' story is just called Snow White. But in the Disney rendition, the dwarfs play a larger role. I do agree, though, that it's more of the stepdaughter and stepmother feud and less of the dwarfs.

    2. the Grimm tale of "Little Snow White" dramatizes the essential but equivocal relationship between the angel-woman and the monster-woman.

      They're two sides of the same coin, both beautiful but representing different aspects of female power and threat, with the Queen's vanity and Snow White's beauty driving the central conflict of good vs. evil and societal roles.

    1. The Great Migration brought blacks further afield throughout the country, including to the Midwest. In the 1970s, Doris Larkins introduced Juneteenth to Wichita, Kansas, getting it recognized as a municipal celebration.

      This shows the Great Migration didn't just move people but moved cultures and spread them around the US.

    2. Watch Night began as the most widely recognized commemoration, but since the 1980s Juneteenth has spread more widely and gained greater national observance.

      This shows how Juneteenth grew beyond a regional tradition and became more widely recognized across the United States in recent decades.

    1. but cannot be linked by particularly efficient voice leading; the same holds for {C, G, Af} and {G, Af, Df}

      Showing that the voice leadings that the Tonnetz represents are not always efficient.

    1. Reviewer #1 (Public review):

      Summary:

      The authors use methylphenidate (MPH) administration after learning a Pavlovian-to-instrumental transfer (PIT) task to parse decision making from instrumental influences. While the main pharmacological effects were null, individual differences in working memory ability moderated the tendency of MPH to boost cognitive control in order to override PIT-biased instrumental learning. Importantly, this working memory moderator had symmetrical effects in appetite and aversive conditions, and these patterns replicated within each valence condition across different values of gain/loss (Fig S1c), suggesting a reliable effect that is generalized across instances of Pavlovian influence.

      Strengths:

      The idea of using pharmacological challenge after learning but prior to transfer is a novel technique that highlights the influence of catecholamines on the expression of learning under Pavlovian bias, and importantly it dissociates this decision feature from the learning of stimulus-outcome or action-outcome pairings.

      Comments on revisions:

      I have no further recommendations or concerns.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      The authors use methylphenidate (MPH) administration after learning a Pavlovian to instrumental transfer (PIT) task to parse decision making from instrumental influences. While the main effects were null, individual differences in working memory ability moderated the tendency of MPH to boost cognitive control in order to override PIT-biased instrumental learning. Importantly, this working memory moderator had symmetrical effects in appetite and aversive conditions, and these patterns replicated within each valence condition across different values of gain/loss (Fig S1c), suggesting a reliable effect that is generalized across instances of Pavlovian influence.

      Strengths:

      The idea of using pharmacological challenge after learning but prior to transfer is a novel technique that highlights the influence of catecholamines on the expression of learning under Pavlovian bias, and importantly it dissociated this decision feature from the learning of stimulus-outcome or action-outcome pairings.

      We thank the reviewer for highlighting the timing of the pharmacological intervention as a strength for this study and for the suggested improvements for clarification.

      Weaknesses:

      While the report is largely straightforward and clearly written, some aspects may be edited to improve the clarity for other readers.

      (1) Theoretical clarity. The authors seem to hedge their bets when it comes to placing these findings within a broader theoretical framework.

      Our findings ask for a revision of theories on how catecholamines are involved in the instantiation of Pavlovian biases in decision making. The reviewer rightly notices that we offer three routes to modify current theory to be able to incorporate our findings. Briefly, these routes discuss catecholaminergic modulation of Pavlovian biases (i) through modulation of the putative striatal ‘origin’ of Pavlovian biases, (ii) through top-down control, primarily relying on prefrontal processes, and (iii) a combination of the two, where catecholamines regulate the balance between these striatal and frontal processes.

      Given the systemic nature of the pharmacological manipulation, we cannot dissociate between these three accounts. We believe that discussing these possible explanations enriches our Discussion and strengthens our recommendation in the ultimate paragraph to use pharmacological neuroimaging studies to arbitrate between these options. In the revision, we have made this line of reasoning more clear, in part by adding guiding titles to the Discussion section and adding a summary paragraph in the Discussion (Discussion, page 9-12).

      (2) Analytic clarity: what's c^2?

      C^2 was a technical PDF conversion error: all chi-squares (Χ2) had been converted to C2. This is now corrected in our revision.

      Reviewer #2 (Public review):

      Summary:

      In this study, Geurts et al. investigated the effects of the catecholamine reuptake inhibitor methylphenidate (MPH) on value-based decision making using a combination of aversive and appetitive Pavlovian to Instrumental Transfer (PIT) in a human cohort. Using an elegant behavioural design they showed valence- and action-specific effects of Pavlovian cues on instrumental responses. Initial analyses show no effect of MPH on these processes. However, the authors performed a more in-depth analysis and demonstrated that MPH actually modulates PIT in an action-specific manner depending on individual working memory capacities. The authors interpret that as an effect on cognitive control of Pavlovian biasing of actions and decision making more than an invigoration of motivational biases.

      Strengths:

      A major strength of this study is its experimental design. The elegant combination of appetitive and aversive Pavlovian learning with approach/avoidance instrumental actions allows one to precisely investigate the different modulations of value-based decision making depending on the context and environmental stimuli. Importantly, MPH is only administered after Pavlovian and instrumental learning, restricting the effect to PIT performance only. Finally, the use of a placebo-controlled crossover design allows within-subject comparisons between the PIT effect under placebo and MPH and the investigation of the relationships between working memory abilities, PIT and MPH effects.

      We thank the reviewer for highlighting the experimental design as a strength for this study and the suggested improvements for clarification.

      Weaknesses:

      As authors stated in their discussion, this study is purely correlational and their conclusions could be strengthened by the addition of interesting (but time- and resource-consuming) neuroimaging work.

      We employ a pharmacological intervention within a randomized placebo controlled cross-over design, which allows for causal inferences with respect to the placebo-controlled intervention. Thus, the reported interactions of interest include correlations, but these are causally dependent on our intervention.

      Perhaps the reviewer refers to the implications of our findings for hypotheses regarding neural implementation of Pavlovian bias-generation. Indeed, based on our data we are not able to arbitrate between frontal and striatal accounts, due to the systemic nature of the pharmacological intervention. Thus, we agree with the reviewer that neuroimaging (in combination with for example brain stimulation) would be a valuable next step to identify the neural correlates to these pharmacological intervention effects, to dissociate between frontal and striatal basis of the effects. In the revision, as per our reply to reviewer 1, we have made this line of reasoning more clear, in part by adding guiding titles to the Discussion section and adding a summary paragraph in the Discussion (Discussion, page 9-12).

      The originality of this work compared to their previous published work using the same cohort could also be clarified at different stages of the article, as I initially wondered what was really novel. This point is much clearer in the discussion section.

      As recommended, we brought forward parts of the Discussion that clarify the originality of the current experiment to the introduction (page 4/5) and result section (page 8).

      A point which, in my opinion, really requires clarification is when the working memory performance presented in Figure 2B has been determined. Was it under placebo (as I would guess) or under MPH? If it is the former, it would be also interesting to look at how MPH modulates working memory based on initial abilities.

      We now clarified that working memory span was assessed for all participants on Day 2 prior to the start of instrumental training (as illustrated in figure 1A). Importantly, this was done prior to ingestion of the drug or placebo (which subjects received after Pavlovian training, which followed the instrumental training). This design also precludes an assessment of the effects of MPH on working memory capacity.

      A final point is that it could be interesting to also discuss these results, not only regarding dopamine signalling, but also including potential effect of MPH on noradrenaline in frontal regions, considering the known role of this system in modulating behavioural flexibility.

      We indeed focus our Discussion more on dopamine than on noradrenaline. Our revision now also discusses noradrenaline in light of our frontal control hypothesis and the recommendation, in future studies, to use a multi-drug design, incorporating, for example, a session with the drug atomoxetine, which modulates cortical catecholamines, but not striatal dopamine (Discussion, page 12).

      Reviewer #3 (Public review):

The manuscript by Geurts and colleagues studies the effects of methylphenidate on Pavlovian-to-instrumental transfer in humans and demonstrates that the effects of the drug depend on the baseline working memory capacity of the participants. The experiment used a well-established cognitive task that allows measuring the effects of Pavlovian cues predicting monetary wins and losses on instrumental responding in two different contexts, namely approach and withdraw. By administering the drug after participants went through the instrumental and Pavlovian learning phases of the experiment, the authors limited the effects of the drug to the transfer phase in extinction. This allowed the authors to make inferences about the invigorating effects of the cues independently from any learning bias. Moreover, the authors employed a within-subject design to study the effect of the drug on 100 participants, which also allows detection of continuous between-subject relationships with covariates such as working memory capacity.

The study replicates previous findings using this task, namely that appetitive cues promote active responding and aversive cues promote passive responding in an approach instrumental context, whereas the effect of the cues reverses in a withdraw instrumental context. The results of the methylphenidate manipulation show that the drug decreases the effects of the Pavlovian cues on instrumental responding in participants with low working memory capacity but increases the Pavlovian effects in participants with high working memory capacity. Importantly, in the latter group, methylphenidate increases the invigorating effect of appetitive Pavlovian cues on active approach and aversive Pavlovian cues on active withdrawal, as well as the inhibitory effects of aversive Pavlovian cues on active approach and appetitive Pavlovian cues on active withdrawal. These results cannot be explained if catecholamines are involved in Pavlovian biases solely by modulating behavioral invigoration driven by the anticipation of reward and punishment in the striatum, as such an account cannot explain the reversal of the effect of a valenced cue on vigor depending on the instrumental context.

      In general, I find the methods of this study very robust and the results very convincing and important. However, I have some concerns:

We thank the Reviewer for highlighting the robustness of the methods and the importance of the results. We are glad to briefly address the concerns here and have incorporated them into our revision.

      I am not convinced that the inclusion of impulsivity scores in the logistic mixed model to analyze the effects of methylphenidate on PIT is warranted. The authors do not show whether inclusion of this covariate is justified in terms of BIC. Moreover, they include this covariate but do not report the effects. Finally, it is possible that impulsivity is correlated with working memory capacity. In that case, multicollinearity may impact the estimation of the coefficient estimates and may inflate the p-values for the correlated covariates. Are the reported results robust when this factor is not included?

With regard to the inclusion of impulsivity, we would first like to mention that this inclusion in our analyses was planned a priori and was therefore consistently implemented in the other reports resulting from the overarching study (Froböse et al., 2018; Cook et al., 2019; Rostami Kandroodi et al., 2021), especially the study to which the current report is an eLife Research Advance (Swart et al., 2017). Moreover, we preregistered both working memory span and impulsivity as potential factors (under secondary measures) that could mediate the effects of catecholamines (see https://onderzoekmetmensen.nl/nl/trial/26989). The inclusion of working memory span was based on evidence from PET imaging studies demonstrating a link with dopamine synthesis capacity (Cools et al., 2008; Landau et al., 2009), whereas the inclusion of trait impulsivity was based on evidence from other PET imaging studies showing a link with dopamine (auto)receptor availability (Buckholtz et al., 2010; Kim et al., 2014; Lee et al., 2009; Reeves et al., 2012). Although there was no significant improvement for the model with impulsivity compared with the model without impulsivity, we feel that we should follow our a priori established analyses.

      We can confirm that impulsivity and working memory were not correlated in this sample (r98=-0.16, p=0.88), which rules out multicollinearity.
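The logic of this multicollinearity check can be illustrated with a minimal sketch (not the study's data: the variable names, distributions, and seed below are our own illustrative assumptions). With two covariates, a near-zero Pearson correlation implies a variance inflation factor close to 1, meaning coefficient estimates and p-values are not inflated:

```python
import numpy as np

# Synthetic stand-ins for the two covariates (n = 100 participants)
rng = np.random.default_rng(seed=42)
n = 100
wm_span = rng.normal(loc=5.0, scale=1.0, size=n)        # hypothetical listening-span scores
impulsivity = rng.normal(loc=60.0, scale=10.0, size=n)  # hypothetical trait-impulsivity scores

# Pearson correlation between the two covariates
r = np.corrcoef(wm_span, impulsivity)[0, 1]

# For a two-covariate model, the variance inflation factor is 1 / (1 - r^2);
# values near 1 indicate multicollinearity is not a concern.
vif = 1.0 / (1.0 - r**2)
```

Because the two synthetic covariates are generated independently, `r` stays small and `vif` stays close to 1, mirroring the situation reported for this sample.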

Most importantly, the results are robust to excluding impulsivity scores, as evidenced by a significant four-way interaction in the omnibus GLMM without impulsivity (Action Context x Valence x Drug x WM span: Χ<sup>2</sup> = 9.5, p = 0.002). We have now added this text to the Supplemental Results: Control analyses, page 28.

      The authors state that working memory capacity is an established proxy for dopamine synthesis capacity and cite some studies supporting this view. However, the authors omit a recent reference by van den Bosch et al that provides evidence for the absence of links between striatal dopamine synthesis capacity and working memory capacity. The lack of a robust link between working memory capacity and dopamine synthesis capacity in the striatum strengthens the alternative explanations of the results suggested in the discussion.

We agree with the Reviewer that the lack of a robust link between working memory capacity and dopamine synthesis capacity in the striatum, as measured with [<sup>18</sup>F]-FDOPA PET imaging, lends support to the proposed hypothesis, which incorporates a broader perspective on Pavlovian bias generation than the dopaminergic direct/indirect pathway account (although it is possible that the association will hold in a larger sample when synthesis capacity is measured with [<sup>18</sup>F]-FMT PET imaging, which is sensitive to a different component of the metabolic pathway). We have now incorporated in our revision the findings from our group reported in van den Bosch et al. (2022).

      See Supplemental methods 2: Working memory and impulsivity assessment, page 26.

**Recommendations for the authors:**

      Reviewer #1 (Recommendations for the authors):

      (1) Theoretical clarity. Some aspects of the paper are ideally clear: Figure 1 clearly explains the paradigm. The general take-home message is clearly described in the last line of the abstract, the last line of the introduction, the first line of the discussion, and throughout other places in the discussion. Yet the authors seem to hedge their bets when it comes to placing these findings within a broader theoretical framework.

The discussion includes many possible theoretical interpretations of the findings, which is laudable, but many readers may get lost in this multitude (particularly anyone who isn't an RL/DA aficionado). The group's prior work (i.e. striatal hypothesis) is first described, followed by a rather complex breakdown of valence-action tendencies, then the seemingly preferred explanation for the current study (i.e. cognitive control hypothesis) is advanced as "an alternative account ...". This is followed by a third, more complex idea (i.e. cortico-striatal balance hypothesis), then the paper ends. A reader may be forgiven for skimming through this discussion and not having a clear idea of how to frame these effects. I think some subheaders would help, as well as clearer labeling of the theoretical interpretations in line with a more authoritative description of the author's preferred interpretation of the empirical effects.

Our findings call for a revision of theories on how catecholamines are involved in the instantiation of Pavlovian biases in decision making. The Reviewer rightly notices that we offer three routes to modify current theory so as to incorporate our findings. Briefly, these routes concern catecholaminergic modulation of Pavlovian biases (i) through modulation of the putative striatal 'origin' of Pavlovian biases, (ii) through top-down control, primarily relying on prefrontal processes, and (iii) through a combination of the two, where catecholamines regulate the balance between these striatal and frontal processes.

Given the systemic nature of the pharmacological manipulation, we cannot dissociate between these three accounts. We believe that discussing these possible explanations enriches our Discussion and strengthens our recommendation in the final paragraph to use pharmacological neuroimaging studies to arbitrate between these options. In the revision, we have made this line of reasoning clearer, in part by adding guiding titles to the Discussion section and adding a summary paragraph in the Discussion (Discussion, pages 9-12).

      (2) All statistical effects are presented as c^2 with no df. The methods only describe LMER and make no mention of what the c^2 measure represents.

C^2 appears to be a technical PDF conversion error: all chi-squares (Χ<sup>2</sup>) were converted to C2. This is now corrected in our revision.

      Reviewer #2 (Recommendations for the authors):

      Few minor points:

      Figure 2A is not cited in the text I think

      Checked and changed.

Figure 2C: "C" is not present in the figure. Also, I could not see the data corresponding to the MPH-Approach context in the Neutral Pavlovian condition, but I think it is probably masked by another curve.

      Checked and changed. Indeed, the one curve is masked by the other curve.

As I stated in the public review, a clarification or more detailed analysis of working memory performance, depending on whether it was measured under MPH or placebo, could be a plus.

      Changed this (see public review reply).

      I did not see any statement about the availability of data but I may have missed it.

      Yes, the statement can be found:

      Methods, page 13: Data and code for the study are freely available at https://data.ru.nl/collections/di/dccn/DSC_3017031.02_734.

      Reviewer #3 (Recommendations for the authors):

      The authors should check that inclusion of impulsivity in the logistic mixed model is justified and if it is justified make sure that multicollinearity is not problematic.

      See answer to public review for convenience reiterated below:

With regard to the inclusion of impulsivity, we would first like to mention that this inclusion in our analyses was planned a priori and was therefore consistently implemented in the other reports resulting from the overarching study (Froböse et al., 2018; Cook et al., 2019; Rostami Kandroodi et al., 2021), especially the study to which the current report is an eLife Research Advance (Swart et al., 2017). Moreover, we preregistered both working memory span and impulsivity as potential factors (under secondary measures) that could mediate the effects of catecholamines (see https://onderzoekmetmensen.nl/nl/trial/26989). The inclusion of working memory span was based on evidence from PET imaging studies demonstrating a link with dopamine synthesis capacity (Cools et al., 2008; Landau et al., 2009), whereas the inclusion of trait impulsivity was based on evidence from other PET imaging studies showing a link with dopamine (auto)receptor availability (Buckholtz et al., 2010; Kim et al., 2014; Lee et al., 2009; Reeves et al., 2012). Although there was no significant improvement for the model with impulsivity compared with the model without impulsivity, we feel that we should follow our a priori established analyses.

      We can confirm that impulsivity and working memory were not correlated in this sample (r98=-0.16, p=0.88), which rules out multicollinearity.

Most importantly, the results are robust to excluding impulsivity scores, as evidenced by a significant four-way interaction in the omnibus GLMM without impulsivity (Action Context x Valence x Drug x WM span: Χ<sup>2</sup> = 9.5, p = 0.002). We have now added this text to the Supplemental Results: Control analyses, page 28.

I would recommend that the authors make clear that the effects of methylphenidate are dependent on working memory capacity in the first sentence of the penultimate paragraph of the introduction on page 4.

      Changed this accordingly, see Introduction, page 5.

      I would make sure that the text in the figures is readable without needing to enlarge the figures. I would also highlight the significant effects in the figures.

      We changed the font size accordingly and added significance statements to the caption, because depicting the significance of a four-way interaction including one continuous variable is not straightforward.

The distributions of p(Go) by condition, such as in figure 1D or 2A, are very intuitive. Figure 2B is very informative as it shows the continuous effects of working memory capacity on the PIT effect. I would add (in figure 2 or in the supplement) a plot of p(Go) with a tertile split based on working memory. Considering that the corresponding analysis is being reported, having the plot would strengthen and simplify the understanding of the results.

The continuous effects of working memory are based on listening-span values ranging from 2.5 to 7 in steps of 0.5, resulting in 10 different values. A tertile split would bin these into two bins of three values and one bin of four values. Given that all of the data points for this tertile split are already presented in the current figures, we strongly prefer not to include this additional figure.
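The counting argument above can be made concrete with a short sketch (NumPy is used purely for illustration; this is not part of the original analysis):

```python
import numpy as np

# The 10 possible listening-span values: 2.5, 3.0, ..., 7.0
spans = np.arange(2.5, 7.5, 0.5)

# A tertile split of 10 discrete values cannot yield three equal bins:
# it produces one bin of four values and two bins of three.
tertiles = np.array_split(spans, 3)
bin_sizes = [len(b) for b in tertiles]  # [4, 3, 3]
```

This shows why a tertile split is an awkward fit for this discrete measure, supporting the authors' preference for the continuous analysis.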

      I would add some sentences in the results section (and maybe in the discussion if needed) addressing the results that the effect of Valence by drug by WM span is only significant in the withdrawal context but not in the approach context.

      We now added an emphasis on the specifically significant drug effects in withdrawal in the Results section, page 8.

    1. Few Japanese wished to dwell on the madness and misery of the war, and the hibakusha—the survivors of the atomic bombs—were essentially stigmatized or ignored until the late 1950s and early 1960s.

      This was upsetting. You’d expect survivors to get support, but instead they were pushed aside. It shows how trauma doesn’t just come from the event itself, but also from how society treats you afterward.

    2. By the time Hiroshima and Nagasaki were bombed in August 1945, over sixty Japanese cities had been targeted with “conventional” napalm fire-bombing.

      This really puts things into perspective. We always focus on the atomic bombs, but Japan had already been dealing with a lot of issues before that. It makes the situation feel even more overwhelming.

    1. It produces knowledge, but without necessarily deepening human understanding. We don’t really know how AI reaches its conclusions – even the programmers admit as much. Nor can we verify its reasoning against clear, objective criteria. So when we follow AI’s advice, we are not guided by reason. We are back in the realm of faith. In dubio pro machina: when in doubt, trust the machine – that may become our future guiding principle.

      We trust machines more than our instincts.

    2. But artificial intelligence threatens to become our new “other” – a silent authority that guides our thoughts and actions. We are in danger of ceding the hard-won courage to think for ourselves – and this time, not to gods or kings, but to code.

      Our brains are shrinking; machines are doing the work for us

    1. 3 But even Titus, who was with me, was not compelled to be circumcised

      Joe, Shay, Josh, Chase

      1. Gentile believers are not required to follow Jewish law. Early Christianity wrestled with identity boundaries; circumcision functioned as a mark of belonging.
    1. Figure 6.17 Competitive and noncompetitive inhibition affect the reaction's rate differently. Competitive inhibitors affect the initial rate but do not affect the maximal rate; whereas, noncompetitive inhibitors affect the maximal rate.

Makes sense, because more substrate will cancel out the competitive inhibitor's effect.

    1. Author response:

      The following is the authors’ response to the original reviews

      eLife Assessment

      This is a valuable polymer model that provides insight into the origin of macromolecular mixed and demixed states within transcription clusters. The well-performed and clearly presented simulations will be of interest to those studying gene expression in the context of chromatin. While the study is generally solid, it could benefit from a more direct comparison with existing experimental data sets as well as further discussion of the limits of the underlying model assumptions.

      We thank the editors for their overall positive assessment. In response to the Referees’ comments, we have addressed all technical points, including a more detailed explanation of the methodology used to extract gene transcription from our simulations and its analogy with real gene transcription. Regarding the potential comparison with experimental data and our mixing–demixing transition, we have added new sections discussing the current state of the art in relevant experiments. We also clarify the present limitations that prevent direct comparisons, which we hope can be overcome with future experiments using the emerging techniques.

      Reviewer #1 (Public Review):

      This manuscript discusses from a theory point of view the mechanisms underlying the formation of specialized or mixed factories. To investigate this, a chromatin polymer model was developed to mimic the chromatin binding-unbinding dynamics of various complexes of transcription factors (TFs).

      The model revealed that both specialized (i.e., demixed) and mixed clusters can emerge spontaneously, with the type of cluster formed primarily determined by cluster size. Non-specific interactions between chromatin and proteins were identified as the main factor promoting mixing, with these interactions becoming increasingly significant as clusters grow larger.

      These findings, observed in both simple polymer models and more realistic representations of human chromosomes, reconcile previously conflicting experimental results. Additionally, the introduction of different types of TFs was shown to strongly influence the emergence of transcriptional networks, offering a framework to study transcriptional changes resulting from gene editing or naturally occurring mutations.

      Overall I think this is an interesting paper discussing a valuable model of how chromosome 3D organisation is linked to transcription. I would only advise the authors to polish and shorten their text to better highlight their key findings and make it more accessible to the reader.

      We thank the Referee for carefully reading our manuscript and recognizing its scientific value. As suggested, we tried to better highlight our key findings and make the text more accessible while addressing also the comments from the other Referees.

      Reviewer #2 (Public Review):

      Summary:

With this report, I suggest what are, in my opinion, crucial additions to the otherwise very interesting and credible research manuscript "Cluster size determines morphology of transcription factories in human cells".

      Strengths:

The manuscript in itself is technically sound, the chosen simulation methods are completely appropriate, the figures are well-prepared, and the text is mostly well-written, save for a few typos. The conclusions are valid and would represent a valuable conceptual contribution to the field of clustering, 3D genome organization, and gene regulation related to transcription factories, which continues to be an area of most active investigation.

      Weaknesses:

However, I find that the connection to concrete biological data is weak. This holds especially given that the data needed to critically assess the applicability of the derived cross-over with factory size are, in fact, available for analysis, and the experiments suggested in the Discussion section have actually been done, so their results can be exploited. In my judgement, unless these additional analyses are added to a level at which crucial predictions on TF demixing and transcriptional bursting upon TU clustering can be tested, the paper is better suited to a theoretical biophysics venue than to a biology journal such as eLife.

We thank the Reviewer for their positive assessment of the soundness of our work and its contribution to the field. We have added a paragraph to the Conclusions highlighting the current state of experimental techniques and outlining near-term experiments that could be extended to test our predictions. We also emphasise that our analysis builds on state-of-the-art polymer models of chromatin and on quantitative experimental datasets, which we used both to construct the model and to validate its outcomes (gene activity). We hope this strengthened link to experiment will catalyse further studies in the field.

      Major points:

(1) My first point concerns terminology. The Merriam-Webster dictionary describes morphology as the study of structure and form. In my understanding, none of the analyses carried out in this study actually address the form or spatial structuring of transcription factories. I see no aspects of shape, only size. Unless the authors want to assess actual shapes of clusters, I would recommend to instead talk about only their size/extent. The title is, by the same argument, in my opinion misleading as to the content of this study.

We agree with the Referee that the title could be misleading. In our study we characterized cluster size, which is a morphological descriptor, and cluster composition, which is not morphology per se but is used in the community in a broader sense. Nevertheless, to strengthen the message, we have changed the title to: "Cluster size determines internal structure of transcription factories in human cells".

(2) Another major conceptual point is the choice of how a single TF:pol particle in the model relates to actual macromolecules that undergo clustering in the cell. What about the fact that even single TF factories still contain numerous canonical transcription factors, many of which are also known to undergo phase separation? Mediator, CDK9, Pol II, just to name a few. This alone already represents phase separation under the involvement of different species, which must undergo mixing. This is conceptually blurred with the concept of gene-specific transcription factors that are recruited into clusters/condensates due to sequence-specific or chromatin-epigenetic-specific affinities. Also, the fact that even in a canonical gene with a "small" transcription factory there are numerous clustering factors takes even the smallest factories into a regime of several tens of clustering macromolecules. It is unclear to me how this reality of clustering and factory formation in the biological cell relates to the cross-over that occurs at approximately n=10 particles in the simulations presented in this paper.

This is a good point. However, in our case we can look either at clustering transcription factors or at transcription units. In an experimental situation, transcription units could be "coloured", or assigned different types, by looking at different cell types, so that they can be classified as housekeeping (i.e., cell-type independent) or cell-type specific. This is similar to how DHS can be clustered. In this way, the mixing or demixing state can be identified by looking at the type of transcription unit, removing any ambiguity due to the fact that the same protein may participate in different TF complexes.

      (3) The paper falls critically short in referencing and exploiting for analysis existing literature and published data both on 3D genome organization as well as the process of cluster formation in relation to genomic elements. In terms of relevant literature, most of the relevant body of work from the following areas has not been included:

      (i) mechanisms of how the clustering of Pol II, canonical TFs, and specific TFs is aided by sequence elements and specific chromatin states

      (ii) mechanisms of TF selectivity for specific condensates and target genomic elements

      (iii) most crucially, existing highly relevant datasets that connect 3D multi-point contacts with transcription factor identity and transcriptional activity, which would allow the authors to directly test their hypotheses by analysis of existing data

      Here, especially the data under point (iii) are essential. The SPRITE method (cited but not further exploited by the authors), even in its initial form of publication, would have offered a data set to critically test the mixing vs. demixing hypothesis put forward by the authors. Specifically, the SPRITE method offers ordered data on k-mers of associated genomic elements. These can be mapped against the main TFs that associate with these genomic elements, thereby giving an account of the mixed / demixed state of these k-mer associations. Even a simple analysis sorting these associations by the number of associated genomic elements might reveal a demixing transition with increasing association size k. However, a newer version of the SPRITE method already exists, which combines the k-mer association of genomic elements with the whole transcriptome assessment of RNAs associated with a particular DNA k-mer association. This can even directly test the hypotheses the authors put forward regarding cluster size, transcriptional activation, correlation between different transcription units’ activation etc.

To continue, the Genome Architecture Mapping (GAM) method from Ana Pombo's group has also yielded data sets that connect the long-range contacts between gene-regulatory elements to the TF motifs involved in these contacts, and even provides ready-made analyses that assess how mixed or demixed the TF composition at different interaction hubs is. I do not see why this work and data set is not even acknowledged? I also strongly suggest to analyze, or if they are already sufficiently analyzed, discuss these data in the light of 3D interaction hub size (number of interacting elements) and TF motif composition of the involved genomic elements.

      Further, a preprint from the Alistair Boettiger and Kevin Wang labs from May 2024 also provides direct, single-cell imaging data of all super-enhancers, combined with transcription detection, assessing even directly the role of number of super-enhancers in spatial proximity as a determinant of transcriptional state. This data set and findings should be discussed, not in vague terms but in detailed terms of what parts of the authors’ predictions match or do not match these data.

      For these data sets, an analysis in terms of the authors’ key predictions must be carried out (unless the underlying papers already provide such final analysis results). In answering this comment, what matters to me is not that the authors follow my suggestions to the letter. Rather, I would want to see that the wealth of available biological data and knowledge that connects to their predictions is used to their full potential in terms of rejecting, confirming, refining, or putting into real biological context the model predictions made in this study.

      References for point (iii):

      - RNA promotes the formation of spatial compartments in the nucleus https://www.cell.com/cell/fulltext/S0092-8674(21)01230-7?dgcid=raven_jbs_etoc_email

      - Complex multi-enhancer contacts captured by genome architecture mapping https://www.nature.com/articles/nature21411

      - Cell-type specialization is encoded by specific chromatin topologies https://www.nature.com/articles/s41586-021-04081-2

      - Super-enhancer interactomes from single cells link clustering and transcription https://www.biorxiv.org/content/10.1101/2024.05.08.593251v1.full

For point (i) and point (ii), the authors should go through the relevant literature on Pol II and TF clustering, how this connects to genomic features that support the cluster formation, and also the recent literature on TF specificity. On the last point, TF specificity, especially the groups of Ben Sabari and Mustafa Mir have presented astonishing results that seem highly relevant to the Discussion of this manuscript.

      We appreciate the Reviewer’s insightful suggestion that a comparison between our simulation results and experimental data would strengthen the robustness of our model. In response, we have thoroughly revised the literature on multi-way chromatin contacts, with particular attention to SPRITE and GAM techniques. However, we found that the currently available experimental datasets lack sufficient statistical power to provide a definitive test of our simulation predictions, as detailed below.

      As noted by the Reviewer, SPRITE experiments offer valuable information on the composition of high-order chromatin clusters (k-mers) that involve multiple genomic loci. A closer examination of the SPRITE data (e.g., Supplementary Material from Ref. [1]) reveals that the majority of reported statistics correspond to 3-mers (three-way contacts), while data on larger clusters (e.g., 8-mers, 9-mers, or greater) are sparse. This limitation hinders our ability to test the demixing-mixing transition predicted in our simulations, which occurs for cluster sizes exceeding 10.

      Moreover, the composition of the k-mers identified by SPRITE predominantly involves genomic regions encoding functional RNAs—such as ITS1 and ITS2 (involved in rRNA synthesis) and U3 (encoding small nucleolar RNA)—which largely correspond to housekeeping genes. Conversely, there is little to no data available for protein-coding genes. This restricts direct comparison to our simulations, where the demixing-mixing transition depends critically on the interplay between housekeeping and tissue-specific genes.

      Similarly, while GAM experiments are capable of detecting multi-way chromatin contacts, the currently available datasets primarily report three-way interactions [2,3].

      In summary, due to the limited statistical data on higher-order chromatin clusters [4], a quantitative comparison between our simulation results and experimental observations is not currently feasible. Nevertheless, we have now briefly discussed the experimental techniques for detecting multi-way interactions in the revised manuscript to reflect the current state of the field, mentioning most of the references that the Reviewer suggested.

      (4) Another conceptual point that is a critical omission is the clarification that there are, in fact, known large vs. small transcription factories, or transcriptional clusters, which are specific to stem cells and “stressed cells”. This distinction was initially established by Ibrahim Cisse’s lab (Science 2018) in mouse Embryonic Stem Cells, and also is seen in two other cases in differentiated cells in response to serum stimulus and in early embryonic development:

      - Mediator and RNA polymerase II clusters associate in transcription-dependent condensates https://www.science.org/doi/10.1126/science.aar4199

      - Nuclear actin regulates inducible transcription by enhancing RNA polymerase II clustering https://www.science.org/doi/10.1126/sciadv.aay6515

      - RNA polymerase II clusters form in line with surface condensation on regulatory chromatin https://www.embopress.org/doi/full/10.15252/msb.202110272

      - If “morphology” should indeed be discussed, the last paper is a good starting point, especially in combination with this additional paper: Chromatin expansion microscopy reveals nanoscale organization of transcription and chromatin https://www.science.org/doi/10.1126/science.ade5308

      We thank the Reviewer for pointing out the discussion about small and large clusters observed in stressed cells. Our study aims to provide a broader mechanistic explanation of how mixed and demixed TF clusters form depending on their size. However, to avoid generating confusion between our terminology and the classification already used for transcription factories in stem and stressed cells, we have now added some comments and references in the revised text.

      (5) The statement scripts are available upon request is insufficient by current FAIR standards and seems to be non-compliant with eLife requirements. At a minimum, all, and I mean all, scripts that are needed to produce the simulation outcomes and figures in the paper, must be deposited as a publicly accessible Supplement with the article. Better would be if they would be structured and sufficiently documented and then deposited in external repositories that are appropriate for the sharing of such program code and models.

      We fully agree with the Reviewer. We have now included in the main text a link to an external repository containing all the code required to reproduce and analyze the simulations.

      Recommendations for the authors:

      Minor and technical points

      (6) Red, green, and yellow (mix of green and red) is a particularly bad choice of color code, seeing that red-green blindness is the most common color blindness. I recommend to change the color code.

      We appreciate the Reviewer’s thoughtful comment regarding color accessibility. We fully agree that red–green combinations can pose challenges for color-blind readers. In our figures, however, we chose the red–green–yellow color scheme deliberately because it provides strong contrast and intuitive representation for different TF/TU types. To ensure accessibility, we optimized brightness and saturation within red-green schemes and we carefully verified that the chosen hues are distinguishable under the most common forms of color vision deficiency, i.e. trichromatic color blindness, using color-blindness simulation tools (e.g., Coblis).

      How is the dispersing effect of transcriptional activation and ongoing transcription accounted for or expected to affect the model outcome? This affects both transcriptional clusters (they tend to disintegrate upon transcriptional activation) as well as the large scale organization, where dispersal by transcription is also known.

      We thank the Reviewer for this very insightful question. The current versions of both our toy model and the more complex HiP-HoP model do not incorporate the effects of RNA Polymerase elongation. Our primary goal was to develop a minimalistic framework focused on investigating TF cluster formation and composition. Nevertheless, we find that this straightforward approach provides good agreement between simulations and Hi-C and GRO-seq experiments, lending confidence to the reliability of our results concerning TF cluster composition.

      We fully agree, however, that the effects of transcription elongation are an interesting topic for further exploration. For example, modeling RNA Polymerases as active motors that continually drive the system out of equilibrium could influence the chromatin polymer conformation and the structure of TF clusters. Additionally, investigating how interactions between RNA molecules and nuclear proteins, such as SAF-A, might lead to significant changes in 3D chromatin organization and, consequently, transcription [5] is also an intriguing prospect. Although we do not believe that the main findings of our study, particularly regarding cluster composition and the mixed-demixed transition, would be impacted by transcription elongation effects, we recognize the importance of this aspect. As such, we have now included some comments in the Conclusions section of the revised manuscript.

      “and make the reasonable assumption that a TU bead is transcribed if it lies within 2.25 diameters (2.25σ) of a complex of the same colour; then, the transcriptional activity of each TU is given by the fraction of time that the TU and a TF:pol lie close together.” How is that justified? I do not see how this is reasonable or not, if you make that statement you must back it up.

      As pointed out by the Referee, we consider a TU to be active if at least one TF is within a distance 2.25σ from that TU. This threshold is slightly larger than the TU–TF interaction cutoff distance, r<sub>c</sub> = 1.8σ, between TFs and TUs. The rationale for this choice is to ensure that, in the presence of a TU cluster surrounded by TFs, TUs that are not directly in contact with a TF are still considered active. Nonetheless, we find that using slightly different thresholds, such as 1.8σ or 1.1σ, leads to comparable results, as shown in Fig. S11, demonstrating the robustness of our analysis.
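For concreteness, this activity measure can be computed from simulation snapshots roughly as follows (an illustrative Python sketch; the function name and array layout are ours, not the actual repository scripts):

```python
import numpy as np

def tu_activity(tu_xyz, tf_xyz, threshold=2.25):
    """Fraction of snapshots in which each TU lies within `threshold`
    (in units of the bead diameter sigma) of at least one same-colour TF.

    tu_xyz: array of shape (T, n_tu, 3), TU coordinates over T snapshots
    tf_xyz: array of shape (T, n_tf, 3), same-colour TF coordinates
    """
    T, n_tu, _ = tu_xyz.shape
    active = np.empty((T, n_tu), dtype=bool)
    for t in range(T):
        # all TU-TF distances in this snapshot
        d = np.linalg.norm(
            tu_xyz[t][:, None, :] - tf_xyz[t][None, :, :], axis=-1
        )
        active[t] = (d < threshold).any(axis=1)
    # activity = fraction of time each TU spends close to a TF
    return active.mean(axis=0)
```

Averaging the boolean "close" indicator over snapshots directly yields the fraction of time used as the transcriptional activity of each TU.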

      Clearly, close proximity in 1D genomic space favours formation of similarly-coloured clusters. This is not surprising, it is what you built the model to do. Should not be presented as a new insight, but rather as a check that the model does what is expected.

      We believed that this sentence already conveyed that the formation of single-color clusters driven by 1D genomic proximity is not a surprising outcome. However, we have now slightly rephrased it to better emphasize that this is not a novel insight.

      That said, we would like to highlight that while 1D genomic proximity facilitates the formation of clusters of the same color, the unmixed-to-mixed transition in cluster composition is not easily predictable solely from the TU color pattern. Furthermore, in simulations of real chromosomes, where TU patterns are dictated by epigenetic marks, the complexity of these patterns makes it challenging—if not impossible—to predict cluster composition based solely on the input data of our model.

      “…how closely transcriptional activities of different TUs correlate…” Please briefly state over what variable the correlation is carried out, is it cross correlation of transcription activity time courses over time? Would be nice to state here directly in the main text to make it easier for the reader.

      We have now included a brief description in the revised manuscript explaining how the transcriptional correlations were evaluated and how the correlation matrix was constructed.
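Concretely, entry (i, j) of the matrix is the Pearson cross-correlation of the activity time courses of TUs i and j; a minimal sketch of the construction (the array layout is ours):

```python
import numpy as np

def activity_correlation_matrix(activity):
    """Pearson cross-correlation of TU activity time courses.

    activity: array of shape (T, n_tu); activity[t, i] is the transcriptional
    state of TU i at snapshot t (e.g. 1 if active, 0 otherwise).
    Returns the (n_tu, n_tu) matrix of pairwise Pearson correlations.
    """
    # np.corrcoef expects one row per variable (TU), one column per snapshot
    return np.corrcoef(activity.T)
```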

      “The second concerns how expression quantitative trait loci (eQTLs) work. Current models see them doing so post-transcriptionally in highly-convoluted ways [11, 55], but we have argued that any TU can act as an eQTL directly at the transcriptional level [11].” This text does not actually explain what eQTLs do. I think it should, in concise words.

      We agree with the Referee’s suggestion. We have revised the sentence accordingly and now provide a clear explanation of eQTLs upon their first mention. The revised paragraph now reads as follows:

      “The second concerns how expression quantitative trait loci (eQTLs)—genomic regions that are statistically associated with variation in gene expression levels—function. While current models often attribute their effects to post-transcriptional regulation through complex mechanisms [6,7], we have previously argued that any transcriptional unit (TU) can act as an eQTL by directly influencing gene expression at the transcriptional level [7]. Here, we observe individual TUs up-regulating or down-regulating the activity of other TUs – hallmark behaviors of eQTLs that can give rise to genetic effects such as “transgressive segregation” [8]. This phenomenon refers to cases in which alleles exhibit significantly higher or lower expression of a target gene, and can be caused, for instance, by the creation of a non-parental allele with a specific combination of QTLs with opposing effects on the target gene.”

      “In the string with 4 mutations, a yellow cluster is never seen; instead, different red clusters appear and disappear (Fig. 2Eii)…” How should it be seen? You mutated away most of the yellow beads. I think the kymograph is more informative about the general model dynamics, not the effects of mutations. Might be more appropriate to place a kymograph in Figure 1.

      We agree with the Referee that the kymograph is the most appropriate graphical representation for capturing the effects of mutations. Panel 2E already refers to the standard case shown in Figure 1. We have now clarified this both in the caption and in the main text. In addition, we have rephrased the sentence—which was indeed misleading—as follows:

      “From the activity profiles in Fig. 2C, we can observe that as the number of mutations increases, the yellow cluster is replaced by a red cluster, with the remaining yellow TUs in the region being expelled (Fig. 2B(ii)). This behavior is reflected in the dynamics, as seen by comparing panels E(i) and E(ii): in the string with four mutations, transcription of the yellow TUs is inhibited in the affected region, while prominent red stripes—corresponding to active, transcribing clusters—emerge (Fig. 2E(ii)).” We hope that the comparison is now immediately clear to the reader.

      “…but this block fragments in the string with 4 mutations…” I don’t know or cannot see what is meant by ”fragmentation” in the correlation matrix.

      With the sentence “this block fragments in the string with 4 mutations” we mean that the majority of the solid red pixels within the black box become light-red or white once the mutations are applied. We have now added a clarification of this point in the revised manuscript.

      “Fig. 3D shows the difference in correlation between the case with reduced yellow TFs and the case displayed in Fig. 1E.” Can you just place two halves of the different matrices to be compared into the same panel? Similar to Fig. S5. Will be much easier to compare.

      We thank the Referee for this suggestion. We tried to implement this modification, and report the modified figure below (Author response image 1). As we can see, in the new figure it is difficult to spot the details we refer to in the main text, therefore we prefer to keep the original version of the figure.

      Author response image 1.

      Heatmap comparing activity correlations of TUs in the random string under normal conditions (top half) and with reduced yellow-TF concentration (bottom half).

      What is the omnigenic model? It is not introduced.

      We thank the Reviewer for highlighting this important point. The omnigenic model, first introduced by Boyle et al. in Ref. [6], was proposed to explain how complex traits, including disease risk, are influenced by a vast number of genes. According to this model, the genetic basis of a trait is not limited to a small set of core genes whose expression is directly related to the trait, but also includes peripheral genes. The latter, although not directly involved in controlling the trait, can influence the expression of core genes through gene regulatory networks, thereby contributing to the overall genetic influence on the trait. We have now added a few lines in the revised manuscript to explain this point.

      “Additionally, blue off-diagonal blocks indicate repeating negative correlations that reflect the period of the 6-pattern.” How does that look in a kymograph? Does this mean the 6 clusters of same color steal the TFs from the other clusters when they form?

      The intuition of the Referee is indeed correct. The finite number of TFs leads to competition among TUs of the same colour, resulting in anticorrelation: when a group of six nearby TUs of a given colour is active, other, more distant TUs of the same colour are not transcribing due to the lack of available TFs. As the Referee suggested, this phenomenon is visible in the kymograph showing TU activity. In Author response image 2, it can be observed that typically there is a single TU cluster for each of the three colours (yellow, green, and red). These clusters can be long-lived (e.g., the yellow cluster at the center of the kymograph) or may dissolve during the simulation (e.g., the red cluster at the top of the kymograph, which dissolves at t ∼ 600 × 10<sup>5</sup> τ<sub>B</sub>). In the latter case, TFs of the corresponding colour are released into the system and can bind at a different location, forming a new cluster (as seen with the red cluster forming at the bottom of the kymograph for t > 600 × 10<sup>5</sup> τ<sub>B</sub>). This point is further discussed at point 2.30 of this Reply, where additional graphical material is provided.

      Author response image 2.

      Kymograph showing the TU activity during a typical run in the 6-pattern case. Each row reports the transcriptional state of a TU during one simulation. Black pixels correspond to inactive TUs, red (yellow, green) pixels correspond to active red (yellow, green) TUs.

      “Conversely, negative correlations connect distant TUs, as found in the single-color model…” But at the most distal range, the negative correlation is lost again! Why leave this out? Your correlation curves show the same , equilibration towards no correlation at very long ranges.

      As highlighted in Figure 5Ai, long-range negative correlations (grey segments) predominantly connect distant TUs of the same colour. This is quantified in Figure 5Bi: restricting to same-colour TUs shows that at large genomic separations the correlation is almost entirely negative, with small fluctuations at distances just below 3000 kbp where sampling is sparse; we therefore avoid further interpretation of this regime.
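The distance-resolved correlation underlying Fig. 5Bi can be computed along these lines (an illustrative sketch; `corr` stands for the TU activity correlation matrix and `positions` for the TU genomic coordinates, both hypothetical names):

```python
import numpy as np

def mean_corr_vs_separation(corr, positions, bin_edges):
    """Average pairwise activity correlation vs. genomic separation.

    corr: (n, n) symmetric correlation matrix of TU activities
    positions: genomic coordinates of the n TUs (e.g. in kbp)
    bin_edges: separation bin edges (same units as positions)
    Returns the mean correlation in each separation bin (NaN if empty).
    """
    positions = np.asarray(positions, dtype=float)
    bin_edges = np.asarray(bin_edges, dtype=float)
    i, j = np.triu_indices(len(positions), k=1)  # all distinct TU pairs
    sep = np.abs(positions[i] - positions[j])
    vals = corr[i, j]
    which = np.digitize(sep, bin_edges) - 1
    out = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        mask = which == b
        if mask.any():
            out[b] = vals[mask].mean()
    return out
```

Sparse bins at the largest separations (few TU pairs) then show up directly as noisy or NaN entries, matching the caveat above about distances just below 3000 kbp.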

      “These results illustrate how the sequence of TUs on a string can strikingly affect formation of mixed clusters; they also provide an explanation of why activities of human TUs within genomic regions of hundreds of kbp are positively correlated [60].” This is a very nice insight.

      We thank the Reviewer for the very supportive comment.

      “To quantify the extent to which TFs of different colours share clusters, we introduce a demixing coefficient, θ<sub>dem</sub> (defined in Fig. 1).” This is not defined in Fig. 1 or anywhere else here in the main text.

      We thank the Referee for pointing this out. For a given cluster, the demixing coefficient is defined as

      θ<sub>dem</sub> = (x<sub>i,max</sub> − 1/n) / (1 − 1/n),

      where n is the number of colors, i indexes each color present in the model, and x<sub>i,max</sub> is the largest fraction of TFs of a single color within the cluster.

      The demixing coefficient is defined in the Methods section; therefore, we have replaced “defined in Fig. 1” with “see Methods for definition”.
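To make the limiting values concrete (1 for a single-colour cluster, 0 for equal numbers of all colours), here is an illustrative Python sketch consistent with that definition (the function name is ours; the exact expression is given in the Methods section):

```python
import numpy as np

def demixing_coefficient(counts):
    """Demixing coefficient of a single TF cluster.

    counts: number of TFs of each colour in the cluster (length n).
    Returns 1 for a fully demixed (single-colour) cluster and 0 for a
    maximally mixed one (equal numbers of all n colours).
    """
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    x_max = counts.max() / counts.sum()  # largest same-colour fraction
    # rescale so that x_max = 1/n maps to 0 and x_max = 1 maps to 1
    return (x_max - 1.0 / n) / (1.0 - 1.0 / n)
```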

      “Mixing is facilitated by the presence of weakly-binding beads, as replacing them with non-interacting ones increases demixing and reduces long-range negative correlations (Figure S3). Therefore, the sequence of strong and weak binding sites along strings determines the degree of mixing, and the types of small-world network that emerge. If eQTLs also act transcriptionally in the way we suggest [11], we predict that down-regulating eQTLs will lie further away from their targets than up-regulating ones.” Going into these side topics and minor points here is super distracting and waters down the message. Maybe first deal with the main conclusions on mixed vs demixed clusters in dependence on the strong and specific binding site patterns, before dealing with other additional points like the role of weak binding sites.

      Thank you for the suggestion. We now changed the paragraph to highlight the main results. The new paragraph is as follows. “These results on activity correlation and TF cluster composition suggest that, if eQTLs act transcriptionally as expected [7], down-regulating eQTLs are likely to be located further from their target genes than up-regulating ones. In addition, it is important to note that mixing is promoted by the presence of weakly binding beads; replacing these with non-interacting ones leads to increased demixing and a reduction in long-range negative correlations (Figure S3). More generally, our findings indicate that the presence of multiple TF colors offers an effective mechanism to enrich and fine-tune transcriptional regulation.”

      “…provides a powerful pathway to enrich and modulate transcriptional regulation.” Before going into the possible meaning and implications of the results, please discuss the results themselves first.

      See previous point.

      Figure 5B. Does activation typically coincide with spatial compaction of the binding sites into a small space or within the confines of a condensate? My guess would be that colocalization of the other color in a small space is what leads to the mixing effect?

      As the Reviewer correctly noted, the activity of a given TU is indeed influenced by the presence of nearby TUs of the same color, since their proximity facilitates the recruitment of additional TFs and enhances the overall transcriptional activity. In this context, the mixing effect is certainly affected by the 1D arrangement of TUs along the chromatin fiber. As emphasized in the revised manuscript, when domains of same-color TUs are present (as in the 6-pattern string), the degree of demixing is greater compared to the case where TUs of different colors alternate and large domains are absent (as in the 1-pattern string). This difference in the demixing parameter as a function of the 1D TU arrangement is clearly visible in Fig. S2B.

      “…euchromatic regions blue, and heterochromatic ones grey.” Please also explain what these color monomers mean in terms of non specific interactions with the TFs.

      Generally, in our simulation approach we assume euchromatin regions to be more open and accessible to transcription factors, whereas heterochromatin corresponds to more compacted chromatin segments [9]. To reflect this, we introduce weak, non-specific interactions between euchromatin and TFs, while heterochromatin interacts with TFs only through steric effects. To clarify this point, we have now slightly revised the caption of Fig. 6.

      “More quantitatively, Spearman’s rank correlation coefficient is 3.66 10<sup>−1</sup>, which compares with 3.24 10<sup>−1</sup> obtained previously using a single-colour model [11].” This comparison does not tell me whether the improvement in model performance justifies an additional model component. There are other, likelihood based approaches to assess whether a model fits better in a relevant extent by adding a free model parameter. Can these be used for a more conclusive comparison? Besides, a correlation of 0.36 does not seem so good?

      We understand the Reviewer’s concern that the observed increase in the activity correlation may not appear to provide strong evidence for the improvement of the newly introduced model. However, within the context of polymer models developed to study realistic gene transcription and chromatin organization, this type of correlation analysis is a widely accepted approach for model validation. Experimental data commonly used for such validation include Hi-C maps, FISH experiments, and GRO-seq data [10,11]. The first two are typically employed to assess how accurately the model reproduces the 3D folding of chromatin; a comparison between experimental and simulated Hi-C maps is provided in the Supplementary Information (Fig. S5), showing a Pearson correlation of 0.7. GRO-seq or RNA-seq data, on the other hand, are used to evaluate the model’s ability to predict gene transcription levels. To date, the highest correlation for transcriptional activity data has been achieved by the HiP-HoP model at a resolution of 1 kbp [10], reporting a Spearman correlation of 0.6. Therefore, the correlation obtained with our 2-color model represents a good level of agreement when compared with the more complex HiP-HoP model. In this context, the observed increase in correlation—from 0.324 to 0.366—can be regarded as a modest yet meaningful improvement.
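For reference, the Spearman coefficient used in this comparison is simply the Pearson correlation of the rank-transformed activities; a self-contained sketch (assuming no ties, which is reasonable for continuous activity values):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no ties (adequate for continuous activity profiles)."""
    rank = lambda v: np.argsort(np.argsort(v))  # 0-based ranks
    return np.corrcoef(rank(x), rank(y))[0, 1]
```

In practice `x` and `y` would be the per-TU simulated activities and the corresponding GRO-seq signal; because only ranks enter, the coefficient is insensitive to any monotonic rescaling of either profile.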

      “…consequently, use of an additional color provides a statisticallysignificant improvement (p-value < 10<sup>−6</sup>, 2-sided t-test).” I do not follow this argument. Given enough simulation repeats, any improvement, no matter how small, will lead to statistically significant improvements.

      We agree that this sentence could be misleading. We have now rephrased it in a clearer manner specifying that each of the two correlation values is statistically significant alone, while before we were wrongly referring to the significance of the improvement.

      “Additionally, simulated contact maps show a fair agreement with Hi-C data (Figure S5), with a Pearson correlation r ∼ 0.7 (p-value < 10<sup>−6</sup>, 2-sided t-test).” Nice!

      We thank the Reviewer for the positive comment.

      “Because we do not include heterochromatin-binding proteins, we should not however expect a very accurate reproduction of Hi-C maps: we stress that here instead we are interested in active chromatin, transcription and structure only as far as it is linked to transcription.” Then why do you not limit your correlation assessment to only these regions to show that these are very well captured by your model?

      We thank the Reviewer for this insightful comment. Indeed, we could have restricted our investigation to active chromatin regions, as done in our previous works [11,12]. However, our intention in this section of the manuscript was to clarify that the current model is relatively simple and therefore not expected to achieve a very high level of agreement between experimental and simulated Hi-C maps. Another important limitation of the two-color model described in the section is the absence of active loop extrusion mediated by SMC proteins, which is known to play a central role in establishing TAD boundaries. Consequently, even if our analysis were limited to active chromatin regions, the agreement with experimental Hi-C maps would still remain lower than that obtained with more comprehensive models, such as HiP-HoP, that we use later in the last section of the paper. We have now added a comment in the revised manuscript explicitly noting the lack of active loop extrusion in our 2-color model.

      “We also measure the average value of the demixing coefficient, θ<sub>dem</sub> (Materials and Methods). If θ<sub>dem</sub> = 1, this means that a cluster contains only TFs of one colour and so is fully demixed; if θ<sub>dem</sub> = 0, the cluster contains a mixture of TFs of all colors in equal number, and so is maximally mixed.” Repetitive.

      We have now rephrased the sentence in a more concise way.

      “…notably, this is similar to the average number of productively-transcribing pols seen experimentally in a transcription factory [6].” That seems a bit fast and loose. The number of Polymerases can differ depending on state, type of factory, gene etc. and vary from a few to a few hundred Polymerase complexes depending on definition of factory, and what is counted as active. Also, one would think that polymerases only make up a small part of the overall protein pool that constitutes a condensate, so it is unclear whether this is a pertinent estimate.

      Here we refer to the average size of what is normally referred to as a PolII factory, not a generic nuclear condensate. These are the clusters which arise in our simulations. These structures emerge through microphase separation and have been well characterised; for instance, see [13] for a recent review. For these structures, while there is a distribution of sizes, the average is well defined and corresponds to about 100 nm, which is very much in line with the size of the clusters we observe, both in terms of 3D diameter and number of participating proteins. Because of this size, the number of active complexes that can contribute cannot be significantly more than ∼ 10. These estimates are, we note, very much in line with super-resolution measurements of SAF-A clusters [14], which are associated with active transcription, so it is reasonable to assume they colocalise with RNA and polymerase clusters.

      “Conversely, activities of similar TUs lying far from each other on the genetic map are often weakly negatively correlated, as the formation of one cluster sequesters some TFs to reduce the number available to bind elsewhere.” This point is interesting, and I strongly suspect that this indeed happening. But I don’t think it was shown in the analysis of the simulation results in sufficient clarity. We need direct assessment of this sequestration, currently it’s only indirectly inferred.

      Indeed, this is the mechanism underlying the emergence of negative long-range correlations among TU activity values. As the Reviewer correctly pointed out, the competition for a finite number of TFs was only indirectly inferred in the original manuscript. To address this, we have now included a new figure explicitly illustrating this effect. In Fig. S12, we show the kymograph of active TUs (left panel), as in Fig. 2E(i) of the main text, alongside a new kymograph depicting the number of green TFs within a sphere of radius 10σ centered on each green TU (right panel). For simplicity, we focus here only on green TUs and TFs. It can be observed that, during the initial part of the simulation, green TFs are localized near genomic position ∼ 2000 (right panel), where green TUs are transcriptionally active (left panel). Toward the end of the simulation, TUs near genomic position ∼ 500 become active, coinciding with the relocation of TFs to this region and the depletion of the previous one.
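The quantity shown in the right-hand kymograph, the local TF count around each TU, can be obtained with a helper along these lines (an illustrative sketch; names and array layout are ours):

```python
import numpy as np

def local_tf_counts(tu_xyz, tf_xyz, radius=10.0):
    """Number of TFs within `radius` (in units of sigma) of each TU,
    for every snapshot.

    tu_xyz: array of shape (T, n_tu, 3), TU coordinates over T snapshots
    tf_xyz: array of shape (T, n_tf, 3), same-colour TF coordinates
    Returns an (T, n_tu) integer array whose rows form the kymograph
    of local TF density around each TU.
    """
    T, n_tu, _ = tu_xyz.shape
    counts = np.empty((T, n_tu), dtype=int)
    for t in range(T):
        d = np.linalg.norm(
            tu_xyz[t][:, None, :] - tf_xyz[t][None, :, :], axis=-1
        )
        counts[t] = (d < radius).sum(axis=1)  # TFs inside the sphere
    return counts
```

Plotting these counts against time alongside the activity kymograph makes the sequestration directly visible: regions where the count drops coincide with activity appearing elsewhere.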

      In the definition for the demixing coefficient (equation 1), what does the index i stand for?

      Here i is an index denoting each of the colors present in the model. We have now specified the meaning of i after Eq. 1.

      Reviewer 3 (Public Review):

      In this work, the authors present a chromatin polymer model with some specific pattern of transcription units (TUs) and diffusing TFs; they simulate the model and study TF clustering, mixing, gene expression activity, and their correlations. First, the authors designed a toy polymer with colored beads of a random type, placed periodically (every 30 beads, or 90 kb). These colored beads are considered a transcription unit (TU). Same-colored TUs attract each other, mediated by similarly colored diffusing beads considered as TFs. This led to clustering (condensation of beads) and correlated (or anti-correlated) “gene expression” patterns. Beyond the toy model, when the authors introduce TUs in a specific pattern, it leads to the emergence of specialized and mixed clusters of different TFs. Human chromatin models with a realistic distribution of TUs also lead to the mixing of TFs when cluster size is large.

      Strengths.

      This is a valuable polymer model for chromatin with a specific pattern of TUs and diffusing TF-like beads. Simulation of the model tests many interesting ideas. The simulation study is convincing and the results provide solid evidence showing the emergence of mixed and demixed TF clusters within the assumptions of the model.

      Weaknesses.

      The model has many assumptions, some of which are a bit too simplistic. Concerns about the work are detailed below:

      We thank the Referee for this overall positive evaluation.

      The authors assume that when the diffusing beads (TFs) are near a TU, the gene expression starts. However, mammalian gene expression requires activation by enhancer-promoter looping and other related events. It is not a simple diffusion-limited event. Since many of the conclusions are derived from expression activity, will the results be affected by the lack of looping details?

      We do not need to assume promoter–enhancer contact: it emerges naturally through the bridging-induced phase separation, and indeed this is a key strength of our model. Although looping is not assumed to trigger transcriptional initiation, in practice the vast majority of events in which a TF is near a TU occur within a cluster where regulatory elements are looped. Transcription in our model is therefore associated with bridging-induced phase separation; looping is not lacking but is an emergent property of the model (not an assumption) that naturally accompanies transcription. Accordingly, both contact maps and transcriptional activity are well predicted by our model, both in the version described here and in the more sophisticated single-colour HiP-HoP model [10] (an important ingredient of which is the bridging-induced phase separation).

      Authors neglect protein-protein interactions. Without protein-protein interactions, condensate formation in natural systems is unlikely to happen.

      We thank the Reviewer for pointing out the absence of protein-protein interactions in our simulations. While we acknowledge this limitation, we would like to emphasize that experimental studies have not observed nuclear proteins forming condensates at physiological concentrations in the absence of DNA or chromatin. For example, studies such as Ryu et al. [15] and Shakya et al. [16] show that protein-protein interactions alone are insufficient to drive condensate formation in vivo. Instead, the presence of a substrate, such as DNA or chromatin, is essential to favor and stabilize the formation of protein clusters.

      In our simulations, we propose that protein liquid-liquid phase separation (LLPS) is driven by the presence of both strong and weak attractions between multivalent protein complexes and the chromatin filament. As stated in our manuscript, the mechanism leading to protein cluster formation is the bridging-induced attraction. This mechanism involves a positive feedback loop, where protein binding to chromatin induces a local increase in chromatin density, which then attracts more proteins, further promoting cluster formation.

      While protein-protein interactions could be incorporated into our simulations, we believe such an interaction would need to be weak to remain consistent with experimental data. In any case, incorporating it would not alter the conclusions of our study.
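      The positive feedback loop described above can be caricatured in a deliberately minimal 1D lattice sketch. This is purely illustrative and is not the authors' 3D polymer model: the function name, the lattice geometry, and all rate parameters below are arbitrary assumptions, chosen only to show that "binding begets binding" produces clustered occupancy.

```python
import random

def toy_bridging(n_sites=200, p_on=0.01, p_off=0.2, boost=0.3,
                 steps=500, seed=1):
    """1D caricature of bridging-induced clustering: a bound
    neighbour both raises the on-rate and lowers the off-rate,
    so occupancy self-reinforces locally (periodic lattice)."""
    random.seed(seed)
    occ = [False] * n_sites
    for _ in range(steps):
        for i in range(n_sites):
            # does site i have a bound neighbour? (wraps around)
            neigh = occ[i - 1] or occ[(i + 1) % n_sites]
            if occ[i]:
                # "bridged" proteins (with a bound neighbour) unbind less readily
                if random.random() < (0.1 * p_off if neigh else p_off):
                    occ[i] = False
            elif random.random() < p_on + (boost if neigh else 0.0):
                occ[i] = True
    return occ

sites = toy_bridging()
print(sum(sites), "of", len(sites), "sites occupied")
```

      Because binding enhances further binding nearby, occupied sites tend to end up contiguous rather than scattered; the real model achieves the analogous effect in 3D through chromatin bridging rather than through any direct protein-protein attraction.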

      What is described in this paper is a generic phenomenon; many kinds of multivalent chromatin-binding proteins can form condensates/clusters as described here. For example, if we replace different color TUs with different histone modifications and different TFs with Hp1, PRC1/2, etc, the results would remain the same, wouldn’t they? What is specific about transcription factor or transcription here in this model? What is the logic of considering 3kb chromatin as having a size of 30 nm? See Kadam et al. (Nature Communications 2023). Also, DNA paint experimental measurement of 5kb chromatin is greater than 100 nm (see work by Boettiger et al.).

      We thank the Reviewer for this important observation, which we now address. To begin, we consider the toy model introduced in the first part of the manuscript, where TUs are randomly positioned rather than derived from epigenetic data. As the Reviewer points out, in this simplified context our results reflect a generic phenomenon: the composition of clusters depends primarily on their size, independent of the specific types of proteins involved. However, the main goal of our work is to gain insight into apparently contradictory experimental findings, which show that some transcription factories consist of a single type of transcription factor, while others contain multiple types. This led us to focus on TF clusters and their role in transcriptional regulation and co-regulation of distant genes. Therefore, in the second part of the manuscript, we use DNase I hypersensitive site (DHS) data to position TUs based on predicted TF binding sites, providing a more biological framework. In both the toy model and the more realistic HiP-HoP model, we observe a size-dependent transition in cluster composition. However, we refrain from generalizing these results to clusters composed of other protein complexes, such as HP1 and PRC, as their binding is governed by distinct epigenetic marks (e.g. H3K9me3 and H3K27me3), which exhibit different genomic distributions compared to DHS marks.

      Finally, the mapping of 3 kbp to 30 nm is an estimate which does not significantly impact our conclusions. The relationship between genomic distance (in kbp) and spatial distance (in nm) is highly dependent on the degree of chromatin compaction, which can vary across cell types and genomic contexts. As such, providing an exact conversion is challenging [17]. For example, in a previous work based on the HiP-HoP model [12], we compared simulated and experimental FISH measurements and found that 1 kbp typically corresponds to 15–20 nm, implying that 3 kbp could span up to 60 nm. Nevertheless, we emphasize that varying this conversion factor does not affect the core results or conclusions of our study. We have now included a clarification in the revised SI to highlight this point.
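      For concreteness, this back-of-the-envelope conversion can be sketched as follows. The helper function is hypothetical, and the 15–20 nm/kbp range is only the illustrative FISH-calibrated estimate quoted above, not a fixed constant.

```python
def genomic_to_spatial_nm(kbp, nm_per_kbp):
    """Map a genomic distance (kbp) to an approximate spatial
    distance (nm) for a given chromatin compaction factor."""
    return kbp * nm_per_kbp

# 1 kbp ~ 15-20 nm, so a 3 kbp bead could span roughly:
low = genomic_to_spatial_nm(3, 15)   # 45 nm
high = genomic_to_spatial_nm(3, 20)  # 60 nm
print(f"{low}-{high} nm")
```

      Varying the compaction factor in this way simply rescales distances; it does not change the relative geometry of the simulated clusters, which is why the conclusions are insensitive to the exact conversion.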

      Recommendations for the authors:

      Other points.

      Figure 1(D) caption says 2.25σ = 1.6 nanometer. Is this a typo? Sigma is 30nm.

      Yes, it was. As 1σ ∼ 30 nm, we have 2.25σ = 2.25 · 30 nm = 67.5 nm ∼ 6.75 × 10<sup>−8</sup> m. We have now corrected the caption.

      Page 6, column 2nd, 3rd para, it is written that θ<sub>dem</sub> (”defined in Fig.1”). There is no θ<sub>dem</sub> defined in Fig.1, is there? I can see it defined in Methods but not in Fig. 1.

      Correct; we have replaced “(defined in Fig. 1)” with “(see Methods for definition)”.

      Page 6, column 2, 4th para: what does “correlations overlap and correlations diverge mean”?

      With reference to the plots in Fig. 5B, “correlations overlap” and “correlations diverge” simply refer to the fact that the same-colour (red curves) and different-colour (blue curves) correlation trends may or may not lie on top of each other. We have now clarified this point.

      What is the precise definition of correlation in Fig 5B (Y-axis)?

      In Fig. 5B, “correlation” means the Pearson correlation coefficient. We have now specified this in the revised text and in the caption of Fig. 5.
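      For reference, the Pearson coefficient is the covariance of two series normalized by the product of their standard deviations. A minimal self-contained sketch (using toy data, not the paper's simulated transcriptional activities) is:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly correlated toy series:
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0
```

      Values near +1 thus indicate that two activity traces rise and fall together, which is the sense in which the red and blue curves of Fig. 5B are compared.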

      References

      (1) S. A. Quinodoz, J. W. Jachowicz, P. Bhat, N. Ollikainen, A. K. Banerjee, I. N. Goronzy, M. R. Blanco, P. Chovanec, A. Chow, Y. Markaki et al., “RNA promotes the formation of spatial compartments in the nucleus,” Cell, vol. 184, no. 23, pp. 5775–5790, 2021.

      (2) R. A. Beagrie, A. Scialdone, M. Schueler, D. C. Kraemer, M. Chotalia, S. Q. Xie, M. Barbieri, I. de Santiago, L.-M. Lavitas, M. R. Branco et al., “Complex multi-enhancer contacts captured by genome architecture mapping,” Nature, vol. 543, no. 7646, pp. 519–524, 2017.

      (3) R. A. Beagrie, C. J. Thieme, C. Annunziatella, C. Baugher, Y. Zhang, M. Schueler, A. Kukalev, R. Kempfer, A. M. Chiariello, S. Bianco et al., “Multiplex-GAM: genome-wide identification of chromatin contacts yields insights overlooked by Hi-C,” Nature Methods, vol. 20, no. 7, pp. 1037–1047, 2023.

      (4) L. Liu, B. Zhang, and C. Hyeon, “Extracting multi-way chromatin contacts from Hi-C data,” PLOS Computational Biology, vol. 17, no. 12, p. e1009669, 2021.

      (5) R.-S. Nozawa, L. Boteva, D. C. Soares, C. Naughton, A. R. Dun, A. Buckle, B. Ramsahoye, P. C. Bruton, R. S. Saleeb, M. Arnedo et al., “SAF-A regulates interphase chromosome structure through oligomerization with chromatin-associated RNAs,” Cell, vol. 169, no. 7, pp. 1214–1227, 2017.

      (6) E. A. Boyle, Y. I. Li, and J. K. Pritchard, “An expanded view of complex traits: from polygenic to omnigenic,” Cell, vol. 169, no. 7, pp. 1177–1186, 2017.

      (7) C. Brackley, N. Gilbert, D. Michieletto, A. Papantonis, M. Pereira, P. Cook, and D. Marenduzzo, “Complex small-world regulatory networks emerge from the 3D organisation of the human genome,” Nature Communications, vol. 12, no. 1, pp. 1–14, 2021.

      (8) R. B. Brem and L. Kruglyak, “The landscape of genetic complexity across 5,700 gene expression traits in yeast,” Proceedings of the National Academy of Sciences, vol. 102, no. 5, pp. 1572–1577, 2005.

      (9) M. Chiang, C. A. Brackley, D. Marenduzzo, and N. Gilbert, “Predicting genome organisation and function with mechanistic modelling,” Trends in Genetics, vol. 38, no. 4, pp. 364–378, 2022.

      (10) M. Chiang, C. A. Brackley, C. Naughton, R.-S. Nozawa, C. Battaglia, D. Marenduzzo, and N. Gilbert, “Genome-wide chromosome architecture prediction reveals biophysical principles underlying gene structure,” Cell Genomics, vol. 4, no. 12, 2024.

      (11) A. Buckle, C. A. Brackley, S. Boyle, D. Marenduzzo, and N. Gilbert, “Polymer simulations of heteromorphic chromatin predict the 3d folding of complex genomic loci,” Mol. Cell, vol. 72, no. 4, pp. 786–797, 2018.

      (12) G. Forte, A. Buckle, S. Boyle, D. Marenduzzo, N. Gilbert, and C. A. Brackley, “Transcription modulates chromatin dynamics and locus configuration sampling,” Nature Structural & Molecular Biology, vol. 30, no. 9, pp. 1275–1285, 2023.

      (13) P. R. Cook and D. Marenduzzo, “Transcription-driven genome organization: a model for chromosome structure and the regulation of gene expression tested through simulations,” Nucleic Acids Research, vol. 46, no. 19, pp. 9895–9906, 2018.

      (14) M. Marenda, D. Michieletto, R. Czapiewski, J. Stocks, S. M. Winterbourne, J. Miles, O. C. Flemming, E. Lazarova, M. Chiang, S. Aitken et al., “Nuclear RNA forms an interconnected network of transcription-dependent and tunable microgels,” bioRxiv, pp. 2024–06, 2024.

      (15) J.-K. Ryu, C. Bouchoux, H. W. Liu, E. Kim, M. Minamino, R. de Groot, A. J. Katan, A. Bonato, D. Marenduzzo, D. Michieletto et al., “Bridging-induced phase separation induced by cohesin SMC protein complexes,” Science Advances, vol. 7, no. 7, p. eabe5905, 2021.

      (16) A. Shakya, S. Park, N. Rana, and J. T. King, “Liquid-liquid phase separation of histone proteins in cells: role in chromatin organization,” Biophysical Journal, vol. 118, no. 3, pp. 753–764, 2020.

      (17) A.-M. Florescu, P. Therizols, and A. Rosa, “Large scale chromosome folding is stable against local changes in chromatin structure,” PLoS Computational Biology, vol. 12, no. 6, p. e1004987, 2016.

    1. Technicians: People who build, operate, maintain, and repair the items that the experts design and theorize about. They have highly technical knowledge as well, but of a more practical nature (the hands-on users, operators).

      One of the common audience types, technicians, stood out to me because our class is about technical writing.

    2. Add and vary graphics. For non-specialist audiences, you may want to use more, simpler graphics. Graphics for specialists are often more detailed and technical. In technical documents for non-specialists, there also tend to be more “decorative” graphics—ones that are attractive but serve no strict informative or persuasive purpose at all.

      Find ways to create visualizations of data or protocols included in the document to improve clarity, understanding, and readability.

    3. Add cross-references to important information. In technical information, you can help readers by pointing them to background sources. If you cannot fully explain a topic at a certain point in a document, point to a section, chapter, or external source where the information is located. You can also include a glossary of terms or appendices at the end of a document with extra information that is related, but not strictly necessary, to understanding the document’s content.

      Cross-reference any information that is not strictly necessary but may be helpful, so that interested readers can find it.

    4. Change sentence style. How you write—at the individual sentence level—can make a difference to the effectiveness of your document. In instructions, for example, using imperative voice and “you” phrasing is vastly more understandable than passive voice or third-person phrasing. Passive voice is where one switches the location of the subject and object in a sentence. A simple, active sentence such as “The boy threw the ball” becomes the wordy, passive sentence “The ball was thrown by the boy.” Taking the emphasis off the noun—in this case, the boy—and the action—threw vs. was thrown—detracts from the meaning of the sentence. Passive, person-less writing is harder to read—put people and action in your writing. There are times to write in passive voice, but technical documents generally need active sentence structure.

      Use more active sentence structure than passive voice.

    5. Change the organization of your information. Sometimes, you can have all the right information but arrange it in the wrong way. For example, there can be too much background information up front (or too little) such that certain readers get lost. Other times, background information needs to be placed throughout the main text—for example, in instructions it is sometimes better to feed in chunks of background at the points where they are immediately needed. If the document does not seem to work for the audience, try reorganizing some of the information so that the document is clearer and easier to understand.

       Strengthen transitions and key words. It may be difficult for readers, particularly non-specialists, to see the connections between the main sections of your report, between individual paragraphs and sometimes even between individual sentences. You can make these connections much clearer by adding transition words and by echoing key words more accurately. Words like “therefore,” “for example,” “however” are transition words—they indicate the logic connecting the previous thought to the upcoming thought. You can also strengthen transitions by carefully echoing the same key words. A report describing new software for architects might use the word software several times on the same page or even in the same paragraph. In technical documents, it is not a good idea to vary word choice—use the same words so that people can clearly understand your ideas. Your design choices can also visually connect and transition between sections (see the “Strategies to revise document design” below).

       Write stronger introductions—for the whole document and for major sections. People seem to read with more confidence and understanding when they have the big picture—a view of what is coming, and how it relates to what they have just read. Therefore, writing a strong introduction to the entire document—one that makes clear the topic, purpose, audience, and contents of that document—makes the document easier to understand. In most types of technical documents, each major section includes mini-introductions that indicate the topic of the section and give an overview of the subtopics to be covered in that section to let the reader know what information each section will contain.

       Create topic sentences for paragraphs and paragraph groups. It can help readers immensely to give them an idea of the topic and purpose of a section (a group of paragraphs) and in particular to give them an overview of the subtopics about to be covered. This is the first sentence of the paragraph, and states the main point or idea. The type of topic sentence can vary depending on document type. In an argumentative paragraph, you will make a claim which you will prove through the rest of the paragraph (e.g., reports; proposals; some emails, letters, and memos). In informative documents, the topic sentence will be an overall point which you will explain and back up in the detail sentences (e.g., informative emails, letters, and memos; results section of a report).

      Review

    6. Change the technical level of the information. You may have the right information in your document, but it may be pitched at too high or too low a technical level. Are you using terms the reader will be familiar with? Is the sentence structure clear for the audience’s reading comprehension? It may be pitched at the wrong kind of audience—for example, an expert audience rather than a technician audience. This happens often when product-design notes are passed off as instructions. Think about your audience’s education level and familiarity with the topic and terms used, and revise to make sure your content is clear for that audience.

      Ensure that the document matches the audience in terms of reading level and general understanding: write for a specific group of people at their level.

    7. Corporate culture and values are similar, but on a micro level. Corporate culture is created by the employees and how they interact. Within a company, different departments may have their own cultures, in addition to the company’s collective culture. Corporate values are set by the company, and are often reflected in their mission statements, policies, and other structures. These are the principles that guide the company’s decisions and goals. When considering culture and values, identify both personal and corporate factors which can influence the reader.

      In addition to personal culture, if the document is intended for company use, make sure that it is in line with the company's culture and way of doing things.

    8. Experts: People who know the business or organization (and possibly the theory and the product) inside and out. They designed it, they tested it, and they know everything about it. Often, they have advanced degrees and operate in academic settings or in research and development areas of the government and technology worlds (the creators, specialists). Technicians: People who build, operate, maintain, and repair the items that the experts design and theorize about. They have highly technical knowledge as well, but of a more practical nature (the hands-on users, operators).

      Experts and technicians are different audience types: experts have an understanding of the workings of the product, and its design, while technicians work on the product, understand how to use and fix it, but do not necessarily understand the theory behind the product.

    1. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Referee #3

      Evidence, reproducibility and clarity

      Wiesner et al. use a combination of state-of-the-art imaging techniques to visualize the exocytosis of vesicles labeled with vamp2-phluorin. This work builds on previous findings of the group and aims to quantify vesicle exocytosis along the axon and relate it to the location of actin rings (MPS). Exocytosis indeed occurs in axonal and dendritic regions; however, at a significantly lower rate than in presynaptic terminals. Exceptionally, the AIS shows a remarkably high exocytosis rate compared to other axonal regions. Perturbation of the MPS with swinholide increases the nonsynaptic release of vamp2-phluorin. The spots supporting exocytosis along the axon lack spectrin but are spatially segregated from regions used for CCP formation.

      This work takes advantage of last-generation optical microscopy approaches to provide a quantitative analysis of exocytosis along the axon in nonsynaptic regions. Findings are solid, and the segregation of spots supporting exocytosis and endocytosis is intriguing. However, it is unclear to me whether the results obtained reflect a general mechanism or if they are biased by the experimental approach. Specifically, I have these major comments:

      1) Use of the term "spontaneous." I understand that the term "spontaneous" refers to exocytosis that "just" occurs. But exocytosis cannot be evaluated without considering electrical activity. Vamp2-phluorin has been extensively used to investigate neurotransmitter release. Since spontaneous neurotransmitter release occurs in the absence of action potentials, it is important to know how the rates of exocytosis are affected after incubation with TTX. These experiments are necessary to show if vesicles are indeed released spontaneously or if they require the presence of action potentials.

      2) The rates of spontaneous exocytosis are expressed in events/μm²/hour because these events are quite infrequent. According to the methods section, typical recording times are 5 minutes or less (lines 522-533). It would be more appropriate to express values per minute to establish comparisons with other works. The goal is to understand what sort of vesicles are being exocytosed. This is a key question that must be addressed before exploring other aspects such as the relationship to spectrin or endocytosis. If the authors can provide more information about the types of vesicles being exocytosed, this work becomes very relevant. Since I am aware of the technical difficulties associated with this, some suggestions are: use a vamp2-apex2-phluorin construct and confirm vesicle identity by EM, or use iGluSnFR to confirm neurotransmitter release along axons.

      Minor comments:

      1) Since culture conditions promote synapse formation, could the spontaneous exocytosis found along axons be related to synapse formation? This aspect could be tested by co-staining with PSD-95 after fixation.

      Significance


      This is a state-of-the-art study of the cell biology of neurons. The work demonstrates that vesicles are exocytosed along the axon and describes the molecular characteristics of the cytoskeletal elements involved.

      General assessment:

      The main strength of the study is the quality and diversity of imaging approaches used. The main limitation is defining the type of vesicle being exocytosed. It is important to know if vesicles imaged contain neurotransmitters.

      Advance:

      This paper is technically sound and provides interesting new concepts about how exocytosis occurs in nonsynaptic regions.

      Audience:

      This paper is appropriate for an audience familiar with cell biology or cellular and molecular neuroscience.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #2

      Evidence, reproducibility and clarity

      The manuscript by Wiesner et al examines the non-synaptic exocytosis of vesicles in the axon initial segment (AIS) as well as proximal and distal axons. Using VAMP2-pHluorin, the authors convincingly demonstrated that exocytosis occurs in the AIS, along the axon shaft, the cell body, and dendrites. Upon perturbing the membrane-associated periodic skeleton (MPS) pharmacologically, exocytic events at the AIS seem to increase, suggesting a potential inhibition of exocytosis at baseline by the MPS. To test whether exocytosis occurs where the MPS is disorganized, the authors developed a novel correlative live-cell/super-resolution microscopy approach in which exocytic events are identified live by HiLo imaging, followed by fixing the cells, imaging spectrins on an SMLM platform, and identifying the repetitive spectrin map with the SReD detector. Using this approach, they found that exocytosis in proximal and distal axons occurs at membrane areas with less spectrin. This area is distinct from the clathrin-enriched area that the group has previously identified as the endocytic sites. The strength of the paper lies in the imaging techniques. However, for publication, the following concerns should be addressed.

      Major concerns

      1) The absence of spectrin mesh. Unlike their previous paper using platinum replica EM, where filaments are clearly visible, here the authors use antibodies against one end of spectrin; they can therefore visualize the periodic distribution of spectrin ends but cannot visualize the meshwork itself in this study. In addition to this limitation, in thicker processes, molecules below and above the focal plane may or may not be visible, potentially creating an apparent spectrin-less area at the center. Thus, the conclusion regarding the absence of spectrin meshes at the exocytic sites is not well supported.

      2) The rate of exocytosis among controls. The rate of exocytosis at the AIS does not match between Fig. 2C and 3B (1.08 vs 0.64 events/μm²/hour). Although the increase in the rate by swinA is relative to the DMSO control, the rate in swinA-treated neurons can be said to be similar to the control in Fig. 2C, so it is equally likely that DMSO, rather than swinA, is affecting the rate. They need an additional control group with no treatment.

      3) Alignment of pHluorin with SMLM images. Since the interpretation depends highly on the perfect alignment of live-cell images with SMLM data and fixation can alter the ultrastructure [PMC7339343], using internal structures like mitochondria as fiducials would be more helpful. However, adding discussion would suffice.

      Minor concerns

      1) The data presented here do not support the claim that actin perturbation favors non-synaptic exocytosis (line 205). Please revise the sentence.

      Significance

      General assessment: The strength of the paper is in the imaging techniques. Visualizing the exocytic sites along the axons relative to the MPS is novel. The limitation of the paper is the lack of an approach to fully visualize spectrin networks.

      Advance: If they can provide more convincing data demonstrating that exocytic sites are devoid of the spectrin meshwork, this paper will establish a novel concept regarding how non-synaptic exocytosis occurs along the axon.

      Audience: Researchers working in the neuronal cell biology field will be the main audience of the manuscript.

      Reviewer's expertise: Neuronal cell biology, exocytosis and endocytosis, and imaging

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      Summary

      This manuscript investigates non-synaptic exocytosis along the axon shaft and examines how the submembrane actin-spectrin skeleton shapes the distribution of exocytic sites. Using cultured hippocampal neurons expressing VAMP2-pHluorin, the authors map spontaneous exocytosis along axons and apply a correlative live-cell/super-resolution imaging workflow to visualize the nanoscale organization of spectrin gaps relative to exocytic hotspots. They report that axonal shaft exocytosis is enriched at the AIS, that perturbing the actin-spectrin lattice alters shaft exocytosis, and that exocytic sites generally correspond to spectrin-free regions. The methodological quality and imaging data are excellent and represent a strength of the study.

      Major comments

      The manuscript presents compelling imaging, but several major claims require additional experimental evidence or clarification. The most critical issue concerns the distinction between synaptic and non-synaptic exocytic events. In Figure 1, synaptophysin is used to define synapses, but this is insufficient since synaptophysin is a presynaptic marker and does not confirm the presence of a postsynaptic compartment. The classification of exocytic events as synaptic, therefore, requires co-localization with postsynaptic markers such as PSD95 or Homer. Without this, the paper's main conceptual distinction is not fully supported.

      Figure 6 requires revision because endocytosis needs to be assessed using a synaptic vesicle-specific assay. A synaptotagmin luminal-domain antibody uptake experiment is recommended, as it would allow precise identification of bona fide SV recycling. This is essential to conclude whether the reported endocytic events reflect synaptic vesicle turnover.

      The nature of the vesicles undergoing exocytosis along the shaft and at the AIS also remains unresolved. It will be essential to determine whether the authors are observing exocytosis of synaptic vesicles (e.g., VGLUT-positive) or large dense-core vesicles (e.g., BDNF-containing). This can be addressed straightforwardly using available pHluorin-tagged constructs (VGLUT-pHluorin, BDNF-pHluorin).

      Calcium dependence of the AIS exocytic events should be evaluated. Experiments removing extracellular calcium, blocking voltage-gated calcium channels, or depolarizing neurons (for example, with elevated KCl) would clarify whether these correspond to classical calcium-triggered SV fusion.

      These requested experiments are realistic in scope and can generally be completed in a few weeks. The imaging, analysis, and methodological descriptions are of high quality, although more information on replication, sample size, and statistical treatment would improve reproducibility.

      Minor comments

      Clarification of the criteria used to classify spectrin gaps versus clathrin clearings would be helpful. Some figure legends require more detailed acquisition parameters. A clearer description of the image registration and alignment steps in the correlative pipeline would improve transparency. Prior literature on non-synaptic axonal exocytosis and on AIS trafficking could be cited more extensively. The figures are generally high quality, and a schematic summarizing the main findings might help readers.

      Referees cross-commenting

      Our reviews are pretty consistent overall, I think. Major requests relate to calcium/depolarization dependency, and I would like to insist on the synaptic vs extra-synaptic and SV vs LDCV issues.

      Significance

      General assessment: This is a technically strong study addressing an interesting and timely question in neuronal cell biology. The imaging quality and methodological innovation are clear strengths. Significant limitations include insufficient distinction between synaptic and non-synaptic release, lack of characterization of vesicle identity, and unclear calcium dependence of AIS exocytosis. Addressing these points will significantly improve the rigor and impact of the study.

      Advance: The work provides new insights into the spatial organization of axonal exocytosis and its relationship to the actin-spectrin skeleton. The correlative imaging pipeline is valuable. However, the conceptual advance depends on resolving the synaptic/non-synaptic distinction and identifying the vesicle populations involved.

      Audience: The study will primarily interest a specialized audience in cellular neuroscience, membrane trafficking, and axonal biology. With the recommended revisions, it will also appeal to a broader neurobiology readership interested in nanoscale cytoskeletal organization, synaptic physiology, and axonal signaling.

      Field of expertise: neuronal membrane trafficking, synaptic vesicle cycling, autophagy and endolysosomal pathways, cytoskeletal organization. I do not claim expertise in advanced optical engineering, but feel comfortable evaluating the biological interpretations and trafficking mechanisms.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      In this manuscript, Aghabi et al. present a comprehensive characterization of ZFT, a metal transporter located at the plasma membrane of the eukaryotic parasite Toxoplasma gondii. The authors provide convincing evidence that ZFT plays a crucial role in parasite fitness, as demonstrated by the generation of a conditional knockdown mutant cell line, which exhibits a marked impact on mitochondrial respiration, a process dependent on several iron-containing proteins. Consistent with previous reports, the authors also show that disruption of mitochondrial metabolism leads to conversion into the persistent bradyzoite stage. The study then employed advanced techniques, such as inductively coupled plasma-mass spectrometry (ICP-MS) and X-ray fluorescence microscopy (XFM), to demonstrate that ZFT depletion results in reduced parasite-associated metals, particularly iron and zinc. Additionally, the authors show that ZFT expression is modulated by the availability of these metals, although defects in the transporter could not be compensated for by exogenous addition of iron or zinc. 

      While the manuscript does not directly investigate the transport function of ZFT through biochemical assays, the authors indirectly support the notion that ZFT can transport zinc by demonstrating its ability to compensate for a lack of zinc transport in a yeast heterologous system. Furthermore, phenotypic analyses suggest defects in iron availability, particularly with regard to Fe-S mitochondrial proteins and mitochondrial function. Overall, the manuscript provides a solid, well-rounded argument for ZFT's role in metal transport, using a combination of complementary approaches. Although direct biochemical evidence for the transporter's substrate specificity and transport activity is lacking, the converging evidence, including changes in metal concentrations upon ZFT depletion, yeast complementation data, and phenotypic changes linked to iron deficiency, presents a convincing case. Some aspects of the results may appear somewhat unbalanced, particularly since iron transport could not be confirmed through heterologous complementation, while zinc-related phenotypes in the parasites have not been thoroughly explored (which is challenging given the limited number of zinc-dependent proteins characterized in Toxoplasma). Nevertheless, given that metal acquisition remains largely uncharacterized in Toxoplasma, this manuscript provides an important first step in identifying a metal transporter in these parasites, and the data presented are generally convincing and insightful. 

      We thank the reviewer for their assessment and would like to highlight that we now add direct biochemical characterisation in the new Figure 8, supporting our hypothesis and confirming iron transport by this protein.

      Reviewer #2 (Public review): 

      Summary: 

      The intracellular pathogen Toxoplasma gondii scavenges metal ions such as iron and zinc to support its replication; however, mechanistic studies of iron and zinc uptake are limited. This study investigates the function of a putative iron and zinc transporter, ZFT. In this paper, the authors provide evidence that ZFT mediates iron and zinc uptake by examining the regulation of ZFT expression by iron and zinc levels, the impact of altered ZFT expression on iron sensitivity, and the effects of ZFT depletion on intracellular iron and zinc levels in the parasite. The effects of ZFT depletion on parasite growth are also investigated, showing the importance of ZFT function for the parasite. 

      Strengths: 

      A key strength of the study is the use of multiple complementary approaches to demonstrate that ZFT is involved in iron and zinc uptake. Additionally, the authors build on their finding that loss of ZFT impairs parasite growth by showing that ZFT depletion induces stage conversion and leads to defects in both the apicoplast and mitochondrion. 

      Weaknesses: 

      (1) Excess zinc was shown not to alter ZFT expression, but a cation chelator (TPEN) did lead to decreased expression. While TPEN is often used to reduce zinc levels, does it have any effect on iron levels? Could the reduction in ZFT after TPEN treatment be due to a reduction in the level of iron or another cation?

We thank the reviewer for this comment. We agree that TPEN is a fairly unspecific cation chelator, so to determine whether its effects are due to removal of zinc or of other cations, we treated with TPEN and either zinc or iron. Co-incubation of TPEN and zinc prevented ZFT depletion, while TPEN+FAC had no effect compared to TPEN alone (new Figure 6h and i), strongly suggesting the effects on ZFT abundance are linked to zinc rather than iron.

      (2) ZFT expression was found to be dynamic depending on the size of the vacuole, based on mean fluorescence intensity measurements. Looking at protein levels by Western blot at different times during infection would strengthen this finding. 

We show here that ZFT expression is highly dynamic, depending on both the iron status of the host cell and the number of parasites per vacuole. However, validating this finding by western blot would be complex due to the highly unsynchronised nature of parasite replication and the large number (5x10<sup>6</sup> - 1x10<sup>7</sup> cells) of parasites required to visualise ZFT. Further, we show that ZFT is apparently internalised prior to degradation. For these reasons, we have not attempted to validate this finding by western blotting at this time.

      (3) ZFT localization remained at the parasite periphery under low iron conditions. However, in the images shown in Figure S1c, larger vacuoles (containing 4-8 parasites) are shown for the untreated conditions, and single parasite-containing vacuoles are shown for the low iron condition. As ZFT localization is predominantly at the basal end of the parasite in larger PV and at the parasite periphery for smaller vacuoles, it would be better to compare vacuoles of similar size between the untreated and low-iron conditions.

The reviewer brings up a good point: the concentration of iron chelator that we used here does not enable parasite replication, making an assessment of changes in localisation challenging. To address this, we have added new data using a much lower concentration of chelator (20 mM), which is still expected to impact the parasites (Hanna et al., 2025) but allows for replication. In this low-iron environment, ZFT localisation remained significantly more peripheral (Fig. S1d,e), supporting our hypothesis that ZFT localisation is iron-dependent, independent of vacuolar stage.

      Reviewer #3 (Public review): 

      Summary:

      Aghabi et al set out to characterize a T. gondii transmembrane protein with a ZIP domain, termed ZFT. The authors investigate the consequences of ZFT downregulation and overexpression for parasite fitness. Downregulation of ZFT causes defects in the parasite's endosymbiotic organelles, the apicoplast and the mitochondrion. Specifically, lack of ZFT causes a decrease in mitochondrial respiration, consistent with its role as an iron transporter. This impact on the mitochondria appears to trigger partial differentiation to bradyzoites. The authors furthermore demonstrate that expression of TgZFT can rescue a yeast mutant lacking its zinc transporter and perform an array of direct metal ion measurements, including X-ray fluorescence microscopy and inductively coupled mass spectrometry (ICP-MS). These reveal reduced metal ions in parasites depleted in ZFT. Overall, the data by Aghabi et al. reveal that ZFT is a major metal ion transporter in T. gondii, importing iron and zinc for diverse essential processes. 

      Strengths:

      This study's strength lies in the thorough characterization of the transporter. The authors combine a number of techniques to measure the impact of ZFT depletion, ranging from the direct measurement of metal ions to determining the consequences for the parasite's metabolism (mitochondrial respiration), as well as performing a yeast mutant complementation. This work is very thorough and clearly presented, leaving little doubt about this protein's function. 

      Weaknesses:

This study offers no major novel insights into the biology of T. gondii. The transporter was already annotated as a zinc transporter (ToxoDB), was deemed essential (PMID: 27594426), and localized to the plasma membrane (PMID: 33053376). This study mostly confirms and validates these previous datasets. The authors identify three other proteins with a ZIP domain. Particularly, the role of TGME49_225530 is intriguing, as it is likely fitness-conferring (score: -2.8, PMID: 27594426) and has no subcellular localization assigned. Characterizing this protein as well, revealing its localization, and identifying if and how these transporters coordinate metal ion transport would have been worthwhile. 

We agree that the work presented here validates the previous datasets, and if that were all we had done, we agree that the biological insights would be limited. However, we have gone significantly beyond the predictions, demonstrating dynamic localisation changes, iron-mediated regulation, and the lack of substrate-based complementation, and validating transport activity of both zinc and iron. Although in silico predictions and screens can be informative, it remains important to validate biological functions experimentally. While we agree that characterisation of TGME49_225530 (as well as the other two annotated ZIP proteins) would be interesting, and will certainly form part of our future plans, it is significantly beyond the scope of the presented manuscript.

      Another weakness is the data related to the impact of ZFT downregulation on the apicoplast in Figure 4. The authors show that downregulation of ZFT causes an increase in elongated apicoplasts (Figure 4d). The subsequent panels seem to show that the parasites present a dramatic growth defect at that time point. This growth arrest can directly explain the elongated apicoplast, but does not allow any conclusion about an impact on the organelle. In any case, an assessment of 'delayed death' as presented in Figure 4c seems futile, since the many other processes affected by zinc and iron depletion likely cause a rapid death, masking any potential delayed death.

We agree that, given the importance of iron and zinc to the parasite, we cannot differentiate death due to apicoplast defects from death from other causes, and we have modified the discussion to reflect this, as below.

      “However, given the delayed phenotype typically seen upon apicoplast disruption, we cannot determine if this is a direct effect of ZFT, or a downstream consequence of metal depletion”

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      Specific Comments: 

      (1) The background on the typical sequence features that would identify Toxoplasma ZIP homologues should be expanded and clarified. While these proteins are likely quite divergent and may lack many conserved features, the manuscript currently does not provide enough detail to assess how similar (or different) TgZIPs are from well-characterized family members. Additionally, the justification for focusing on TGGT1_261720 (ZFT) over TGGT1_225530, as stated in the first paragraph of the results section, seems unclear. There is no predictive data supporting a potential plasma membrane localization for TGGT1_225530 (yet this cannot be excluded), and TGGT1_225530 appears to have more canonical metal-binding motifs. I believe that the fact that only TGGT1_261720 is iron-regulated should be sufficient justification for its selection, and this point could be emphasized more clearly. Furthermore, the discussion mentions a leucine residue that may be associated with broad substrate specificity, but this is not addressed in the initial comparative sequence analysis. These residues and the HK motif are not actually addressed in the Gyimesi et al. reference currently mentioned; thus this could be clarified and updated with references (such as PMID: 31914589) that provide more recent insights into key residues involved in metal selectivity in ZIP transporters.

We thank the reviewer for this comment. To address these points:

      We agree that the iron-mediated regulation is sufficient for our focus on ZFT and have clarified the text to reflect this, as described above.

      We have also updated the references as suggested, our apologies for this oversight.

      We have further expanded the discussion, especially with reference to our new results using heterologous expression in oocytes (please see above).

      (2) Figure 1D, Figure 2A, C, H, Figure 3D, Figure 6F, H, corresponding text and paragraph 2 of the Discussion: It seems that most of the "non-specific bands" annotated in Figure 1D, which are lower molecular weight products, are not present in the parental cell line, suggesting they may not be non-specific after all. These bands also vary depending on the cell line (e.g., promoter used, see Figures 2H and 3D) or experimental conditions (e.g., iron excess or depletion). Given the dynamic localization of ZFT during intracellular development, it may be worth exploring whether these lower molecular weight bands represent degraded forms of TgZFT, possibly corresponding to the basally-clustered signal observed by immunofluorescence, with only the full-length protein associating with the plasma membrane. This possibility should be investigated or at least discussed further.

While the lower bands are not present in the parental line, we do see them in other HA-tagged lines, especially when expression of the tagged protein is low, as seen below (Author response image 1). We do not currently have an explanation for these bands, but we can confirm that they do not change in abundance in parallel with the full-length protein, supporting our hypothesis that they are an artefact of the anti-HA antibody in our system. Although ZFT is clearly degraded (e.g. Fig. 1g), we currently do not believe these bands are ZFT C-terminal degradation products.

      Author response image 1.

      Western blot of ZFT-3HA<sub>zft</sub> and another HA-tagged unrelated cytosolic protein, demonstrating that the lower bands are most likely nonspecific.

      (3) It is unfortunate that ZFT could not complement a yeast iron transporter mutant cell line, as this would have provided a strong argument for ZFT's role in iron transport. The manuscript does not provide much detail about the Δfet2/3 yeast mutant line. Fet3 is the ferroxidase subunit, while Ftr1 is the permease subunit of the high-affinity iron transport complex in yeast. Fet2, however, appears to be Saccharomyces cerevisiae's VPS41 homolog. Therefore, is Δfet2/3 the most appropriate mutant to use, or would another mutant line (e.g., ΔFtr1) be a better choice? Additionally, while Figure 7 suggests a decrease in metal uptake upon ZFT depletion, it would be useful to test whether overexpression of ZFT leads to enhanced metal incorporation, perhaps using a FerroOrange assay. 

      We thank the reviewer for their comments, which we have answered below:

The Δfet2/3 yeast mutant was a typo and has been corrected; our apologies. We did use the Δfet3/4 mutant line, based on previous successful experiments involving plant metal transporters (e.g., DiDonato et al., 2004).

      Unfortunately, we were unable to perform the FerroOrange assay in the overexpression line as this line is endogenously fluorescent in the same channel as FerroOrange.

However, as detailed above, we have now added significant new data in the new Figure 8, confirming our hypothesis that ZFT is an iron/zinc transporter through heterologous expression in Xenopus oocytes. This provides direct evidence of iron transport, and evidence that zinc can inhibit this transport, consistent with our hypothesis.

      (4) The annotation of the blot in Figure 2H suggests that overexpressed ZFT-TY can only be detected in the absence of heat denaturation. However, this is not addressed in the text. Does heat denaturation also affect the detection of ZFT-3HA or the lower molecular weight products? This should be clarified in the manuscript. 

Interestingly, ZFT is detectable after boiling at 95 °C for 5 minutes when expressed at endogenous (or near-endogenous) levels in the ZFT-3HA<sub>sag1</sub> and ZFT-3HA<sub>zft</sub> tagged parasite lines. However, overexpression of ZFT leads to a loss of detection by western blot when boiled, although the protein is detectable without heat denaturation.

A possible explanation is that overexpression may cause ZFT to misfold, making the protein more prone to aggregation following boiling, rendering it insoluble and unable to enter the gel. Moreover, heat aggregation can sometimes mask the epitope tag required for antibody recognition, possibly explaining why ZFT is undetectable when overexpressed and exposed to boiling conditions, as has previously been observed for other transmembrane proteins (e.g. Tsuji, 2020).

We have clarified this in the results section; although we do not have a full explanation, we consider it important to share for others who may be examining the expression of these proteins.

      (5) Figure 3G: It might be helpful to include an uncropped gel profile to allow readers to visualize that the main product does indeed correspond to a potential dimeric form in the native PAGE. 

      This has now been added in Figure S3e, thank you for this suggestion.

      (6) The investigation of the impact of ZFT depletion on the apicoplast could be improved. The authors suggest that ZFT knockdown inhibits apicoplast replication based on a modest increase in elongated organelles, but the term "delayed death" is not appropriate in that case, as it is typically linked to a loss of the organelle. This is not observed here and is also illustrated by the unchanged CPN60 processing profile. So, clearly, there seems to be no strong morphological effect on the apicoplast early on after ZFT depletion. On the other hand, the authors dismiss any impact on TgPDH-E2 lipoylation (which is iron-dependent) based on the fact that the lipoylated form of the protein is still detected by Western blot. However, closer inspection of the blot in Figure 4B suggests that the intensity of the annotated TgPDH-E2 signal is reduced compared to the -ATc condition (although there might be differences in protein loading, as indicated by the control) or even with the mitochondrial 2-oxoglutarate dehydrogenase-E2, whose lipoylation is presumably iron-independent (see PMID: 16778769). This experiment should be repeated, and the results quantified properly in case something was missed, and the duration of depletion conditions perhaps extended further. Of note, it would also be worthwhile to revisit size estimations, as the displayed profiles seem inconsistent with the typical sizes of lipoylated proteins detected with the anti-lipoyl antibody (e.g., ~100 kDa for PDH-E2, ~60 kDa for branched-chain 2-oxo acid dehydrogenase, and ~40 kDa 2-oxoglutarate dehydrogenase).

      We thank the reviewer for this comment. We agree that there is no strong defect on the apicoplast in the first lytic cycle and we have modified the language to remove reference to delayed death, as given the magnitude of changes associated with loss of iron and zinc, we cannot be certain about the role of the apicoplast.

      Based on this suggestion, we have now quantified the levels of lipoylation of PDH-E2, BDCK-E2 and OGDH-E2 and now include this in Figure S4b, c, d. Supporting our other results, we do not see a significant change in PDH-E2 lipolyation upon ZFT knockdown. However, although OGDH-E2 lipoylation is unchanged (Figure S4c) interestingly we do see a significant increase in BDCK-E2 lipoylation (Figure S4d). This process is not expected to be directly iron related, as mitochondrial lipoylation is through scavenging rather than synthesis however, speaks to the larger mitochondrial disruption that we see. We now consider this further in the discussion.

For the sizes, we thank the reviewer for bringing this up; our apologies, this was due to an error in the annotation, which we have now corrected in the figure.

      (7) In the third paragraph of the discussion, the authors mention the inability to complement ZFT loss by adding exogenous metals. One argument is the potential lack of metal access to the parasitophorous vacuole (PV). Although largely unexplored, this point could be expanded further in the discussion, as the issue of metal transport to the parasite involves not only the parasite plasma membrane but also the PV membrane. Additionally, the authors mention the absence of functional redundancy in transporters, but it would be helpful to discuss potential stage-specific or differential expression of other ZIP candidates. Transcriptomic data available on Toxodb.org could provide useful insights into this, and experimental approaches, such as RT-PCR, could be used to assess the expression of these candidates in the absence of ZFT. 

On the issue of metals crossing the PV membrane, we agree that while we do not currently know the mechanisms of metal transport within the infected host cell, we do have experimental confirmation that the concentration and form of the metals that we are using can impact the parasites. We show that metal treatment inhibits parasite growth (e.g. Figure 3k-n, Figure 6a-d), and we can detect the increased metals through our experiments using FerroOrange and FluoZin (Figure 7a, c). In these experiments, parasites were treated intracellularly, and so we can confirm that, regardless of the mechanism, iron and zinc can reach the parasite. While entry of metals across the PV is an intriguing question, it is beyond the scope of the present work, which focuses on the role of the selected transporter.

We agree that a more detailed discussion of the other ZIP transporters is warranted. We have extended this section of the discussion, although for now we cannot determine the role of the other ZIP transporters in Toxoplasma.

      (8) In the discussion, the authors mention that « Inhibition of respiration has previously been linked to bradyzoite conversion ». To strengthen their point, the authors could mention that mitochondrial Fe-S mutants, as well as mutants affecting mitochondrial translation or the mitochondrial electron transport chain, also initiate bradyzoite conversion (PMID: 34793583). This would reinforce the connection between mitochondrial dysfunction and stage conversion. 

      This is an excellent point and we have added this to the discussion as follows:

      “Inhibition of mitochondrial Fe-S biogenesis or mitochondrial respiration have both previously been linked to bradyzoite conversion (Pamukcu et al., 2021; Tomavo and Boothroyd, 1995), however we do not yet know the signalling factors linking iron, zinc or mitochondrial function to bradyzoite differentiation”.

      (9) As a general comment on manuscript formatting, providing page and line numbers would significantly improve the manuscript's readability and allow reviewers to more easily reference specific sections. This would help address the minor issues of typos (e.g., multiple occurrences of "promotor"). I suggest a careful read-through to correct these issues. 

      We thank the reviewer for this comment and in the resubmitted version we have corrected these issues. 

      Reviewer #2 (Recommendations for the authors): 

      (1) In the alignment (Figure 1a), the BPZIP sequence is from which organism (genus, species)? It would be helpful to include this information in the figure legend.

      Apologies for this oversight, this figure and section have been reworked and the species name (Bordetella bronchiseptica) added.

      (2) In reference to Figure 1a, the authors state, "Interestingly, all parasite ZIP-domain proteins examined have a HK motif at the M2 metal binding". I was wondering if by "all" the authors mean Toxoplasma and Plasmodium falciparum (shown in Figure 1a) or did the authors also look at other apicomplexan parasites such as Cryptosporidium or Neospora? Is this a general feature of apicomplexan parasites? 

We looked at this, and the HK motif in the M2 binding site is conserved in Neospora, Cryptosporidium, and even the digenic gregarine Porospora cf. gigantea. However, in the more distantly related Chromera we find a HH motif at the same position. This suggests that the HK motif is present in the Apicomplexa, but not conserved in the free-living Alveolata. Although we cannot currently speculate on the role of this motif, its role in metal import in Apicomplexa deserves future scrutiny. To reflect this finding, we have modified Figure 1a and the text.

      (3) In Figure 1e, to better visualize the ZFT-3HA staining at the basal pole, it would be better to omit the DAPI staining from the merged image. It is difficult to see the ZFT staining in the image of the large vacuole.

      We have removed the DAPI from this image to improve clarity.

      (4) Based on the "delayed-death" phenotype of the apicoplast, it is not surprising that no defects were observed in CPN60 processing or protein lipoylation. Have the authors considered measuring these phenotypes after a further round of growth (as was done for visualizing apicoplast morphology)? 

      We agree that changes in apicoplast function are often only seen in the second round of replication. However, here we wanted to check if ZFT depletion led to immediate changes in function of the organelle, which was not the case. It is highly likely that after the second round, we would see significant defects in the apicoplast function, however given the immediate importance of iron and zinc to many processes within the parasite, we believe that these experiments would be complicated to interpret.

      (5) Depleting ZFT led to a reduction in expression levels for the mitochondrial Fe-S protein SDHB but not for a cytosolic Fe-S protein. Is it expected that less intracellular iron (via depleted ZFT) would differentially affect mitochondrial versus cytosolic Fe-S proteins? 

Previous studies (e.g., Maclean et al., 2024; Renaud et al., 2025) have shown that upon direct inhibition of the cytosolic Fe-S pathway, ABCE1 is fairly stable and its levels can persist for 2-3 days post treatment. However, our recent work has shown that rapid and acute depletion of iron (through treatment with a chelator) can lead to ABCE1 levels decreasing within 24 h (Hanna et al., 2025). In the case of ZFT knockdown, due to the more gradual reduction in iron levels seen (e.g. Figure 7j), we believe the parasites are prioritising key Fe-S pathways (e.g. essential proteostasis through ABCE1), probably while remodelling metabolism (as seen in our Seahorse assays). However, there are many proteins expected to be directly impacted by the iron and zinc restriction that these parasites experience, and different protein classes are expected to behave differently under these conditions.

      Reviewer #3 (Recommendations for the authors): 

      (1) Is the effect on the plaque size between T7S4-ZFT (-aTc) in regular and 'high iron' conditions significant? The authors show convincingly that the plaque size is smaller due to the swapped promoter and the resulting overexpression of ZFT. But is the effect aggravated in high iron? This would be expected if excess iron were the problem.

The plaque sizes are significantly smaller in the T7S4-ZFT line under high iron compared to the untreated condition, and compared to the untreated parental line. However, if we normalise plaque size to untreated conditions for both lines, there is no significant change in plaque size under high iron between the parental and T7S4-ZFT lines. This is possibly due to the concentration of iron used (200 mM), which may not be optimal to see this effect, or the duration of the plaque assays (6-7 days), which may allow the excess iron to be stored by the host cells, changing the effective concentration of parasite exposure.

      (2) I struggle to understand the intracellular growth assay in Figure 5b. Here, T7S4-ZFT parasites show 25 % of vacuoles with more than 8 parasites (labelled 8+). But such large vacuoles are not observed in the parental strain. It appears as if the inducible strain grows faster even though it was earlier shown to have a fitness defect (see Figure 3j). Can you please clarify?

This is a result of the rapid growth of the parental line: some vacuoles in this line lysed and initiated a new round of replication at this time point, while we saw no evidence at any timepoint that ZFT-depleted parasites were able to lyse the host cell. However, the initial (24-48 h post ATc addition) replication rate of the ZFT KD remains similar to the parental line. In this panel, we wanted to emphasise that the major phenotype we see upon ZFT depletion is vacuole disorganisation, which we believe is linked to the start of differentiation into bradyzoites.

      (3) Did the authors perform an IFA in addition to the Western blot to localize the 2nd Ty-tagged ZFT copy? It seems important to validate that the protein correctly localizes to the plasma membrane. 

We have done so and now include these data in Figure S2b. Overexpressed ZFT-Ty localises to internal structures (probably vesicles) with some signal at the periphery; however, this limited peripheral expression is sufficient to mediate the phenotypes that we see.

      (4) First sentence of the abstract and introduction: The authors speak of metabolism and cellular respiration as though they are two different processes. Is respiration not part of metabolism? 

This is an excellent point; we wanted to distinguish mitochondrial respiration from general cellular metabolism, but this was not clear. We have now changed the introduction to read as below:

      “Iron, and other transition metals such as zinc, manganese and copper, are essential nutrients for almost all life, playing vital roles in biological processes such as DNA replication, translation, and metabolic processes including mitochondrial respiration (Teh et al., 2024)”

      (5) 2nd paragraph of the introduction: toxoplasmosis is written capitalized but should be lower case.

      This has been corrected.

      (6) Figure 4j legend: change 'shits parasites to a more quiescent stage' to 'shifts parasites'.

      This has been corrected, our apologies.

      (7) Please correct the following sentence: 'These data demonstrate ZFT depletion leads to the expression of the bradyzoite-specific markers BAG1 and DBL.' DBL is not expressed by the parasite. It is a lectin that binds to the sugars in the cyst wall.

      We have now modified this in the text. The sentence now reads: “These data show that ZFT depletion leads to the expression of the bradyzoite marker BAG1 and the production of the cyst wall, as detected by DBL”.

      (8) In the section on yeast complementation with TgZFT, the authors write: 'Based on this success, we also attempted to complement...'. Please consider changing 'Success' to something more neutral.

      We have modified the text to now read: “Based on these results, we also attempted to complement”…

      (9) In the discussion, the authors write: 'We see a delayed phenotype on the apicoplast, suggesting that metal import is also required in this organelle, although no apicoplast metal transporters have yet been identified.' Please consider the study Plasmodium falciparum ZIP1 Is a Zinc-Selective Transporter with Stage-Dependent Targeting to the Apicoplast and Plasma Membrane in Erythrocytic Parasites (PMID: (38163252).

      We thank the reviewer for the note and have modified the text to include this and the reference. Please see below:

      “Iron is known to be required in the apicoplast (Renaud et al., 2022), zinc also may be required, as the fitness-conferring Plasmodium zinc transporter ZIP1 is transiently localised to the apicoplast (Shrivastava et al., 2024), although the functional relevance of this localisation has not yet been established”.

      (10) The authors write: 'Iron is known to be required in the apicoplast (Renaud et al., 2022), although a potential role for zinc in this organelle has not yet been established.' The role for zinc in the apicoplast may not have been shown formally, but surely among its hundreds of proteins, and those involved in replication and transcription, there are some that depend on zinc...?

Yes, we agree it would make sense; however, multiple searches using ToxoDB and the datasets from Chen et al. (2025) were unable to find any apicoplast-localised proteins with zinc-binding domains. We cannot exclude that zinc is present in the apicoplast, and the results from Plasmodium (Shrivastava et al., 2024) may suggest that it is; however, we currently have no evidence for its role within this organelle.

      References

      DiDonato, R.J., Roberts, L.A., Sanderson, T., Eisley, R.B., Walker, E.L., 2004. Arabidopsis Yellow Stripe-Like2 (YSL2): a metal-regulated gene encoding a plasma membrane transporter of nicotianamine-metal complexes. Plant J 39, 403–414. https://doi.org/10.1111/j.1365-313X.2004.02128.x

      Hanna, J.C., Shikha, S., Sloan, M.A., Harding, C.R., 2025. Global translational and metabolic remodelling during iron deprivation in Toxoplasma gondii. https://doi.org/10.1101/2025.08.11.669662

      Maclean, A.E., Sloan, M.A., Renaud, E.A., Argyle, B.E., Lewis, W.H., Ovciarikova, J., Demolombe, V., Waller, R.F., Besteiro, S., Sheiner, L., 2024. The Toxoplasma gondii mitochondrial transporter ABCB7L is essential for the biogenesis of cytosolic and nuclear iron-sulfur cluster proteins and cytosolic translation. mBio 15, e00872-24. https://doi.org/10.1128/mbio.00872-24

      Pamukcu, S., Cerutti, A., Bordat, Y., Hem, S., Rofidal, V., Besteiro, S., 2021. Differential contribution of two organelles of endosymbiotic origin to iron-sulfur cluster synthesis and overall fitness in Toxoplasma. PLoS Pathog 17, e1010096. https://doi.org/10.1371/journal.ppat.1010096

      Renaud, E.A., Maupin, A.J.M., Berry, L., Bals, J., Bordat, Y., Demolombe, V., Rofidal, V., Vignols, F., Besteiro, S., 2025. The HCF101 protein is an important component of the cytosolic iron–sulfur synthesis pathway in Toxoplasma gondii. PLoS Biol 23, e3003028. https://doi.org/10.1371/journal.pbio.3003028

      Shrivastava, D., Jha, A., Kabrambam, R., Vishwakarma, J., Mitra, K., Ramachandran, R., Habib, S., 2024. Plasmodium falciparum ZIP1 Is a Zinc-Selective Transporter with Stage-Dependent Targeting to the Apicoplast and Plasma Membrane in Erythrocytic Parasites. ACS Infect. Dis. 10, 155–169. https://doi.org/10.1021/acsinfecdis.3c00426

      Teh, M.R., Armitage, A.E., Drakesmith, H., 2024. Why cells need iron: a compendium of iron utilisation. Trends in Endocrinology & Metabolism 35, 1026–1049. https://doi.org/10.1016/j.tem.2024.04.015

      Tomavo, S., Boothroyd, J.C., 1995. Interconnection between organellar functions, development and drug resistance in the protozoan parasite, Toxoplasma gondii. International Journal for Parasitology 25, 1293–1299. https://doi.org/10.1016/0020-7519(95)00066-B

    1. Reviewer #3 (Public review):

      Summary:

      The paper from Hall et al. reports the effects of an altered function spx allele on the physiology of S. aureus. Since Spx is essential in this organism, the authors compare WT with a spx C10A allele that retains Spx functions that are independent of the formation of a C10-C13 disulfide. However, the major role of Spx in maintaining disulfide homeostasis in this organism appears to be reduced by this mutation, including a reduction (relative to WT) in the DIA-induction of thioredoxin, thioredoxin reductase, and BSH biosynthesis and reduction enzymes.

      Strengths:

      Based on a wide range of studies, the authors develop a model in which Spx is required for adaptation to disulfide stress, and this adaptation involves (in part) induction of both cystine/Cys uptake and the Fur regulon. Overall, the results are compelling, but further efforts to clarify the presentation will aid readers in being able to follow this very complicated story.

      Weaknesses:

      (1) More details are needed on how relative growth is defined and calculated (e.g., line 145 and Figure 1C). The raw data (growth curves) should be included when reporting relative growth so that readers can see what changed (lag, growth rate, final OD?). Later in the paper, the authors refer to "the diamide-induced growth delay of the spxC10A mutant" (line 379), but this is not apparent from the presented data.

      (2) Are the spx C10A, spx C13A, and spx C10A,C13A all really equivalent? In all cases, the Spx protein is presumably made (as confirmed for C10A in panel 1D). However, the only evidence to suggest that they are equivalent is the similar growth effects in panel 1C, and (as noted above), this data presentation can mask differences in how the mutations affect protein activity.

      (3) Figure 1D and Figure 1 Supplement 2 report results related to the effect of diamide treatment on protein half-life (t1/2). Only single results are shown for both panels, and the conclusions do not seem to be statistically robust. For example, Figure 1, Supplement 2 concludes that Spx C10A has a t1/2 of 3.38 min (this should be labeled correctly in the figure legend as the red line) and WT Spx of 8.69 min. However, Figure 1D suggests that the protein levels at time 0 may not be equivalent, and this is lost in the data processing. Indeed, there are significant differences in Spx levels between time 0 - and + DIA, which is curious. Further, the authors' conclusion relies very heavily on line-fitting that includes a final point with very low signal intensity (as judged from Figure 1D), which is therefore likely the least reliable of all the data. It might be worth showing curve fitting for multiple gels. Regardless of the overfitting of the data, the general conclusion that Spx is partially stabilized against proteolysis by ClpXP, and that the C10A mutant is reduced in stabilization, is probably correct.

      (4) Figure 2 concludes that despite differences in the mRNA profiles between WT and spx C10A after 15 min. of DIA treatment, the overall level of responsiveness of the bacillithiol pool is unchanged. The authors find it "surprising" that the BSH pool responds normally despite some differences in gene expression. This is not surprising. The major events visualized in panel 2D are the chemical oxidation of BSH to BSSB and, presumably, the re-reduction by Bdr(YpdA). While it is seen that BSH synthesis (bshC) and ypdA expression may be less induced by DIA in the C10A mutant (2C), there is no evidence that the basal levels are different prior to stress. Therefore, the chemical oxidation and enzymatic re-reduction might be expected to occur at similar rates, as observed.

      (5) Line 215. For the reason stated above, there is no reason to invoke Cys uptake as needed for the reduction of BSSB. Further, since CySS (presumably an abbreviation for cystine) is imported, this itself can contribute to disulfide stress.

      (6) Line 235. Following on the above point, "diamide-induced disulfide stress increased L-CySS uptake in the spxC10A mutant to re-establish the BSH redox equilibrium." This is counterintuitive since L-CySS is itself a disulfide and is thought to be reduced to 2 L-Cys in cells by BSH (leading to an increase in BSSB, not a reduction). Is there a known cystine reductase? Could cystine or L-Cys be affecting gene regulation (e.g., through CymR or Spx)? Cystine can also lead to mixed disulfide formation (e.g., could it modify Spx on C13?).

      (7) l. 247 "a functional Spx redox switch allows S. aureus to avoid this trade-off and maintain thiol homeostasis without excessive L-CySS uptake." Can the authors expand on how this is thought to work? Does Spx normally affect cystine uptake? I thought this was CymR? I am not following the logic here.

      (8) Line 258. "The fur mutant, which is known to accumulate iron...". My understanding is that fur mutant strains typically have higher bioavailable (free) Fe pools. This is seen in E. coli, for example, using EPR methods. However, they also often have lower total Fe due to the iron-sparing response, which represses the expression of abundant, Fe-rich proteins. Please provide a reference that supports this statement that in S. aureus fur mutants have higher total iron per cell.

      (9) Figure 4. For the reasons stated above (point 1), it is hard to interpret data presented only as "Rel. Growth". Perhaps growth curve data could be included in a supplement.

      (10) The interpretation of Figure 4 is complicated. It is not clear that there is necessarily a change in bioavailable Fe pools, although it does seem clear that Fe homeostasis is perturbed. It has been shown that one strong effect of DIA on B. subtilis physiology is to oxidize the BSH pool to BSSB (as shown also here), and this leads to a mobilization of Zn (buffered by BSH). Elevated Zn pools can inactivate some Fe(II)-dependent enzymes, which could account for the rescue by Fe(II) supplementation. Zn(II) can also dysregulate PerR and likely Fur regulons.

    1. Reviewer #1 (Public review):

      Summary:

      This study uses a novel DNA origami nanospring to measure the stall force and other mechanical parameters of the kinesin-3 family member, KIF1A, using light microscopy. The key is to use SNAP tags to tether a defined nanospring between a motor-dead mutant of KIF5B and the KIF1A to be integrated. The mutant KIF5B binds tightly to a subunit of the microtubule without stepping, thus creating resistance to the processive advancement of the active KIF1A. The nanospring is conjugated with 124 Cy3 dyes, which allows it to be imaged by fluorescence microscopy. Acoustic force spectroscopy was used to measure the relationship between the extension of the NS and force as a calibration. Two different fitting methods are described to measure the length of the extension of the NS from its initial diffraction-limited spot. By measuring the extension of the NS during an experiment, the authors can determine the stall force. The attachment duration of the active motor is measured from the suppression of lateral movement that occurs when the KIF1A is attached and moving. There are numerous advantages of this technology for the study of single molecules of kinesin over previous studies using optical tweezers. First, it can be done using simple fluorescence microscopy and does not require the level of sophistication and expense needed to construct an optical tweezer apparatus. Second, the force that is experienced by the moving KIF1A is parallel to the plane of the microtubule. This regime can be achieved using a dual beam optical tweezer set-up, but in the more commonly used single-beam set-up, much of the force experienced by the kinesin is perpendicular to the microtubule. Recent studies have shown markedly different mechanical behaviors of kinesin when interrogated by the two different optical tweezer configurations. The data in the current manuscript are consistent with those obtained using the dual-beam optical tweezer set-up. 
In addition, the authors study the mechanical behavior of several mutants of KIF1A that are associated with KIF1A-associated neurological disorder (KAND).

      Strengths:

      The technique should be cheaper and less technically challenging than optical tweezer microscopy to measure the mechanical parameters of molecular motors. The method is described in sufficient detail to allow its use in other labs. It should have a higher throughput than other methods.

      Weaknesses:

      The experimenter does not get a "real-time" view of the data as it is collected, which you get from the screen of an optical tweezer set-up. Rather, you have to put the data through the fitting routines to determine the length of the nanospring in order to generate the graphs of extension (force) vs time. No attempts were made to analyze the periods where the motor is actually moving to determine step-size or force-velocity relationships.

      Comments on revisions:

      I am satisfied with the revision made by the authors in response to my first round of criticisms.

    2. Reviewer #2 (Public review):

      Summary:

      This work is important in my view because it complements other single-molecule mechanics approaches, in particular optical trapping, which inevitably exerts off-axis loads. The nanospring method has its own weaknesses (individual steps cannot be seen), but it brings new clarity to our picture of KIF1A and will influence future thinking on the kinesins-3 and on kinesins in general.

      Strengths:

      By tethering single copies of the kinesin-3 dimer under test via a DNA nanospring to a strong binding mutant dimer of kinesin-1, the forces developed and experienced by the motor are constrained into a single axis, parallel to the microtubule axis. The method is imaging-based which should improve accessibility. In principle, at least, several single-motor molecules can be simultaneously tested. The arrangement ensures that only single molecules can contribute. Controls establish that the DNA nanospring is not itself interacting appreciably with the microtubule. Forces are convincingly calibrated and reading the length of the nanospring by fitting to the oblate fluorescent spot is carefully validated. The excursions of the wild type KIF1A leucine zipper-stabilised dimer are compared with those of neuropathic KIF1A mutants. These mutants can walk to a stall plateau, but the force is much reduced. The forces from mutant/WT heterodimers are also reduced.

      Weaknesses:

      The tethered nanospring method has some weaknesses; it only allows the stall force to be measured in the case that a stall plateau is achieved, and the thermal noise means that individual steps are not apparent. The nanospring does not behave like a Hookean spring - instead linearly increasing force is reported by exponentially smaller extensions of the nanospring under tension. The estimated stall force for Kif1A (3.8 pN) is in line with measurements made using 3 bead optical trapping, but those earlier measurements were not of a stall plateau, but rather of limiting termination (detachment) force, without a stall plateau.

      Comments on revisions:

      The authors have successfully addressed my previous criticisms.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      We thank Reviewer #1 for the careful reading of our manuscript and for the constructive comments. We have provided responses to each of the comments below.

      We greatly appreciate Reviewer #1’s accurate public review of our study on the kinesin motor using the DNA origami nanospring (NS). With respect to the strengths, we fully agree with Reviewer #1’s comments. Regarding the weakness, we would like to respond as follows.

      It is true that, unlike optical tweezers, our method does not provide real-time data display. Optical tweezers enable real-time observation and manipulation of kinesin molecules at arbitrary time points. Achieving real-time observation and manipulation is indeed an important challenge for the future development of the NS technique. On the other hand, Iwaki et al. (our co-corresponding author) has already investigated dynamic properties of motor proteins under load, such as step size and force–velocity relationship of myosin VI using NS. We are now preparing high spatiotemporal resolution microscopy experiments on the KIF1A system to measure its step size and force–velocity relationship, which inherently require such resolution.

      Reviewer #2 Public Review

      We appreciate the constructive comments of Reviewer #2, which have strengthened both the presentation and interpretation of our results.

      We would like to thank Reviewer #2 for providing a highly accurate assessment of the strengths of our experiments. Regarding the weaknesses, we would like to respond as follows. First, Iwaki et al. (our co-corresponding author) have already succeeded in observing the stepping motion of myosin VI using the nanospring (NS) in their previous work. We are also currently preparing high spatiotemporal resolution microscopy experiments to observe the stepping motion of KIF1A in our system. Second, while it is true that the NS does not follow Hooke’s law, it is possible to design and construct NSs with an appropriate dynamic range by tuning the spring constant to match the forces exerted by protein molecules. Finally, we agree that our first observation of the stall plateau in KIF1A using the NS is a meaningful achievement. However, with respect to the suggestion that “increasing validity requires also studying kinesin-1,” we have a somewhat different perspective. The validity of the NS method has already been thoroughly examined in the previous work on myosin VI by Iwaki et al., where results were compared with those obtained using optical tweezers. Moreover, the focus of this manuscript is on KAND caused by KIF1A mutations. From this perspective, although we appreciate the suggestion, we consider it important to keep the present study focused on KIF1A and its implications for KAND.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) The authors detect the attachments that occur during a processive run by KIF1A by monitoring the suppression of the angular fluctuations of the fluorescent signal and plot this, for example, in Figure 3a as the Length of the NS (which presumably is a readout of force) vs time. This interval includes the time when the KIF1A is actively moving along the MT and when it is stalled. It would be interesting to know the actual stall time of the motor in order to be able to calculate a detachment rate constant. For attachment periods such as the first example highlighted in pink in Figure 3a, the stall time is pretty much equal to the attachment time since the motor is moving so fast and the stall period is so long. However, for short attachment times such as the fifth pink interval shown in this same figure or the traces with the mutant KIF1As in Figure 4 this is not so. Can the authors institute a program to identify the periods where the motor has stretched the NS spring to the point where it stalls, and then calculate this time in order to do an exponential fit to the "dwell time distribution"?

      By introducing another criterion (see Methods, “Rate of relative increase in NS’s length”), the attachment duration was separated into the two time regions noted by the reviewer. After reanalyzing all the data, we evaluated only the stall duration this time. As a result, the estimated stall-force values became more reliable and accurate. The dwell-time analysis was performed for WT KIF1A, for which sufficient data were available, and is included in the supplementary material.

      (2) The histogram of stall events in Figure 3b is quite broad. Please discuss.

      The newly added distributions from individual molecules (Fig. 3b) show that the variety in the stall force distribution is not due to multiple molecules, but is primarily an intrinsic property of single KIF1A molecules reflecting the complex kinetics of KIF1A under load, including occasional backward steps and reattachments. In addition, because the nanospring is a non-linear spring, a disadvantage is that even small fluctuations in extension can result in a substantial deviation in the measured stall force. These points have been added to the Discussion section.

      (3) Figure 3c, it is clear that for attachment times greater than 5s the attachment duration is independent of the Lstall, but this is not so clear for the short attachment durations. Some of this may relate to the fact that you're measuring attachment durations and not stall or dwell times as described in my first comment. Do you feel this is due to less precision in measuring the "attachment duration" during the short attachments, or just simply that more data is needed here? I assume that you do not want to imply that there is a load-dependence of the attachment durations here? Perhaps an expanded view of the data set from 0-10 seconds would clarify. 

      As described in our response to comment (1), the stall durations were separated from the attachment durations. This improved the measurement accuracy and revealed that the stall duration and Lstall are uncorrelated (Fig. 3c). We appreciate this constructive comment.

      Reviewer #2 (Recommendations for the authors):

      (1) Off-axis forces are described as 'upward', 'perpendicular', and 'horizontal'. Consider referring to off-axis force, and if necessary, defining the direction of the force(s) relative to the axis of the immobilised MT. If necessary, a cartoon of XYZ axes might be added to F1c? 

      An XZ axis was added to the schematic in Fig. 1c.

      (2) If I understand correctly, stall forces are calculated by averaging the entire region in which the angular fluctuation is reduced below a threshold. In cases like the 3rd and 7th events on the trace in F1a, this will reduce the average. Perhaps consider separately averaging the later time points in each stall event? Perhaps also consider correlating the angular fluctuation signals and the spring length signal? Some fluctuations during stall plateaus might indicate slip back and re-engage events? 

      Instead of separately averaging the later time points in each stall event, we separated the stall force duration from the overall attachment duration (Fig. 3). This allowed us to obtain more accurate stall force values. The relationship between the NS length and the angular fluctuation during KIF1A slip-back events differed among individual stall events, and no clear trend was observed. Two representative examples are shown in the Author response image 1.

      Author response image 1.

      (3) Please describe all relevant methods fully instead of referencing previous work. For example, nanospring preparation refers readers to reference 21 (which in turn references an earlier paper).

      We revised the Methods section to include the procedures described in the previous reference, and we added the sequence information of the DNA origami to the supplementary information.

      (4) Were any experiments tried at reduced ATP concentration?

      (5) Were any data obtained from WT KIF5B? For kinesin-1, stall plateau forces of >7 pN are obtained.

      This study focused on comparing the stall forces of wild-type and KAND-related mutant KIF1A molecules under physiological ATP conditions, as our main goal was to characterize the disease-relevant phenotypes. Experiments at reduced ATP concentrations and with WT KIF5B are indeed important future directions but are beyond the scope of the present study. These follow-up experiments are currently in progress.

      (6) In Figure 1b, consider showing the attachment to the mutant KIF5B, and reversing the orientation so it corresponds to Figure 1c.

      KIF1A and KIF5B share the same binding method, so to indicate that the schematic in Fig. 1b represents both, we replaced ‘KIF1A’ with ‘Kinesin’.

      (7) In Figure 3d, add force axis. In general, please re-check all force axes. In Supplement S3, the stall plateau labels appear well above their corresponding axis ticks. In Figure 4, several mutants appear to be stalling at well over 5 pN, yet Table 1 gives a much lower value. Presumably, this reflects averaging effects?

      We added the force axis to Fig. 3d. Besides, we corrected Fig. S3 and Fig. 4 because there were errors in the conversion from length to force. As the reviewer pointed out, the apparent discrepancy between the force values in Fig. 4 and Table 1 arises mainly from averaging effects.

    1. Such trends, which philosophers call ‘pernicious ignorance’, enable us to overlook inconvenient bits of data to make our utility calculus easier or more likely to turn out in favor of a preferred course of action.

      So-called "pernicious ignorance" is not simply a lack of knowledge, but rather a selective blindness reinforced by social reward structures. For example, when posting photos of volunteer trips abroad, we are more inclined to consider the attention, fundraising, and personal image enhancement they bring, while ignoring the consent rights of those being photographed, the risks of long-term stigmatization, and the harm caused by power imbalances. This makes actions that "appear beneficial" seem morally easier and more readily justifiable.

    2. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but will also be more realistic.

      This statement highlights a frequently overlooked tension: the "accuracy" of moral judgments often comes at the cost of complexity. The mechanisms of social media (instant feedback, likes/shares) encourage us to pursue "quick and certain" conclusions, leading to a natural tendency to make decisions based on simplified data. The result is not simply miscalculation, but the systematic exclusion of inconvenient consequences.

    1. inspector’s meaty face scrunchedinto a scowl, but his eyes were kind

      In this book, there is another POV: that of an inspector. They feel sympathy for the plight of the immigrants, but they are just "doing their job". They don't want to turn people away, but sometimes they have to.

    2. They’d be detained atEllis Island until they were to take the next ship home.

      All that time and work, only to be sent back home. Back to poverty, famine, and political instability. It is unknown which region of Italy the man was from, as even the sisters couldn't understand his dialect. But it is clear that he was escaping something, as all passengers on these boats were.

    3. Lui è pazzo

      "He's crazy". But really, the reality of the situation was that he was desperate. He had left behind everything and everyone he had known, and now was denied entry into the United States. Denied entry into what he believed would be a better future for himself, and most likely his family. Very frequently men would make the journey to New York City, work, and then, once they had enough money, send for their wives, children, and then extended family. His denial meant not only that his future had ended, but that of his family as well.

    1. Play in scientific research is seldom discussed in print. Perhaps we scientists take it for granted. Or maybe we are a little self-conscious and try to hide it from others. After all, we don’t want taxpayers to think they are subsidizing adults who are acting like a bunch of kids, thereby squandering hefty amounts of public money. Play in science is thus an elusive and difficult topic. (Laszlo 2004, 389) There is a great difference between how science and artefacts are presented to the ‘outer world,’ and how they are actually produced in workplaces such as laboratories. This is important to acknowledge if we want to open new potential for citizen science games. Play is crucial to processes that precede the formation of scientific artefacts, but we, as ‘normal’ citizens, are not supposed to see this. Playing is an intrinsic part of scientific knowledge production, yet it is mostly covered up, or ‘black-boxed’ for outsiders, for the reasons sketched above

      It stems from the Shirky principle: self-preservation, from having to be employed to survive.


    1. increase options in order to rebuild sovereignty – sovereignty that was once grounded in rules, but will increasingly be anchored in the ability to withstand pressure.This room knows this is classic risk management. Risk management comes at a price, but that cost of strategic autonomy, of sovereignty can also be shared.

      very much this. sovereignty anchored in withstanding pressure, and the effort shared in networks of likeminded parties. This is networked agency for nations!

    2. But more recently, great powers have begun using economic integration as weapons, tariffs as leverage, financial infrastructure as coercion, supply chains as vulnerabilities to be exploited.

      USA and China foremost, Russia (hybrid)

    3. We knew the story of the international rules-based order was partially false that the strongest would exempt themselves when convenient, that trade rules were enforced asymmetrically. And we knew that international law applied with varying rigour depending on the identity of the accused or the victim.This fiction was useful, and American hegemony, in particular, helped provide public goods, open sea lanes, a stable financial system, collective security and support for frameworks for resolving disputes.So, we placed the sign in the window. We participated in the rituals, and we largely avoided calling out the gaps between rhetoric and reality.

      comparison of the international rules-based order with Havel's greengrocer as a known fiction that yielded results (ofc the global south knew this early, but for us it worked)

    1. the ATS, initially opened up five different trades to women: clerk, cook,storewoman, driver and orderly, all of which, with the possible exception ofdriver, reflected the kinds of paid work that women had been employed inbefore the war.

      Women experienced a greater role in employment, but the roles they fulfilled weren't all that different from what they had done prior to the war.

    2. Married women without young children and with husbands absent on warwork or military service could be conscripted, but only into part-time, localwork. The maintenance of the domestic home thus superseded the needs ofproduction, demonstrating the importance attributed to retaining linksbetween femininity and domesticity in wartime, even when this linkthreatened the efficiency of the war effort

      Was this the same in Germany? Initially seeking to maintain traditional gender roles, placing emphasis on a woman's important work in the home and as a mother, the experience of women on the home front in Britain was interesting: the push and pull of mobilisation often led to a conflicted experience.

    3. women would not be subject to conscriptionand direction, but instead the State would rely upon their patriotism andsense of duty as a means of ensuring that they volunteered their labour.

      Was this the same in Germany?


    1. for Loops

       for loops let us perform an action or a set of actions for all of the items in a list. So, if we wanted to go through all the users that liked our tweet and display a message for each one, we could do this:

       for user in users_who_liked_our_post:
           display("Yay! " + user + " liked our post!")

       'Yay! @pretend_user_1 liked our post!'
       'Yay! @pretend_user_2 liked our post!'
       'Yay! @pretend_user_3 liked our post!'

      The introduction of loops is a pretty important fundamental to programming. I don't usually use Python, but the syntax is very easy to remember and simple to understand. These explanations are also very straightforward.
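      The loop above can be tried as a self-contained snippet; the list contents here and the use of the built-in print in place of the page's display() are assumptions for illustration:

      ```python
      # Hypothetical list of usernames, standing in for users_who_liked_our_post.
      users_who_liked_our_post = ["@pretend_user_1", "@pretend_user_2", "@pretend_user_3"]

      # Build one message per user; print stands in for the page's display().
      messages = []
      for user in users_who_liked_our_post:
          message = "Yay! " + user + " liked our post!"
          messages.append(message)
          print(message)
      ```

      The body of the loop runs once per item, with user bound to each list element in turn.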

    1. La Cour d'École : Enjeux pour le Bien-être des Élèves

      Résumé Exécutif

      Ce document de synthèse analyse les enjeux fondamentaux liés à l'aménagement des cours de récréation, en se basant sur les expertises croisées d'Annie Sbir, spécialiste en éducation physique et sportive, et de Charlotte Vanesburg, architecte-urbaniste impliquée dans le projet des "Cours Oasis".

      La cour d'école, loin d'être un simple espace de défoulement, est un lieu essentiel au développement de l'autonomie, à l'apprentissage du vivre-ensemble et à la gestion des conflits.

      Le constat est que le modèle traditionnel français – une surface de bitume vide et centrée sur un terrain de sport – génère du stress, des conflits et renforce les stéréotypes de genre en marginalisant les activités calmes et mixtes.

      Les stratégies de réaménagement proposées visent à transformer cet espace en un écosystème riche et diversifié.

      Cela passe par la multiplication des types d'espaces (dynamiques, calmes, de repli), la végétalisation pour réduire le bruit et la chaleur (projet "Cours Oasis"), et l'introduction de matériaux variés (copeaux de bois, sable, etc.).

      Une telle transformation encourage une prise de risque mesurée, essentielle à la construction de la confiance en soi, et permet de briser la monopolisation de l'espace par des jeux uniques comme le football.

      La réussite de ce projet repose sur une démarche collective, impliquant les élèves, les enseignants et l'ensemble du personnel de l'école dans un processus de diagnostic et de conception, faisant de la cour un levier puissant pour améliorer le climat scolaire global.

      --------------------------------------------------------------------------------

      1. Le Constat : La Cour d'École Traditionnelle et ses Limites

      La cour de récréation classique en France est souvent un espace négligé sur le plan éducatif, réduit à un "carré de bitume" dont la fonction première est d'assurer la sécurité et la surveillance. Cette conception minimaliste engendre plusieurs problématiques majeures.

      La Monotonie des Aménagements

      La différence la plus frappante entre les cours de maternelle et celles des niveaux supérieurs (primaire, collège) est la disparition quasi totale des structures de jeu.

      Absence de Jeux : À partir du CP, les jeux fixes disparaissent, remplacés majoritairement par des équipements sportifs basiques (buts de football, paniers de basket) et des bancs. Charlotte Vanesburg souligne ironiquement : "à partir du CP c'est tout le monde le sait on ne joue plus, on a plus besoin de jouer donc on ne met plus de jeu dans une cour de récréation."

      Uniformité des Sols : Que ce soit dans les grandes villes ou à la campagne, la majorité des cours sont asphaltées. Même dans les zones rurales où l'espace est plus grand, la cour elle-même reste un carré de bitume.

      The Reproduction of Social and Gender Stereotypes

      The schoolyard is described as a "social microcosm", the first one children encounter. An unplanned yard reproduces and amplifies existing social patterns, notably gender stereotypes.

      Spatial Domination: The central space is massively occupied by ball games, mainly football, played mostly by boys.

      Marginalization: The other pupils, girls in particular, are relegated to the edges and to the "little corners they were graciously left".

      Their activities are often reduced to talking or to static games. Some even end up taking refuge in the toilets to avoid the balls.

      Crystallization of Roles: From preschool onward, dominant and dominated roles take hold.

      Claire Simon's documentary film (1994) is cited as a "terrifying" illustration of this dynamic, showing verbal violence and behaviors that crystallize very early.

      A Source of Stress and Conflict

      An environment poor in stimulation and facilities becomes a source of stress and tension for pupils and adults alike.

      Unchanneled Need for Movement: Annie Sbir insists on children's "imperative need for movement", often constrained in the classroom.

      An empty yard offers nothing to channel this energy. The body then becomes the main medium of play, leading to shoving and roughhousing. "The body has to move and express itself, and if I am not offered ways to invest my energy, I won't necessarily invest it appropriately."

      Noise: Constant noise and shouting are a major source of stress. A yard fitted with plants and sound-absorbing materials can cut noise peaks in half.

      Insecurity: Balls constantly flying in every direction create stress for the children not taking part in the game, forcing them to look for refuge zones.

      2. Toward Reinventing the Schoolyard: Principles and Strategies

      The transformation of schoolyards rests on the idea that the design of the space can meet children's multiple needs (physical, mental, and social, following the WHO definition of health) and thereby improve the school climate.

      Diversifying Spaces and Uses

      The key is to multiply the range of activities and supports so that every child finds a space that suits them.

      Activity Zoning: It is suggested to mark out, even temporarily, areas dedicated to "active games", "moderate games", and "calm games".

      Equipment and Layout: It is crucial to take stock of the available equipment and to enrich it.

      The layout may include varied ground markings (hopscotch, spirals, targets) and wall markings that echo the equipment provided.

      The idea is that what is learned in PE lessons can be reinvested during recess.

      Mapping by the Pupils: An effective awareness-raising tool is to ask pupils to map the yard, positioning themselves and indicating who plays what, and where.

      This exercise, carried out before and after the redesign, makes it possible to visualize and verbalize spatial inequalities.

      The Importance of Movement and Measured Risk-Taking

      The yard must let children move, but also let them learn to manage risk in a safe setting.

      The Right to Make Mistakes: Drawing on the Belgian concept of the "right to a bruise", it is recalled that getting hurt is part of learning.

      Taking risks allows children to grow, build self-confidence, and learn to assess their own abilities.

      Measured Risk vs. Danger: The goal is not to create danger but to offer "measured, reasoned risk-taking".

      This is achieved through facilities that allow climbing, jumping, clearing obstacles, and so on.

      Objective Safety: This risk-taking must be framed by non-negotiable objective safety conditions: soft surfaces (sand, wood chips), equipment meeting safety standards, and an adult nearby.

      The key phrase is to design yards that are "as safe as necessary, but not as safe as possible".

      Supervision and Privacy: Finding the Right Balance

      Children express a strong need for "hiding places", while teachers need to see everything. The two demands can be reconciled.

      Permeable "Hiding Places": Solutions such as woven-willow huts or openwork wooden structures create a sense of enclosure and privacy for the child while remaining visible to the supervising adult.

      Mobile Supervision: A richly equipped yard no longer allows 360° supervision from a fixed point.

      This implies mobile supervision, with an adult moving through the space. Ideally, two adults would be present: one providing overall supervision (taking in the whole yard at a glance) and another leading activities or interacting more directly.

      The Adult's Active Role: The adult can take on a more active and less intrusive role than that of a mere "supervisor".

      The example is given of a teacher raking leaves: she is present and observing, but takes part in the life of the yard without adopting a fixed posture of control.

      3. The Cours Oasis: An Environmental and Pedagogical Approach

      The "Cours Oasis" program, launched in Paris and taken up under other names elsewhere in France ("cours buissonnières" in Bordeaux), embodies this new vision of the schoolyard.

      Origins and Objectives: Born of a drive to combat climate change by creating urban "cool islands", the project quickly incorporated the central issue of children's well-being. It aims to de-seal the ground and bring back vegetation and biodiversity.

      Participatory Process: The physical transformation of the space is accompanied by a process of awareness-raising and co-design with the whole school community, to ensure that the new uses are adopted by everyone.

      Practical Challenges (Mud and Maintenance): Removing the asphalt raises the question of mud and cleanliness.

      Wood Chips: One effective solution is wood chips, which cover the soil, prevent mud, cushion falls, and enrich the ground. They allow running and playing.

      Maintenance as Pedagogy: Managing the "mess" (wood chips, sand) becomes a pedagogical routine: a "wood-chip dance" before going back inside, use of doormats, and children's participation in tidying the yard, just like any other game.

      4. International Perspectives: A Diversity of Approaches

      Looking at schoolyards in other countries reveals a wide diversity of cultures and practices, often closer to nature.

      | Country | Main Characteristics |
      | --- | --- |
      | Nordic countries | Very nature-based practice, spaces oriented toward nature, well-equipped children. |
      | Spain | Natural surfaces, little vegetation but a lot of sand (up to 90% of the surface). |
      | Switzerland | Yards open on weekends, functioning as public parks for families. |
      | Germany | Very natural "kindergartens" with manipulable elements (stones, mud, sand, pebbles). |
      | Japan | Very natural spaces with a lot of sand and a strong relationship with water. |
      | United States | Highly variable scales, with some schools having yards the size of forests. |

      5. Transformation as a Collective Project

      Redesigning the playground cannot be an individual decision.

      It must be a team project, a lever for energizing the whole school.

      A Starting Point: Annie Sbir states that if she were a head teacher, she would start with the playground in order to create team momentum.

      The process of observation, diagnosis (with objective tools), and shared reflection is as important as the final result.

      Involving All Stakeholders: The project must bring together teachers, after-school staff, maintenance staff, parents, and above all the pupils. Involving class delegates and eco-delegates is a relevant avenue.

      A Yard of Democracy: In conclusion, a well-designed yard, rich in possibilities, is a "yard of democracy".

      Doing nothing means "contributing to the perpetuation of something we do not necessarily endorse", that is, a society in which the weak withdraw and stereotypes persist.

      6. Resources and Tools Mentioned

      Several resources were cited during the discussion:

      Books:

      La cour d'école, un enjeu pour le bien-être des élèves (published by Canopé).

      Qui veut jouer au ? by Myriam Gallot (on pupils' use of the playground).

      Faire jeu égal by Edith Maruejouls (a geographer working on school playgrounds).

      Film:

      ◦ A documentary film by Claire Simon (released in 1998) on interactions in preschool playgrounds.

      Sociological Tools:

      ◦ Moreno's sociogram, a tool for observing social interactions among pupils.

      Funding and Support Programs:

      CAUE (Conseil d'Architecture, d'Urbanisme et de l'Environnement): Present in every département; can support projects.

      CNR (Conseil National de la Refondation): The "Notre école, faisons-la ensemble" program.

      EduRenov: A schoolyard renovation program run by the Banque des Territoires.

      Atelier Canopé Paris: Works on the design of school spaces.

    1. Reviewer #2 (Public review):

      Summary:

      In this study, Blanco-Ameijeiras and colleagues present the use of stem cells to create human spinal cord organoids that recapitulate anterior-posterior identity, with a large focus on posterior fates. In particular, the authors show robust transcriptional landscape specification that reflects aspects of anterior-posterior spinal cord development.

      Recapitulation of spinal cord development is essential to understand the fundamentals of developmental defects in a systematic manner. This work provides a broad approach to test certain aspects of neural tube morphogenesis, particularly posterior and dorsal identities. The shorter protocol is an interesting upgrade over current standards, and the mechanical interpretation provides good proof-of-concept work that aligns with the need to better understand neural tube mechanobiology.

      Strengths:

      The manuscript addresses a major gap by focusing on posterior spinal cord identity and secondary neurulation, a phase that is less well captured by existing neural tube organoid models (although some do recapitulate that). The manuscript situates the approach within vertebrate development and human embryology.

      Morphometric quantifications are well described and provide a dynamic, cell-level interpretation, and that is a true strength of the work. This is important for developing metrics that can later be used to compare modulations and pathway disruptions.

      The protocols are well described and documented.

      Weaknesses:

      Some key data lack proper quantification to robustly support the claims. For example, it is not clear how many organoids in total are counted in Figure 1D to derive the % of organoids expressing certain markers (e.g. SOX2 or BRA).

      Some claims are overstated. In the manuscript, the organoids show primarily dorsal and posterior identities under the current conditions, yet the discussion sometimes reads as if a more complete dorsoventral recapitulation is achieved. The authors should therefore either demonstrate ventral patterning (e.g., SHH/FOXA2) or temper the claims about spinal cord identity, which, given the results, is specific to a particular region.

      The mention of anterior organoids seems to distract the reader from the important work, which primarily focuses on posterior identity. Further, it is unclear why SOX2 expression is reduced by Day 7 in Figure 1D. Since SOX2 is treated in the manuscript as a neural marker (although it also marks pluripotency, along with NANOG, etc.), a further explanation should be provided. The authors should also test for the presence of PAX6, which is one of the earliest neuroectoderm markers in humans (Zhang X. et al., Cell Stem Cell 2010).

      The authors position the work as a substantial addition to the field. The work is very much welcomed; however, some claims suggest a degree of novelty beyond the work presented here. For example, in certain instances in the intro, the manuscript conveys that this work is the first recapitulation of anterior or posterior spinal cord fates, while other works (Rifes P. et al., Nature Cell Biology 2020; Xue X. et al., Nature 2024) recapitulate dorsoventral and anterior-posterior patterning and identity (albeit not of secondary neurulation) through controlled gradients of WNT and RA activity. To clearly position the importance of this work, the intro should focus on secondary neurulation and posterior identities.

      In a similar fashion, the claim that "Importantly though, to our knowledge these are the first neural organoids exhibiting a robust spinal cord transcriptome identity" is difficult to reconcile with the exhaustive single-cell profiling of other neural tube organoid systems, including those with spinal cord identities (Rifes P., Xue X., Abdel Fattah A.). Further explanation is therefore needed.

      The mechanical angle is important and adds to the large body of research that traces NT morphogenesis to mechanics. However, the YAP localization images can be much improved. Lower magnification images are needed to show the entire organoid to robustly convince the reader of the correct and varying localization of the YAP protein. The authors should also check for YAP-associated genes in their bulk RNA sequencing.

      The quantification of the YAP analysis in a total of 23 and 18 cells across the two conditions, in 7 organoids, is by no means enough to draw a conclusion about YAP localization, and an increase in the number of cells is needed. Moreover, the use of dasatinib as a YAP inhibitor is welcome, but no evidence is shown that the inhibitor actually inhibits YAP in this culture system. As such, IF images are required to confirm cytosolic YAP. Additionally, the authors could try other inhibitors (such as verteporfin), since most such inhibitors have broad specificity.

      Given the mechanically oriented conclusions, other relevant works have shown posteriorized and ventralized neural tube organoids generated using RA and SHH activation that were also mechanically stimulated via actuation, such as work from the Ranga lab (Nature Communications 2021/2023). Although not strictly related to YAP, the molecular profiling, mechanical stimulation, lumen measurements, and NTD-like phenotypes obtained with PCP-mutant genes reported therein make these important and relevant references, since the current work adds the YAP analysis to this picture.

    1. Reviewer #1 (Public review):

      Summary

      In this study, the authors have performed tissue-specific ribosome pulldown to identify gene expression (translatome) differences in the anterior vs posterior cells of the C. elegans intestine. They have performed this analysis in fed and fasted states of the animal. The data generated will be very useful to the C. elegans community, and the role of pyruvate shown in this study will result in interesting follow-up investigations.

      However, several strong claims made in the study are solely based on in silico predictions and are not supported by experimental evidence.

      Strengths:

      Several studies in the past have predicted different functions of the anterior (INT1) vs posterior (INT2-9) epithelial cells of the C. elegans intestine based on their anatomy and ultrastructure, but detailed characterization of differences in gene expression between these cell types (and whether indeed these are different 'cell types') was lacking prior to this study. The genes and drivers identified to be exclusively expressed in the anterior vs posterior segments of the intestine will be very helpful to selectively modulate different parts of the C. elegans intestine in future studies.

      Another strength of this study is the careful experimental design to test how the anterior vs posterior cell types of the intestine respond differently to food deprivation and recovery after return to food. These comparisons between 'states' of a cell in different physiological conditions are difficult to pick up in single-cell analyses due to low sequencing depth, which can fail to identify subtle modulation of gene expression.

      The TRAP-associated bulk RNA-seq approach used in this study is more suitable for such comparisons and provides additional information on post-transcriptional regulation during metabolic stress.

      A key finding of this study is that pyruvate levels modulate the translation state of anterior intestinal cells during fasting. Characterization of pyruvate metabolism genes, especially of the enzymes involved in its mitochondrial breakdown, provides novel insights into how gut epithelial cells respond to the acute absence of food.

      Weaknesses:

      Unlike previous TRAP-seq studies (PMID: 30580965, 36044259, 36977417) that reported sequencing data for both input and IP samples, this study only reports the sequencing data for IP samples. Since biochemical pulldowns are variable across replicates, it is difficult to know if the observed differences between different conditions are due to biological factors or differences in IP efficiency. More importantly, since two different TRAP lines were utilized in this study and a large proportion of the results focus on the differences between the translational profiles of INT1 vs INT2-9 cells, it is essential to know if the IP worked with similar efficiency for both TRAP strains that likely have different expression levels of the HA-tagged ribosomal protein. One way to estimate this would be to perform qRT-PCR of genes that are known to be enriched in all intestinal cells and determine whether their fold-enrichment over housekeeping genes (normalized to input) is similar in INT1 vs INT2-9 TRAP strains and across the fed vs fasted conditions. The authors, in fact, mention variability across biological replicates, due to which certain replicates were excluded from their WGCNA analysis.

      It appears that GFP expression is also detectable in INT2 (in addition to strong expression in INT1 in Fig.1A). Compared to INT3-9, which looks red, INT2 cells appear yellow, suggesting that the expression patterns of the two TRAP drivers are not mutually exclusive, which changes the interpretation of many of the results described in the study.

      Some parts of the study overemphasize the differences between the INT1 vs INT2-9 cell types, which is a biased representation of the results. For example, the authors specifically point out that 270 genes are differentially expressed in opposite directions in INT1 vs INT2-9 cell types during acute (30 min) fasting without mentioning the 1,268 genes that are differentially expressed in the same direction. They also do not mention here that 96% of the genes are differentially expressed in the same direction in INT1 and INT2-9 cell types after prolonged (180 min) fasting, suggesting that the divergent translational responses of these cell types are only observed in the first 30 minutes of food deprivation. Similar results have also been reported for the effect of fasting on locomotory and feeding behaviors, where 30 min of fasting produces more variable effects, which become more consistent after longer periods of fasting (PMID: 36083280). Hence, the effects of brief food deprivation should be interpreted with caution.

      Many of the interpretations of this study primarily rely on pathway enrichment analyses, which are based on the known function of genes. The function of uncharacterized genes that were found to be differentially expressed in INT1 vs INT2-9 cell types, e.g., the ShKT proteins, was not explored in this study. In addition, overreliance on pathway enrichment tools (instead of functional validation) has resulted in several conflicting findings. For example, one of the main messages of this study is that INT1 cells specialize in immune and stress response in response to fasting, which relies on pathway analysis in Figs 5E and 5F. However, pathway analysis at a different time point (shown in Figure S5A) indicates that INT2-9 cells show a much stronger increase in translation of stress and pathogen-responsive genes compared to INT1 cells. Hence, some of the results should be interpreted as different translational effects in INT1 vs INT2-9 cells after different lengths of food deprivation, without making broad claims about selective pathways being affected only in specific cell types.

      The authors have compared their TRAP-seq results with genes enriched in the anterior and posterior intestine clusters from a previously published whole-animal adult scRNA dataset (PMID: 37352352). They claim that their TRAP-seq results are in agreement with the findings of the scRNA study. However, among the 10 genes from the 'posterior intestine' scRNA cluster in Fig.S1E, six are downregulated in the INT1 vs INT2-9 comparison, while four are upregulated. Hence, there is no clear agreement between the two studies in terms of the top enriched genes in the anterior vs posterior intestine, which should be considered for cross-study comparisons in the future.

      The authors describe in the manuscript that they have performed INT1-specific RNAi for two C-type lectin genes that are upregulated during fasting. Due to a recent expansion of C-type lectin genes in C. elegans, there is a high chance of off-target effects of RNAi that is designed for members of this gene family. More trustworthy results could have been obtained using CRISPR-based loss-of-function alleles for these genes, one of which is publicly available. Also, the authors do not provide any explanation for why knockdown of these stress-response genes, which are activated in INT1 cells in response to food deprivation, results in improved resistance to pathogens. This, in fact, suggests a role of INT1 cells in increasing pathogen susceptibility, and not pathogen resistance, during food deprivation.

      Many of the studies in this field (e.g., references 2-4 in this article) have investigated the effects of food deprivation ranging from 4 hr to 24 hr, which results in activation of starvation responses in C. elegans. In contrast, the authors have used shorter time periods of fasting (30 min and 180 min), and most of their follow-up experiments have used 30 min of food deprivation. Previous work has shown that the effects of food deprivation can either accumulate over time (i.e., the effect gets stronger with longer food deprivation) or can be transient (i.e., only observed briefly after removal of food and not observed during long-term food deprivation). Starvation-induced transcription factors such as DAF-16/FoxO and HLH-30 show strong translocation to the nucleus only after 30 min of fasting. Though gene expression changes in all stages of food deprivation are of biological relevance, the authors have missed the opportunity to explore whether increased INS-7 secretion from the anterior intestine is dependent on these starvation-induced transcription factors (which can be easily tested using loss-of-function alleles) or is due to other fast-acting regulatory mechanisms induced due to the absence of food contents in the gut lumen. A previous study (PMID: 40991693) has shown that DAF-16 activation during prolonged starvation shuts down insulin peptide secretion from the intestinal epithelial cells. Hence, it is not clear if increased INS-7 secretion is only a feature of short-term food deprivation or is also a signature of long-term starvation (e.g., at 8 hr or 16 hr timepoints). Since most of the INS-7 secretion data in this study are for 30 min of fasting, it remains unknown whether the discovered regulators of INS-7 secretion can be generalized for extended food deprivation that triggers major metabolic changes, such as fat loss (e.g., conditions shown in Figure 1D).

      Two previous studies (PMID: 18025456, 40991693) have shown a strong reduction in the expression of ins-7 in the anterior intestine using GFP-based reporters (both promoter fusions and endogenous CRISPR-generated) and in whole-animal RNA-seq data from starved animals. These results are in contrast to the increased INS-7 secretion from INT1 cells during fasting that is reported in this study. The authors here have reported that INS-7 translation is higher in INT1 compared to INT2-9 during fed, acute fasted, and chronic fasted conditions, but they have not shown whether INS-7 translation is upregulated during acute and chronic fasting in INT1 cells in their TRAP-seq analysis. Knowing whether increased INS-7 secretion during acute fasting is due to increased transcription, translation, or secretion of INS-7 is crucial to resolve the discrepancy between these studies.

    2. Reviewer #2 (Public review):

      Summary:

      In this study, the authors set out to understand whether the discrete segments of the C. elegans intestine are specialized to carry out distinct functions during an animal's exposure and adaptation to a fast-changing nutrient environment. To achieve this, the authors used a method called Translating Ribosome Affinity Purification (TRAP), which provides a snapshot of which genes are being translated into proteins (and therefore functionally prioritized by the animal) under different fasting and re-feeding conditions. By expressing the TRAP constructs in two distinct segments of the intestine (INT1 and INT2-9), the authors were able to identify how these segments responded to changing nutrient availability.

      Already under steady state nutrient conditions, the authors found that INT1 and INT2-9 appeared to have different 'tasks', with INT1 expressing more immune- and stress-response related genes. Exposing animals to different regimens of starvation and refeeding also showed marked differences between the intestinal segments, and the gene expression patterns in INT1 were consistent with INT1 cells playing an integrative role in linking nutrient cues to the secretion of insulin molecules that regulate fat metabolism with food intake. In summary, the data presented catalogue, for the first time, gene expression differences between two areas of the intestine, suspected to play different roles, and through clever experiments, links these gene expression changes to responses to nutrient availability.

      Strengths:

      The data presented catalogue - for the first time and in a careful manner - gene expression differences between two areas of the intestine. They strongly support the presence of intriguing differences between two areas of the intestine in immune, metabolic, and stress-response regulation, and link these gene expression changes to the responses of these regions to nutrient availability.

      Weaknesses:

      The conclusions of this paper are mostly well-supported by data, but the relevance of the changing gene expression patterns could be better clarified and extended in the discussion.

    3. Reviewer #3 (Public review):

      Summary:

      In this study, Liu and colleagues utilize TRAP-seq to profile the repertoire of actively translated mRNAs in different intestinal cell types (anterior INT1 vs. posterior INT2-9 cells) in C. elegans. A key goal of this study was to identify transcripts differentially expressed/translated between these intestinal cell subtypes in the context of animals being well fed or subjected to acute (30 minutes) or chronic (3 hours) starvation, followed by refeeding.

      The authors identify a number of differentially expressed genes across all of the conditions tested. They then provide an initial survey of the landscape of translatome changes through Weighted Gene Co-expression Network Analysis (WGCNA), and some high-level functional surveys via Gene Ontology (GO) term analysis and protein domain analysis. The authors validate the enriched expression patterns of some of their identified candidate genes using fluorescent promoter fusion reporters, confirming INT1-specific expression. The authors further implicate the role of several other candidate genes in pathogen avoidance and in response to nutritional cues by knocking them down specifically in INT1 cells by RNAi. Finally, the authors identify pyruvate as a major nutrient signal coming from the bacterial diet that suppresses the release of a key insulin peptide (INS-7), and identify some of the genes expressed in INT1 that are required for this response.

      Strengths:

      (1) Good use of and justification for TRAP-seq, because scRNA-seq would be difficult under the varied conditions used (starvation, refeeding).

      (2) The manuscript is generally clear to read, and the data are generally well-presented with good supporting data that includes replicates, sample sizes, error measurements, and associated statistics.

      (3) The dataset will be an interesting resource to mine for future studies focusing on mechanisms of how particular intestinal cell types respond to different environmental signals.

      Weaknesses:

      (1) A limitation of TRAP-seq, although powerful, is that only relative comparisons can be made between genotypes/conditions to identify differentially-expressed genes, rather than assessing whether a given gene is expressed at a certain level in a cell type under a certain condition. This limitation is due to the non-specific association of sticky RNA species with the beads during the immunoprecipitation step. This is a minor point, however, and the authors do a nice job of focusing their analysis on differentially expressed transcripts in the current study.

      (2) Another limitation of the current study is that the experiments testing the role of candidate genes identified by the profiling do not delve deeper into providing a mechanistic understanding of the phenotypes being studied. At present, the results thus read more as a genomics-based screen with limited follow-up on interesting hits. However, this reviewer appreciates that, in the context of the work presented, a presentation of the profiling data along with some validation is an excellent starting point for future mechanistic studies elaborating on these interesting candidates.

      Appraisal of whether the authors achieved their aims, and whether the results support their conclusions:

      The main goal of the study was to survey the dynamic responses at the level of actively translated mRNAs of the INT1 vs INT2-9 cells in response to metabolic challenge.

      Overall, the authors use established methods to perform their genome-wide analysis, and the set of differentially regulated genes is enriched for expected molecular functions and forms coherent networks in anticipated pathways.

      The validation experiments (promoter::GFP fusion reporters, INT1-specific knockdowns of highly regulated genes) further corroborate the quality of the TRAP-seq datasets generated.

      I have a few points for the authors that would further strengthen this work:

      (1) The authors rightfully focus on the top differentially-regulated candidates, but it is unclear at present how far down their fold-change list expression patterns would still validate. It would be useful to test a few more promoter::GFP fusion reporters at different enrichment/fold-change/statistical cutoffs.

      (2) Although the INT1-specific RNAi provides a convenient strategy for rapidly perturbing and testing genes of interest for phenotypes, it would be valuable to independently validate the knockdowns with genetic mutants or, if the genes are essential, with degron alleles.

      Impact:

      The TRAP-seq data and list of differentially-expressed candidate genes will form an interesting set of high-priority candidates to study for their role in the reception and transduction of nutritional cues in response to food status and pathogens. This data will thus benefit the C. elegans community of researchers studying the mechanisms governing these phenomena.

    1. Reviewer #2 (Public review):

      Summary:

      The aim of the study by Hall et al. was to establish a generic method for the production of Snake Venom Metalloproteases (SVMPs). These have been difficult to purify in the mg quantities required for mechanistic, biochemical, and structural studies.

      Strengths:

      The authors have successfully applied the MultiBac system and describe with a high level of detail the downstream purification methods applied to purify the SVMP PI, PII, and PIII. The paper carefully presents the non-successful approaches taken (such as expression of mature proteins, the use of protease inhibitors, prodomain segments, and co-expression of disulfide-isomerases) before establishing the construct and expression conditions required. The authors finally convincingly describe various activity assays to demonstrate the activity of the purified enzymes in a variety of established SVMP assays.

      Weaknesses:

      The manuscript suffers from a lack of thoroughness and stringency in the methodology and in the characterization of the generated enzymes.

      As an example, further characterization of the generated protein fragments in Figure 3 by intact mass spectrometry would have aided accurate mass determination, rather than relying on SEC elution volumes against a standard; protein shape and charge can affect migration in SEC. Also, the analysis of N-linked glycosylation demonstrates some reactivity of PIII to PNGase F, but fails to conclude whether one or more sites are occupied, or whether other types of glycosylation are present. Again, intact mass experiments would have resolved such issues.

      The activity assays in Figure 4 are not performed consistently: kinetic and degradation assays are performed for some, but not all, enzymes, and there is no Echis ocellatus comparison in Figure 4h. Overall, whilst not affecting the main conclusion, this leaves the reader with an impression of preliminary data being presented. For consistency, applying the same assays to all enzymes (highly purified) would have given the reader a fuller picture.

      Overall, the data presented demonstrate a very credible path for the production of active SVMPs for further downstream characterization. The generality of the approach to all SVMPs from different snakes remains to be demonstrated by the community, but if generally applicable, the method will enable numerous studies aiming either to utilize SVMPs as therapeutic agents or to generate specific anti-venom reagents, such as antibodies or small molecule inhibitors.

    2. Reviewer #3 (Public review):

      Summary:

      The presented study describes the long journey toward the expression of members of the SVMP toxin family from snake venom, toxins of major importance in a snakebite scenario. As their functional analysis has in the past relied on challenging isolation, heterologous expression of these toxins offers a potential solution to some major obstacles hindering a better understanding of toxin pathophysiology. Through a series of laborious and elegantly crafted experiments, including the reporting of various failed attempts, the authors establish the expression of all three SVMP subtypes and prove their activity in bioassays. Expression is carried out as naturally occurring zymogens that autocleave upon exposure to zinc, which is a novel modus operandi for yielding fusion proteins and also sheds new light on the potential mechanism that snakes use to activate enzymatic toxins from zymogenic preforms.

      Strengths:

      The manuscript draws from an extensive portfolio of well-reasoned and hypothesis-driven experiments that lead to a stepwise solution. The wealth of data generated is outstanding, although not all experiments along this rocky road to victory were successful. A major strength of the paper is that, translationally speaking, it opens up novel routes for biodiscovery, since a first reliable platform for expression of an understudied yet potent toxin class is established. The discovered strategy of pursuing expression as zymogens could see broad application in venom biotechnology, where several toxin types await successful expression. The work further provides better insight into how snake toxins are processed.

      Weaknesses:

      The manuscript contains several chapters reporting failed experiments, which makes it difficult to follow in places. The reporting of experimental details, especially sample sizes and replicates, could be optimised. At the time of writing, it remains unclear whether the glycosylations detected on a PIII SVMP could have an impact on the bioactivities measured, which is a major aspect, and future follow-ups should clarify this. Finally, the work, albeit of critical importance, would benefit from a more down-to-earth evaluation of its findings, as various persistent obstacles still need to be overcome.

      Major comments to the manuscript:

      (1) Lines 148-149: "indicating that expressing inactivated SVMPs could be a viable, although inefficient, approach". I think this text serves a good purpose to express some thoughts on how the current draft is set up. It is quite established that various proteases cause extreme viability losses to their expression host (whether due to toxicity, but surely also because of metabolic burden), which is why their expression as inactive fusion proteins is the default strategy in all cases I have thus far seen. I believe that this is especially important in venom studies given the increased toxicity often targeting cellular integrity, and especially here, because Echis are known to feed on arthropods at younger life history stages, making it very likely that some venom components are especially active against insects and other invertebrates. With that in mind, I would argue that exploring their production in inactive form is the obvious strategy one would come up with, and not really the conclusion of a series of (well-conducted and scientifically sound!) experiments. For me, the insight of inactive expression is largely confirmatory of what is established, unless I miss something in the authors' rationale. If so, it would be important to clarify this in the online version.

      (2) Line 173: Here, AlphaFold 3 was used, whereas in previous sections (e.g., line 153, line 210), it was AlphaFold 2. I suggest using one release across the manuscript.

      (3) Lines 252-254: I fully agree, the PIII SVMP is glycosylated. Glycosylation is an important mediator of snake venom activity, and several works in the field have described its importance. This raises the question of which glycosylations have been introduced into the SVMP here, and whether they correspond to those found in snakes. This is important because insects carry out thousands of N- and O-glycosylations to modulate the activity of their proteome, many of which are insect-specific. If some of these were integrated into the SVMP, this could have an impact on the downstream bioassays and also on antigenicity (the surface would be somewhat different from natural toxins, causing different selection).

      (4) General comment for the bioassays: It would be good to specify the replicates again and report the data, including standard deviations.

      Discussion:

      I think the data generated in the study is very valuable and will be instrumental in pushing the frontiers of SVMP research, but still I would like to see a bit of modesty in the discussion. As I have pointed out above, it is unclear what effect the glycosylations may have (i.e., are the glycosylations found reminiscent of natural ones?), despite their functional importance. Also, yes, isolation of SVMPs is challenging, but the reality is that their expression is equally challenging, as evidenced by the heaps of presented negative data (with which I have no problem; I think reporting such is actually important). So far, the "generic" protocol has been used to express one member per structural class of Echis SVMP, but no evidence is provided that it would work equally well on other members from taxonomically more distant snakes (e.g., the PIII known from Naja oxiana). It is very likely, but at the time of writing, purely speculative. Lastly, the reality is also that expression in insect cells can only be carried out by highly specialized labs (even in the expression world, as most laboratories work with bacterial or fungal hosts), whereas isolation can be attempted in most venom labs. That said, production in insect cells also has economic repercussions, as it will be very challenging to generate yields that are economically viable versus other systems, which is pivotal because the authors talk about bioprospecting and the use of these toxins in snakebite therapeutics research. Again, I believe the paper is highly important and excellently crafted, but I think the discussion in particular should see some refinement to address the drawbacks and to evaluate the paper's findings with more modesty.

      Children's Rights: Transforming Schools from Within

      Executive Summary

      This briefing analyzes strategies for, and the impact of, embedding children's rights at the heart of how schools operate.

      Based on testimony from experts and practitioners, the analysis shows that although France ratified the International Convention on the Rights of the Child (CIDE) in 1990, its application remains uneven, particularly with respect to students' rights to expression and participation.

      The recommended approach goes beyond simply teaching rights in theory: it embodies them in adults' posture, in interpersonal relationships, and in the very organization of the school.

      UNICEF's "École amie des droits de l'enfant" (Child Rights Friendly School) program serves as the central model, illustrating an approach that aims for deep, lasting cultural change.

      The method relies on a participatory diagnosis, the involvement of the whole educational community (teachers, students, staff, parents), and concrete tools such as the "exploratory walk," which assesses the school environment from the child's point of view.

      The identified benefits are significant: a marked improvement in school climate, stronger respect for oneself and others, and the early development of citizenship skills.

      Data from international experience shows increases in students' sense of safety, in how much they feel listened to, and in their ability to influence decisions that affect them.

      Implementation nevertheless faces major challenges, such as the prevalence of "adultism" (adults' tendency to decide in children's place) and the perception of an added workload for teachers.

      The key to success lies in a long-term commitment: treating these programs not as one-off initiatives but as a fundamental investment in forming active, responsible citizens.

      Children's Rights in the French Education System: The Current State

      The International Convention on the Rights of the Child (CIDE): An Under-Applied Legal Framework

      The CIDE, adopted by the United Nations in 1989 and ratified by France in 1990, is the legal foundation of children's rights.

      Its 54 articles protect individuals from 0 to 18 years of age and cover all of their fundamental rights.

      However, according to Valérie Becket, professor of education sciences, the convention's application in France is "uneven across domains."

      France is not considered a "model student," particularly on issues of expression and participation.

      Comparative European surveys show that, despite mechanisms such as school councils and children's councils, a gap persists between the rights granted and what children actually experience, sometimes placing France at the bottom of the rankings.

      Julie Zarlot, of UNICEF France, notes that while France is exemplary in some areas, such as the general right to health or education, gaps remain for certain children who lack sufficient access to school, healthcare, or protection.

      How Rights Are Perceived at School

      The school environment carries tensions inherent to applying children's rights. Richard Côtier, a school principal, observes that "organization takes precedence over respect for each person."

      The focus on learning objectives can sometimes obscure the need to guarantee students' fundamental rights.

      The rights/duties balance: A frequent reaction from adults (teachers, parents) when children's rights are raised is to ask about duties.

      The answer given is that one person's right implies another's duty to respect it. "The duty is the duty to respect everyone's rights, including one's own and those of others."

      Perception gap: Diagnoses conducted ahead of projects often reveal a gap between how students, who experience the school from the inside, perceive it, and how their parents, who are on the outside, do.

      This difference justifies collecting the views of all stakeholders.

      The "École amie des droits de l'enfant" Program: A Transformative Approach

      Philosophy and Pedagogical Approach

      UNICEF's program is presented as a positive-prevention approach.

      Rather than fighting problems (such as bullying) "through the negative," it aims to "motivate everyone to ensure that everyone's rights are respected."

      UNICEF's pedagogical approach, described as a "rights-based approach," rests on three pillars:

      1. Learning about rights: acquiring knowledge of the CIDE.

      2. Learning through rights: experiencing rights in daily practice, via the teacher's posture and the way the school operates.

      3. Learning for rights: becoming able to defend one's own rights and those of others.

      The goal is a "change in behavior" and a "strengthening of capacities" for adults and children alike. It is not simply about transmitting knowledge, but about a deep transformation of how the school functions.

      Concrete Implementation at the L. Martine School

      The school led by Richard Côtier, enrolled in the program for a year and a half, illustrates this implementation.

      Steering committee (Copil): A committee was created to steer the project, renamed the "Conseil de vie citoyenne" to prepare students for middle school.

      Its distinctive feature is that it includes a wide range of actors: students, teachers, AVS, maintenance staff, and after-school activity leaders.

      This forum makes it possible to "think together, with our different perspectives, about how the school functions from the standpoint of rights."

      The "exploratory walk": This concrete tool consists of walking through the school while asking specific questions through the lens of one right (e.g., safety).

      Students and adults observe and analyze specific places to determine whether they feel safe there, whether adults are perceived as a potential source of help, and so on.

      This process grounds the initial diagnosis in the child's own perception and lived experience.

      A modest, step-by-step approach: The first phase consisted of providing training on children's rights in every class and setting up the participatory structures.

      Annual objectives are deliberately kept modest to ensure they are actually achieved and to maintain trust in the process.

      Time as a Condition of Success

      Richard Côtier insists that transforming a school culture is a long process.

      He sees the three-year duration of the UNICEF program as "just the injection, just the vaccine." In his view, it may take "another 5 or 10 years" before a school can claim to have durably integrated this culture.

      The aim is to create a lasting dynamic in which the educational community observes a deep, irreversible change in how it operates.

      Impact, Challenges, and Scaling Up

      Measurable Impact on School Climate and Students

      The United Kingdom, where the program has run for over ten years in 4,500 schools, provides quantitative data on its impact.

      | Impact indicator (UK evaluation) | Key figures |
      | --- | --- |
      | Improved respect for oneself and others | 93% of children |
      | Increased sense of safety at school | +5% |
      | Increased sense of being listened to at school | +5% |
      | Increased ability to influence decisions | +14% of children |
      | Increased sense of being respected by adults/peers | +11% |
      | Better knowledge of their rights | +37% of children |

      Beyond the numbers, the qualitative impact is the formation of future citizens who are not "relatively passive," but who have experienced that their voice can affect the world around them and that change requires collective engagement.

      Overcoming Barriers and Obstacles

      "Adultism" and the fear of contestation: Valérie Becket identifies a major barrier in adults' tendency to see the "risks" of student participation (disorder, contestation) rather than its long-term "benefits."

      This posture of deciding "in the child's place" can deprive children of experiences necessary to their development.

      Teachers' workload: The fear that these programs add an extra "layer" of work is a frequent objection. Julie Zarlot replies that UNICEF's teaching materials are designed to tie directly into the school curriculum, allowing teachers to "draw on" different subjects to illustrate rights "almost without seeming to."

      Beyond Primary School: Middle and High School

      Valérie Becket notes that secondary education already has many participatory structures (Conseil de la Vie Collégienne, Conseil de la Vie Lycéenne, class delegates). Their existence, however, guarantees neither better application of rights nor better listening to students.

      She suggests that tools like the "exploratory walk" would be highly relevant for adolescents as a way to analyze their experience of the school.

      Above all, she insists on the need to build "bridges" between primary and secondary school to ensure continuity. Without them, a student used to participating and being listened to risks a shock ("patatra") on arriving in an environment where they "can no longer say anything."

      Key Quotes

      Valérie Becket, on the most important right to work on at school: "The right to have a point of view."

      Richard Côtier, on the need for long-term commitment: "The UNICEF program, for example, is planned over 3 years, and I think 3 years is just the injection, just the vaccine. That is not the time it will take to build a system that has truly taken this phenomenon on board."

      Julie Zarlot, on embedding rights in daily life: "We can talk about children's rights and make them everyday and effective, almost without seeming to."

      Richard Côtier, on the risk of not fostering participation: "The risk of raising students who do not participate [...] means we produce children who are relatively passive, who let others take the initiative because, in the end, no one asks their opinion."

    1. But one 4Chan user found 4chan to be too authoritarian and restrictive and set out to create a new “free-speech-friendly” image-sharing bulletin board, which he called 8chan.

      What interests me about these spaces is the idea that, as users become more desensitized to this type of content, there is always a need to "up the ante": a desire to keep pushing the edge until it can no longer stay contained within the bubble of these websites.

    2. 5.5.3. 8Chan (now 8Kun)# 8Chan (now called 8Kun) is an image-sharing bulletin board site that was started in 2013. It has been host to white-supremacist, neo-nazi and other hate content. 8Chan has had trouble finding companies to host its servers and internet registration due to the presence of child sexual abuse material (CSAM), and for being the place where various mass shooters spread their hateful manifestos. 8Chan is also the source and home of the false conspiracy theory QAnon

      Although it wasn't explicitly explained, the fact that 4chan was probably created because some 15-year-old kid wanted fewer regulations, when the platform already had a forum for "Anime Death Tentacle Rape Whorehouse," is already quite disturbing. But the fact that someone found 4chan to be TOO restrictive and made something with even LESS regulation is truly disappointing. It gives people too much room to feel comfortable saying dangerous things (such as ideology from mass shooters) and to be encouraged/enabled for it.

      Briefing: Understanding and Supporting "Dys" Disorders and ADHD at School

      Executive Summary

      This briefing analyzes neurodevelopmental disorders, specifically Developmental Coordination Disorder (DCD, or dyspraxia), the dyscalculias, and Attention Deficit Hyperactivity Disorder (ADHD), and presents support strategies for the school setting.

      The critical points are as follows:

      1. Nature of the disorders: These disorders are neither laziness nor a lack of intelligence, but neurodevelopmental conditions that affect how the brain processes information, automates skills, and regulates behavior.

      2. Global impact: The impact of these disorders extends far beyond academics.

      They affect the child's daily, social, and family life, generating fatigue, anxiety, and fragile self-esteem from a very young age.

      3. Dyspraxia (DCD): the cost of dual-tasking: Dyspraxia is a disorder of gesture automation.

      Every action, especially handwriting, requires intense and costly attentional control, placing the child in a permanent dual-task situation.

      Dysgraphia is a direct and disabling consequence.

      4. Dyscalculias: a plural disorder: There is not one dyscalculia but several (spatial, linguistic, dysexecutive, etc.), each tied to distinct cognitive mechanisms.

      The fundamental link between number representation and space is a major key to understanding them. A precise diagnosis is essential for targeted remediation.

      5. ADHD: a regulation disorder: ADHD is not an attention deficit but a disorder of the regulation of attention, behavior, and emotions.

      It is underpinned by difficulties with executive functions (inhibition, flexibility, working memory).

      6. Strategies and pedagogical posture: Effective support rests on classroom accommodations that work around the difficulty (favoring oral work, providing adapted materials, using digital tools) and on a caring posture.

      The teacher's role is that of an expert observer of how the disability manifests, whose goal is to value effort, reinforce positive behaviors, and protect the student's self-esteem at all costs.

      --------------------------------------------------------------------------------

      1. Developmental Coordination Disorder (DCD) or Dyspraxia

      Presented by Emmanuel Ploie-Maës, a clinical psychologist specializing in neuropsychology, DCD, or dyspraxia, is a motor disorder that profoundly affects a child's path.

      1.1. Definition and Cognitive Mechanisms

      A gesture is defined as an "intentional set of movements coordinated in time and space in order to carry out a purposeful action."

      In a neurotypical individual, the planning and motor programming of a gesture are non-conscious, automated processes requiring few cognitive resources.

      In a child with dyspraxia, this automation does not occur. DCD is a "specific disorder of the programming and execution of complex gestures."

      As a result:

      The gesture remains under attentional control: every action, even a simple one, is laborious and tiring.

      The child is in a permanent dual-task situation: a considerable share of cognitive resources must go to performing the gesture itself, leaving very little available for higher-level tasks.

      Example: A third-grade (CE2) student with automated handwriting uses few resources to form letters and can concentrate on spelling.

      A student with dyspraxia uses most of their resources to form the letters, leading to spelling errors caused not by ignorance but by a lack of available attentional resources.

      A study conducted at Robert Debré hospital identified two broad types of dyspraxia:

      Dyspraxia with purely gestural difficulties.

      Mixed dyspraxia (gestural and visuospatial), which combines gesture difficulties with visuospatial processing difficulties.

      1.2. Manifestations and Impact

      DCD has severe consequences across the child's entire development, because "a child who struggles to develop their gestures struggles in their life all the time, from the moment they set foot on the floor until they fall asleep at night."

      | Area of impact | Concrete manifestations |
      | --- | --- |
      | School | Severe dysgraphia ("messy, untidy" notebooks, slowness, fatigability), difficulties in geometry and visual arts, handling tools (ruler, compass). Written work is often unusable for learning. |
      | Daily life | Difficulty dressing (buttons, laces), eating neatly, using cutlery. Slowness getting ready, which can lead to teasing. |
      | Social life & leisure | Difficulties with construction games and team sports. The child may be sidelined or the "last one picked" for teams. |
      | Overall development | Very early damage to self-esteem (from preschool on), anxiety, sleep and eating difficulties. The child is often aware of their difficulties, which increases their suffering. |

      1.3. Diagnostic Process and Tools

      The diagnosis must be made by a specialist physician following a full synthesis including:

      • The anamnesis (the parents' account).

      • Observations from the school (notebooks, report cards, teachers' written comments).

      • A neuropsychological assessment (often based on the WISC-5, which reveals a characteristic profile of good verbal abilities contrasting with graphic and visuospatial difficulties).

      • Complementary assessments (occupational therapy, psychomotor therapy).

      A simple tool, the DCDQ-F questionnaire, is available online and can be offered by teaching teams to families to open a dialogue and, when DCD is highly probable, point them toward a specialist consultation.

      1.4. Classroom Accommodation Strategies

      The goal is to work around the difficulty and reach the same objective by another route.

      General principles:

      Favor the auditory-verbal channel: use oral work both for learning and for demonstrating knowledge.

      Relieve the graphic load: drastically limit copying. For these children, the graphomotor gesture is not a learning tool.

      Adapt materials: use legible fonts (Arial, Verdana), increase line spacing, air out the layout, isolate exercises on the page.

      Account for slowness: lighten the workload (remove exercises) or grant extra time.

      Value effort: be lenient about presentation and neatness.

      Accommodations by level and subject:

      Cycle 2 (CP-CE1):

      Reading/Writing: spell words aloud to learn their spelling rather than copying them. Use adapted ruled paper (Gurvan-style lines).

      Mathematics:

      Avoid finger counting. Favor manipulation in which each counted object is moved or crossed out.

      Use templates for setting out written calculations.

      Verbalize symbols explicitly ("<" becomes "less than").

      Cycles 3 and 4 (upper primary and middle school):

      All subjects: provide quality course materials (photocopies, digital files on the ENT). Allow audio recording of lessons. Use highlighters rather than underlining with a ruler.

      Digital tools: a computer or tablet (handier for photographing the board) becomes an essential compensation tool, with spell-checking software and tools such as the Cartable Fantastique ribbon.

      Mathematics/Geometry: allow a calculator. Use software such as GeoGebra. Have figures drawn by the AESH or a peer. Assess knowledge of the properties of figures orally.

      PE: offer alternative roles (team captain, referee, organizer) and assess personal progress rather than raw performance.

      The AESH's role: the AESH is an essential support whose role is to prepare materials, read instructions, encourage oral participation, and handle tools, but not to "do it instead of" the student.

      --------------------------------------------------------------------------------

      2. The Dyscalculias: A Plural Disorder

      Presented by Michel Mazaux, the dyscalculias are a heterogeneous set of specific disorders of calculation and number processing.

      2.1. The Fundamental Link Between Number and Space

      The brain processes numbers by relying heavily on spatial representations.

      The brain regions dedicated to number and to space are tightly intertwined.

      The mental number line: we unconsciously organize numbers along a mental line, with small numbers on the left and large ones on the right.

      Mental arithmetic amounts to moving along this line. This representation develops with schooling, shifting from a "compressed" (logarithmic) scale to a regular (linear) scale for the numbers a child has mastered.

      Visuospatial procedures: counting objects and writing numbers (a positional system) are intrinsically visuospatial activities.

      2.2. The Different Types of Dyscalculia

      It is crucial to distinguish several types of dyscalculia, because they do not share the same origin and are not treated in the same way.

      1. Number Sense Disorder: Impairment of the innate "small neural network" dedicated to processing numerosity. The child struggles to estimate quantities and to grasp orders of magnitude.

      2. Spatial Dyscalculia: Often associated with DCD (developmental coordination disorder) involving visuospatial difficulties. The child struggles with counting, aligning digits in written calculations and understanding the positional system.

      3. Linguistic Dyscalculia: Associated with a Developmental Language Disorder (TDLO/dysphasia).

      The difficulty lies in mastering the verbal sequence of number words and in transcoding (going from spoken to written form, e.g. "soixante-dix-sept" / seventy-seven).

      4. Dysexecutive or Attentional Dyscalculia: Associated with ADHD.

      The child makes errors due to a lack of inhibition (an addition routine intrudes into a multiplication), poor planning of steps, or omissions (forgotten carries).

      It is essential to distinguish these "dys" disorders from logico-mathematical difficulties, which stem from weaker logical intelligence and resemble a mild intellectual disability rather than a specific neurodevelopmental disorder.

      2.3. From Difficulty to Disorder: The Response to Intervention (RTI) Model

      To distinguish a struggling student from a student with a disorder, a three-tier approach is recommended:

      Tier 1: Explicit, evidence-based teaching (e.g. the Singapore method) for the whole class.

      Tier 2: For the 15-20% of students who do not progress enough, small-group instructional reinforcement (more time, more manipulation, more exercises) for 3-4 months.

      Tier 3: If 5-8% of students remain in serious difficulty despite this reinforcement, a complete assessment (psychological, neuropsychological, speech-language) is needed to make a diagnosis of dyscalculia.

      2.4. The Importance of Differential Diagnosis

      Knowing which type of dyscalculia a child has is fundamental, because the remediation strategies will differ.

      For example, a child with spatial dyscalculia will benefit from visual aids and templates, whereas a child with linguistic dyscalculia will need intensive work on spoken mathematical language.

      --------------------------------------------------------------------------------

      3. Executive Functions and Attention Deficit Hyperactivity Disorder (ADHD)

      Presented by Jessica Sav-Pebos, neuropsychologist, ADHD is a disorder of self-regulation rooted in the functioning of the executive functions.

      3.1. Executive Functions: The Brain's "Conductor"

      Executive functions are the high-level processes that allow us to regulate our thoughts, emotions and behaviors in order to reach a goal. They are essential for organization, planning and adaptation. The main ones are:

      Initiation: The ability to start a task.

      Planning: Organizing the steps needed to reach a goal.

      Inhibition: The ability to curb impulses and resist distractions.

      Mental Flexibility: The ability to change strategy, adapt to the unexpected and see things from another angle.

      Working Memory: The ability to hold and manipulate several pieces of information in mind simultaneously.

      Emotional Regulation: Managing the intensity and expression of emotions.

      3.2. ADHD: A Disorder of Regulation

      ADHD is not a "deficit" of attention but an inability to regulate it effectively.

      The child struggles to direct and sustain attention on an unstimulating target. Three clinical presentations are distinguished (DSM-5): inattentive, hyperactive-impulsive, and combined.

      The diagnosis, made by a physician, is liberating because it replaces negative labels ("lazy", "daydreamer") with a neurobiological explanation.

      3.3. The Three Axes of Dysregulation in ADHD

      1. Attentional Dysregulation:

      Procrastination: Extreme difficulty initiating a task, not from lack of motivation but from atypical brain functioning. One must "activate the body so that the brain follows".

      Distractibility: Lack of inhibition in the face of internal distractions (thoughts) and external ones (noises).

      "Sieve-like" working memory: Difficulty retaining multiple instructions, hence the importance of breaking tasks down and using visual aids (sticky notes, diagrams).

      2. Behavioral Dysregulation:

      Impulsivity: The child acts without thinking about the consequences because the "brake" (inhibition) is faulty. He knows the rule but cannot apply it at the right moment.

      Rigidity: The lack of mental flexibility can trigger explosive reactions to unexpected events or changes, because the child cannot adjust his "plan A".

      3. Emotional Dysregulation:

      Hypersensitivity: Emotions are experienced with great intensity and can "hijack" all available attention.

      Narrow window of availability: The child shifts very quickly from boredom (if the task is not stimulating enough) to overload (if the task is too complex), pushing him out of his optimal learning zone.

      3.4. Intervention Strategies and the Teacher's Stance

      Reinforce rather than punish: The most effective stance is to "pay attention to what you want to see more of". Efforts and positive behaviors, however small, should be systematically noticed and verbalized.

      Structure the environment: Help the child organize his time, materials and tasks by providing external aids (timers, visual schedules, broken-down instructions).

      Respect neurodiversity: Understand that a hyperactive child's nervous system needs to discharge before it can calm down.

      Offering movement breaks (pushing against a wall, stretching) is more effective than imposing relaxation.

      Be a "detective": The teacher's role is not to diagnose but to observe precisely the functional impact of the disorder ("the handicap") in class.

      These concrete observations are extremely valuable to the whole support team.

    1. Store

      The store menu raises concerns regarding the Operable principle: the icons are clear, but users who rely on keyboard-only navigation might find it hard to navigate the menu.

    1. Synthesis on Digital Citizenship Education: Building on Young People's Practices

      Executive Summary

      This synthesis analyzes perspectives and strategies for digital citizenship education, based on contributions from experts in sociology and digital education and from a school-based practitioner.

      The central idea is a paradigm shift: moving from a "risk-centered" approach, focused on protection and prohibition, to a stance of accompaniment that builds on young people's actual practices and interests.

      The speakers stress that young people use digital technology for deep reasons tied to identity building, stress regulation and the search for answers that adults do not always provide.

      To be effective, educators must adopt a stance of empathy, of legitimizing young people's digital cultures and of co-constructing knowledge.

      The ultimate goal is to develop their reflexivity, critical thinking and capacity to act, helping them understand how platforms work, their rights, their duties and the emancipatory potential of digital technology, rather than settling into a posture of distrust.

      --------------------------------------------------------------------------------

      1. Redefining Digital Citizenship beyond Risks

      The starting point of the discussion is the observation that adults often perceive the notion of digital citizenship through the prism of worry and protection.

      The speakers agree on the need to broaden this vision.

      A Broader Definition: The Council of Europe's definition is cited as a model, including positive dimensions such as inclusion, creativity, empathy and active participation.

      Doing "with them" rather than "for them": There is growing awareness of the importance of involving young people in building their own digital citizenship.

      Unsuitable Vocabulary: According to Nicolas Bourgeon, a school librarian-teacher, the term "digital citizenship" is institutional jargon that does not resonate with students.

      The effective approach is to "use their own words".

      Priorities for Digital Education

      Each speaker defines one priority for digital citizenship education:

      | Speaker | Organization | Priority | Key Quote |
      | --- | --- | --- | --- |
      | Axel Dein | Director, Internet sans crainte | Understand | "Understand the digital space we operate in, understand the services we use, understand the algorithms, in order to be an informed user." |
      | Jocelyn Lachance | Sociologist, Crédat | Value | "What we often forget is that most young people actually behave well in the digital age, and the question, as adults, is what we are capable of doing to value good practices." |
      | Nicolas Bourgeon | School Librarian-Teacher | Adapt | "These are words that belong to a rather institutional vocabulary, so the approach I try to take is to use their own words." |

      2. Changing the Adult Gaze on Young People's Digital Practices

      A fundamental criticism of the current approach concerns the way adults look at young people's digital uses, often tinged with ignorance and fantasy.

      The "Risk-Centered" Gaze and Its Limits

      Jocelyn Lachance notes that adults' interest in young people's practices is often "risk-centered", concentrating on harmful aspects.

      This focus has several negative consequences:

      It obscures the benefits: Young people use digital technology for reasons essential to their development: identity building, dealing with existential questions, socialization.

      It creates a gap: Young people feel that adults "miss what is essential for them", namely the meaning and benefits they find online.

      Young People's Loneliness and Adults' Unavailability

      A recurring theme is young people's sense of loneliness in the face of digital technology.

      Lack of support: According to Axel Dein, young people "are extremely alone" and "do not identify the adults around them as people likely to support them".

      Digital technology as a stopgap: Jocelyn Lachance confirms that young people look online for what they do not find from adults.

      Research on young people's use of AI shows that they turn to it for "a structured, reassuring answer" when they perceive adults as unavailable or when the subject is sensitive (sexuality, death).

      The Question of Prohibition

      Prohibition is a structuring educational practice, but applying it to the digital world raises complex questions.

      Jocelyn Lachance warns against a simplistic approach:

      1. Meaning: Adults must question their real motivations behind a prohibition.

      2. Effectiveness and Displacement: Banning access to one space can push young people toward another, potentially less safe, space.

      3. Loss of Benefits: Prohibition can eliminate practices that benefit young people, such as stress regulation.

      The example of a Quebec high school that banned smartphones is telling: students revealed that they used their phones to listen to music and isolate themselves in order to manage stress before exams.

      3. From Prevention to Accompaniment: Building on Actual Practices

      The second part of the discussion focuses on methods for moving from a posture of mere prevention to genuine accompaniment, starting from young people's concrete uses.

      Co-construction and Immersion

      Internet sans crainte, headed by Axel Dein, develops resources (serious games, interactive scenarios) by involving young people directly.

      The role of youth panels: They are essential to ensure the accuracy and authenticity of the resources.

      Young people often push the scenarios to be more intense so they reflect reality ("This is too tame; what we go through is more intense, harder than that.").

      Fostering critical thinking: The goal is not to lecture top-down but to "get them to step back and question themselves".

      These group sessions enable benevolent "self-regulation" among peers.

      Starting from Students' Interests

      Nicolas Bourgeon runs a project with sixth-grade (6ème) students on influencers, a topic they are passionate about. The approach is as follows:

      1. Starting point: Students choose an influencer they like.

      2. Guided analysis: They decode the business model (attention economy, monetization), commercial partnerships (regulated by the 2023 law) and audience-capture techniques.

      3. Awareness: This work makes them realize that when they view content, "they create value". Students easily identify the financial circuits (products, shops, micro-donations).

      4. Digital Citizenship in Action: Toward Emancipation

      The final part explores ways to give young people a real capacity to act (empowerment) and to develop their reflexivity.

      The "Digital Practice Awareness" (DPA) Experiment

      Research conducted by Mélina Solari Landa with high school students offers key lessons:

      The primacy of desire: "Desire is the best driver of adolescents' usage."

      The need to socialize, and emotions generally, outweigh a rational assessment of risks, even when young people are informed about how their data is used.

      Difficulty with temporality and distance: Young people struggle to see how their current online actions can have long-term consequences or affect people on a global scale.

      The ineffectiveness of prescriptive approaches: Restrictive logics do not develop reflexivity.

      Developing Reflexivity and Trust

      The goal of reflexivity: For Jocelyn Lachance, the aim is to get young people to reflect on what they experience and feel before a problematic situation arises.

      A supported "disconnection journal" is more effective than a mere challenge.

      The risk of breaking trust: An approach too focused on risks can be counterproductive.

      One girl, after receiving prevention messaging, did not dare tell her parents about an experience on a dating app for fear of being scolded.

      Legitimizing their culture: For Nicolas Bourgeon, building trust involves recognizing the legitimacy of students' "geek culture".

      Educating about Rights, Duties and the Capacity to Act

      Axel Dein insists on the need to train young people to understand their rights and duties online, since their first digital activity is often social.

      Internet sans crainte has developed a teaching kit built around three axes:

      1. Understanding one's rights and duties.

      2. Understanding the relationship to others online (public/private boundaries, freedom of expression).

      3. How digital technology provides the power to act.

      Drawing on Digital Codes

      Jocelyn Lachance suggests learning from the reasons YouTubers and Twitch streamers succeed with young people. These creators give the feeling of creating a safe space where:

      • Young people feel the discussions start from them ("you can ask the real questions").

      • Their voice is understood and valued.

      • Their culture is not "delegitimized".

      The challenge for the adult is to question their own stance:

      "Am I personally in a stance that delegitimizes digital practices and makes a kind of movement of repulsion toward young people?"

    1. Improving Student Engagement in Middle and High School: A Synthesis of Hassan Nassiri's Strategies

      Executive Summary

      This document synthesizes the strategies and reflections shared by Hassan Nassiri, teacher and trainer, for improving student engagement in secondary education.

      The recommended approach rests on four fundamental levers of action:

      Ritualizing lessons to create a reassuring framework,

      Varying materials and formats to sustain attention,

      Giving responsibilities to involve students, and

      Valuing progress rather than performance alone.

      For the most reluctant students, the method is to identify the causes of their disengagement and to offer "progressive entry points" via micro-tasks that create first successes.

      Creating a collective class dynamic, through interdisciplinary projects and a co-constructed class charter, is also essential.

      The teacher's stance is decisive: it must embody an alchemy of high expectations and benevolence, setting a clear framework while offering constant encouragement.

      Errors must be de-dramatized and presented as a necessary step in learning.

      Finally, it is crucial not to remain isolated and to rely on the teaching team (colleagues, CPE, administration) to handle complex situations and ensure educational consistency.

      1. Introduction and Context

      Hassan Nassiri, a half-time classroom teacher and trainer for the Réseau Canopé and the academic inspectorate, addresses the central question of student engagement in middle and high school.

      Drawing on his experience, particularly in vocational high school, he shares concrete professional practices and field feedback intended to help teachers, especially beginners, "leave no one by the side of the road".

      The main question is how to get all students working, including those who seem least cooperative, in order to create and sustain a positive class dynamic throughout the year.

      His advice applies to split classes (12-15 students) as well as larger ones (24-30 students and more).

      2. Theoretical and Pedagogical Foundations

      To inform his thinking, Hassan Nassiri draws on several key references that underscore the importance of pedagogy and organization in classroom management:

      François Dubet ("Les lycéens"): This book finely analyzes students' relationship to schoolwork, showing the wide variety of profiles and the influence of personal history on engagement.

      Philippe Meirieu ("La pédagogie différenciée"): From Meirieu, Nassiri retains the fundamental idea of adapting teaching arrangements so that every student finds their place and no one feels excluded.

      Eduscol and Réseau Canopé resources: These resources recall an essential principle: "classroom management is not just discipline; it is above all organization and pedagogy".

      3. The Four Fundamental Levers of Engagement

      Hassan Nassiri identifies four concrete levers for turning these ideas into classroom action.

      a. Ritualize

      Establishing rituals at the start and end of class creates a reassuring framework for students, especially the most withdrawn.

      Start of the session: Begin with a "flash question" or a "word on current events".

      End of the session: Finish with a quick round-the-table summary of what should be remembered.

      b. Vary

      To avoid routine and boredom, it is crucial to vary materials and working formats.

      Alternating materials: Combine traditional written materials ("the good old paper method") with digital tools (quizzes, etc.). Hassan Nassiri insists on making students write, believing they "do not write enough".

      Alternating activities: The goal is to break the routine during the class period in order to capture attention.

      Group work: This format is considered "very interesting" for giving students responsibility and involving those who are more withdrawn or shy.

      c. Give (responsibilities)

      Assigning specific roles to students, particularly within group work, radically changes their involvement.

      Examples of roles: Timekeeper, reporter, materials manager.

      Impact: "When your students feel useful, their involvement changes." This works particularly well for shy students.

      Project-based learning: Putting students into projects makes them "genuine actors in their own training".

      Concrete examples include creating a mini-enterprise or organizing international mobility programs (Erasmus).

      d. Value

      This lever is considered "very, very important". It is about valuing students' progress, not just their final performance.

      Strong signal: Congratulate a struggling student not only for a correct answer but also for a clear approach or for progress since the previous session.

      Message conveyed: "This shows that effort counts as much as the result."

      Impact on the student: Regular encouragement and recognition of effort boost the student and strengthen their confidence.

      4. Strategies for Reluctant and Withdrawn Students

      For students who resist engagement, Hassan Nassiri proposes a targeted approach.

      Identify the cause: Ask what drives the refusal (fear of failure, rejection of school, personal history). "There is always an explanation."

      Offer "progressive entry points": Start with a simple "micro-task", for example in pairs, then gradually increase the difficulty.

      The goal is to "create a first success, however small", to encourage the student.

      Personalized support: Use half-group moments to physically approach the shyest students, sitting next to them to reassure them and support them individually.

      Value participation: The student must feel entitled to make mistakes. Encourage the attempt, even if it is wrong.

      Entrusting them with a simple mission, such as explaining an answer at the board, makes them visible and valued.

      5. The Power of the Collective: Creating a Class Dynamic

      Beyond individual actions, it is essential to build a collective class culture.

      Class charter: Draw up a charter with the students on the values and attitudes to adopt.

      Although time-consuming, this process makes them "fully actors in their own learning".

      Shared projects: Launching interdisciplinary projects allows you to work with other colleagues, avoid working alone, and show students the links between disciplines.

      This creates bonds for students and teachers alike.

      6. Specific Topics Addressed (Q&A Session)

      | Topic | Strategies and Advice |
      | --- | --- |
      | Managing pairs | There is no "miracle formula" (weak students together vs. mixed pairs). The teacher knows their students and must adapt the composition. Hassan Nassiri favors affinity-based groups and insists that "pairs should never be imposed", except in exceptional cases. Teacher supervision is key. |
      | Handling errors | Errors must be de-dramatized and valued as a driver of learning. Tell students: "you have the right to be wrong. Error is not negative; error is precisely what lets you move forward". Errors may also stem from unclear instructions from the teacher. |
      | Engagement across subjects | To counter students' tendency to neglect low-coefficient subjects, explain, drawing on the baccalauréat framework, that "every subject counts". This message must be carried by the whole teaching team to be effective. |
      | Use of digital tools | Alternating paper and digital is essential, since "all digital" can wear students out. For tools like Kahoot in middle school (where smartphones are not allowed), the teacher must set a very clear framework and rules before launching the activity, and sanction breaches in order to preserve credibility. |
      | Managing restless classes | Faced with a very restless class (26 students and more): do not stay alone. Lean on the team (colleagues, CPE, administration), identify the ringleaders, talk to them one-on-one to understand their behavior, and use tools such as the seating plan. |
      | Artificial Intelligence (AI) | A proactive approach is recommended: rather than banning, support students. This involves a dedicated session teaching them to "write a prompt" and to use AI in a "reasoned" way, understanding the generated answers. The teacher setting a framework is imperative. |

      7. The Teacher's Stance: Keystone of Engagement

      The success of these strategies rests fundamentally on the teacher's stance.

      The alchemy of high expectations and benevolence: This is the central principle. Provide a clear framework and be demanding, but always paired with constant encouragement and benevolent listening.

      Accessibility and credibility: Be accessible and keep your word. "When you say something, well, do it", because failing to do so destroys all credibility.

      Kindness should not be perceived as weakness, but as part of a respected framework.

      Lean on the team: It is essential to collaborate with experienced colleagues and especially with the CPE (head of student affairs), who knows the students closely and can provide valuable help on the psychological side.

      Pedagogical freedom: Hassan Nassiri concludes by recalling that the "famous pedagogical freedom" is a precious asset that allows teachers to implement these strategies and give their profession its full meaning.

    1. but in the simple, unambiguous declaration of tawḥīd found in the Qur’an

      Two thoughts. First, tawḥīd is nowhere found in the Qur’ān any more than "Trinity" is in the Bible. Also, when one examines what "one" means in Islam, we find the same complexity in Sunni theology as we do in the Jesus movement. God isn't simply one. His essence (dhāt) is one, but he has uncreated attributes (ṣifāt) that are distinguishable from Allah yet not separable. While the Islamic tradition disputes "person" as a key difference, the glaring correlation remains: God's oneness has complexity. He is one in essence and somehow distinguishable through his attributes. Of course, almost no Muslim or Christian considers the depth of these assertions, but the correlation remains intact. God is one, and God's oneness isn't as simple as some would like it to be. It is only in the New Testament that we find how the present-tense knowledge of salvation before death becomes the good news the world has always been waiting for. Paul, I leave you with John 5:24. Do you know that you have eternal life, will not be judged, and have crossed over from death to life? What two conditions must you fulfill according to Jesus the Messiah? If you fulfill these two conditions, would you know you have eternal life or not? If not, why not? When one has this knowledge, EVERYTHING CHANGES.

    2. the Qur’an spoke with a unified, unwavering divine voice. The difference was not merely textual but ontological

      This is true in a simplistic sense, but on more careful examination one is invited to face the qirāʾāt: early Islamic tradition leaves room for multiple readings considered worthy of recitation in prayer. Combine this with the wide range of inquiry Qur’ān readers have pursued across the Islamic tradition, and one finds "unified" to be quite un-unified.

    3. I found myself in theological freefall:

      Here I'd like to offer some questions. How do you, reader answer these?

      1. Have you the forgiveness of your sins?[^1]
      2. Have you peace with God, through our Lord Jesus Christ?[^2]
      3. Have you the witness of God's Spirit with your spirit that you are a child of God?[^3]
      4. Is the love of God shed abroad in your heart?[^4]
      5. Has no sin, inward or outward, dominion over you?[^5]

      [^1]: James 5:16 (NIV): Therefore confess your sins to each other and pray for each other so that you may be healed. The prayer of a righteous person is powerful and effective.
      [^2]: Romans 5:1 (NIV): Therefore, since we have been justified through faith, we have peace with God through our Lord Jesus Christ,
      [^3]: Romans 8:16 (NIV): The Spirit himself testifies with our spirit that we are God’s children.
      [^4]: Romans 5:5 (NIV): And hope does not put us to shame, because God’s love has been poured out into our hearts through the Holy Spirit, who has been given to us.
      [^5]: Romans 6:14 (NIV): For sin shall no longer be your master, because you are not under the law, but under grace.

    4. co-equality with God

      The New Testament asserts that Jesus made himself equal with God multiple times. But notice how Paul says this on the back of the oversimplified axiom "Jesus is God." He explains that Jesus is not "co-equal" by asking: how can he be God if he himself calls his Father God? See John 5:18, Mark 2:5–7, John 5:22–23, etc.

    5. Mark

      See Mark 14:62 and ask yourself whether the High Priest seems confused about what Jesus just asserted. Is it better to ask how Jesus can be claiming these things? And could the High Priest be right in thinking Jesus is committing blasphemy? Remember, the Jews wanted to kill the Messiah for blasphemy but couldn't get the Romans to do it on that basis; they had to turn to Jesus being a King.


    1. Analect 8.14 provides a good, brief summary of Confucius’s view on the political hierarchy. This seems like Confucius is emphasizing respecting boundaries and roles. He isn’t saying you should avoid thinking or caring about politics entirely, but rather that you shouldn’t interfere in matters outside your authority.

      This connects nicely with Analects 4.14, where he tells people to focus on being worthy of a position rather than obsessing over the position itself. Both passages emphasize focusing on self-cultivation and staying out of matters that aren't part of your business.

      It makes me wonder: is this advice primarily practical (to avoid chaos in governance) or moral (to cultivate humility and propriety)? Tentatively, I think it’s both: Confucius blends ethical behavior with practical wisdom here.

    2. I am very intrigued by Confucius’s take on music as a moral instrument, shown in Analect 3.25. Confucius’s emphasis on music here seems less about aesthetic enjoyment and more about moral psychology. Music is treated as a way of ordering emotions, not suppressing them. Proper music cultivates harmony within the person, aligning emotions with ritual and virtue, whereas disordered music reflects or produces moral disorder. This suggests that for Confucius, emotions are not morally neutral; they need shaping. Music functions almost like ethical training for the emotions, analogous to how ritual (li) trains outward behavior. If this is right, then music is not optional cultural decoration but a core tool of moral cultivation.

      If Confucius is right that music both reflects and shapes emotional order, then he might interpret modern hip-hop involving drugs and self-indulgence as evidence of deeper moral disorder rather than merely changing tastes. Confucius often links cultural decay to the shortcomings of rulers; so one might ask whether he would see modern music as reflecting ethical failures at the level of institutions, elites, or cultural authorities rather than blaming individuals alone. Does his framework risk dismissing new forms of expression too quickly?

  2. drive.google.com
    1. The Master said, “As in piling up earth to erect a mountain, if, only one basketful short of completion, I stop, I have stopped. As in filling a ditch to level the ground, if, having dumped in only one basketful, I continue, I am progressing.”

      This passage seems to relate to Confucius' view of self-cultivation as a constant task. While it may drain one of energy, the importance of continuing to cultivate oneself cannot be overlooked. What would Confucius say about someone continuing to dump basketfuls of dirt, but with these basketfuls being much lighter than they could actually carry? Would he judge their slow progress and low effort? I think that he might, given that effort is necessary to Confucius in all aspects of a life worth living. Just as with rituals, perhaps he would stress the need for full and genuine effort in every action.

    1. I have a personal computer and/or smartphone with a data plan and internet access.

      I am privileged enough to have a Chromebook from school and computer at home to do work. These have both helped me in many ways not only as a student but in everyday life.

    2. I had access to books in my native language in high school.

      I have never heard of students being told to write a book in their native language, but I believe this can be a very good thing to do.

    1. Focus — Indicating and managing focus is integral to keyboard accessibility. Grids — Visual layout should be efficient and consistent, without impeding document structure and accessibility. Headings — Headings provide a semantic and visual hierarchical structure to a document. Iconography — Iconography aids communication, but should not take precedence over text. Routing — Route changes in single-page applications need to emulate the conventions of page loads. Typography — Inclusive content must be readable, and for it to be readable it must first be legible.

      These points show specific accessibility viewpoints and concerns that are part of GEL's guidance.

    2. Iconography Iconography aids communication, but should not take precedence over text

      This shows the BBC doesn’t rely solely on icons. Icons without text can confuse people with cognitive disabilities or those unfamiliar with visual symbols.

    1. We used one motion-activated camera-trap (Cuddeback Capture or Cuddeback Capture-IR Plus; recovery time of 30 s per photograph) in each forest site, placing it in locations favorable for mammal detection

      Doing this so as not to scare off any test subjects is a good idea, but I wonder how well this method works with larger animals that might be territorial and not allow many other animals to walk around freely in the area.

    1. An evaluation judges the value of something and determines its worth. Evaluations in everyday experiences are often dictated by set standards but are also influenced by opinion and prior knowledge.

      judging, looking at every part and its worth

    1. You’ve probably heard that one quality found in good writing is voice. Voice refers to elements of the author’s tone, phrasing, and style that are recognizably unique to her or him. Having a distinctive, persuasive voice is crucial to engaging your audience — without it, your paper risks falling flat, no matter how much research you’ve compiled or how well you’ve followed other directions. Yes, academic writing has rules about format, style, and objectivity that you must follow, but this does not mean you can write boring, impersonal prose. You can — and should — develop an authorial voice no matter what subject you choose to write about.

      Having a unique writing style makes an author and their work more distinguishable and succinct as a whole