Cindy is an academic researcher and instructor, who is active in various anti-violence, health, and social justice movements.
Cindy's background
She is involved in anti-violence education, sex worker rights solidarity, local Indigenous arts, and has recently completed her Ph.D. focused on law and colonial violence.
Sarah's background
Kwakwaka’wakw nation
An Indigenous group of the Pacific Northwest Coast, in southwestern Canada.
THE POLITICS OF EVERYDAY DECOLONIZATION: STARTING WHERE WE STAND
Subtitle
When we experience pain, or when we see someone else in pain, our brain activates similar regions; that is what we refer to as empathy.
for - empathy - neuroscience of - observing pain in others - triggers activations in same brain region as when we experience pain ourselves - observed across species
Most of the time there are categorization processes, discrimination processes, and dehumanization processes.
for - genocide - preceded by 10 preliminary stages - 1. classification - 2. symbolization - 3. discrimination - 4. dehumanization - 5. organization - 6. polarization - 7. preparation - 8. persecution - 9. extermination - 10. denial
When I asked them why they committed a genocide when they were just civilians, not even trained military members, 70% of former perpetrators in Rwanda said "I just followed orders," and in Cambodia they report exactly the same reason.
for - genocide - why? just following orders - Rwanda and Cambodia interviewees reported the same results - just following orders
I just interviewed soldiers who had just returned from Gaza, and they said exactly the same.
for - genocide - Israeli soldiers - give same reason for committing atrocities - just following orders
Historically, the most terrible genocides and slavery have resulted not from disobedience but from obedience.
for - genocide - stem from obedience
Following Orders: The Neuroscience of Obedience
for - neuroscience - of obedience - Emilie Caspar - book - the neuroscience of (dis)obedience
It requires a radical re-envisioning of how we achieve status. What is status? And a radical balancing of power, a reconfiguration of power.
for - quote - radical re-envisioning of how we achieve status - SRG comment - Deep Humanity - re-envision status
Cap wealth at $10 million.
for - inequality - advocate for - cap wealth at 10 million USD
We're at the 80th anniversary of the Trinity test. So 80 years ago, the very first atomic weapon was exploded in the sands of New Mexico. Beforehand, the scientist Enrico Fermi was taking bets.
for - trivia - Trinity test 80th anniversary - Enrico Fermi took bets on Edward Teller's calculation
It's what my favorite archaeologist, Bruce Trigger, calls a thermodynamic explanation of symbolic behavior.
for - paper - Monumental architecture: A thermodynamic explanation of symbolic behavior - Bruce Trigger
Another thing changes. Apart from the shift away from egalitarianism, we also see a shift away from the principle of least effort.
for - correlation - status competition - anti-least effort (grand monuments)
Historically, if you look at a history textbook, it's essentially a roll call of mass murderers.
for - explanation - why leaders are often psychopaths - history book is full of mass murderers
For men, there's rough agreement that among the general population, 1% of people would pass the clinical threshold for psychopathy.
for - stats - psychopathy - 1% of male population
Very few people are willing to go for status through the use of violence, force, intimidation, and bullying.
for - stats - very few people willing to do violence for status
What's interesting is that both these things dramatically change once we get to the Holocene.
for - holocene - social shift - from - egalitarianism - to - power hierarchy
active social choice towards egalitarianism
for - anthropology - evidence that our ancestors were egalitarian and the collective was prioritized
We can think of this as a long-term process going all the way from a thin and slow Anthropocene to a thick and fast one, but one that happens over thousands of years.
for - anthropocene - evolution of
If you look at the destructive potential of weapons over history, you see a very similar spike, a very similar exponential curve, as we've seen over and over again today.
for - great acceleration - exponential growth of destructiveness of weapons (in joules)
The phasing out is really a challenge, and maybe we need collapse. So how do we collapse things? That's the big question.
for - quote - phasing out is a challenge - maybe collapse is needed
Transformation comes with deep emotional and psychological challenges. In the research, we also talk about transition pain and transition risk.
for - definition - transition risk - definition - transition pain
I personally feel the decision was made in 2014, before we'd even put forward a proposal. It was already decided by those with power within the ICS and IUGS where it was going, because the actual data behind the submission wasn't the reason for rejection.
for - definition - anthropocene - rejection of the term - it was rejected on dogmatic grounds, not on the evidence provided
If Bill Gates took one of my trainings, I would say: Bill, it's "yes, and". When we're prioritizing, it's not either/or; you don't get to pick one of the nodes, you have to prioritize the network.
for - Bill Gates climate article - is AND, not EITHER OR
You might have seen this article that Bill Gates published at the end of October this year. Donald Trump wrote about this particular article because Bill Gates basically wrote: look, climate change is a thing, but don't get all hooked up about climate change, because there's other stuff going on, like people's health.
for - Bill Gates climate article - SRG Comment - need to look at it from multi-scale, multi-dimensional wellbeing framework to make sense of it
We need to build a network image of the problem.
for - the network may look different to different people - new trailmark - SRG comment - distinguishes when I'm making a comment from comments from the source - SRG comment - perspectival knowing
As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the “10%” expands to continue to employ almost everyone
If artificial intelligence is more effective at a given job than humans, even if not in all aspects, wouldn't companies choose artificial intelligence for its overall proficiency and availability, and because it doesn't need to be compensated? Also, in some industries, such as art and entertainment, I find it hard to think of jobs that would be created if AI is fully used to generate the content.
Prevention of Alzheimer’s.
Locating the cause and solving the issue are very different things. I am skeptical of this claim.
“finish the job”
An interesting thing to think about: if there is a vaccine for everything, what will people who resort to biological weapons do? I assume that if some people choose to continue making biological weapons, we could come across things that vaccines cannot fix.
Thus, it’s my guess that powerful AI could at least 10x the rate of these discoveries, giving us the next 50-100 years of biological progress in 5-10 years.
If artificial intelligence were used to enhance the rate at which biological discoveries are made, I do not think it would improve biological progress by that much, because of limitations on funding and research into new discoveries.
it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans
I think this might be an instance of his radical thinking, since being able to improve upon the experimental models is not certain at all.
CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the 80’s, but it took another 25 years for people to realize it could be repurposed for general gene editing
I think this greatly highlights his point: if things that were discovered much earlier are analyzed thoroughly for insights, then we can develop these technologies much faster.
For example, even incredibly powerful AI could predict only marginally further ahead in a chaotic system (such as the three-body problem) in the general case, as compared to today’s humans and computers. [Footnote 99: In a chaotic system, small errors compound exponentially over time, so that even an enormous increase in computing power leads to only a small improvement in how far ahead it is possible to predict, and in practice measurement error may degrade this further.]
I wonder if he is making this claim because he believes these problems aren't solvable, or because the method of increasing intelligence doesn't allow for solving them.
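The footnote's claim about compounding errors is easy to see in a toy chaotic system. A minimal sketch (my illustration, not from the essay), using the logistic map at r = 4, a standard chaotic example:

```python
# Sensitive dependence on initial conditions: two logistic-map
# trajectories that start 1e-12 apart diverge to order-1 differences
# within a few dozen steps, because the gap roughly doubles each step.

def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12  # nearly identical starting points
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```

Because the separation grows roughly exponentially, each extra digit of measurement precision buys only a few more steps of predictability, which is exactly why more computing power yields only marginal gains here.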
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer;
It's interesting that, despite the current progress in robotics, he doesn't define a strong AI as needing physical components.
Need for data. Sometimes raw data is lacking and in its absence more intelligence does not help.
I did not think about this limitation regarding complex concepts like particle acceleration, but it makes sense with the copious amounts of data needed to train large language models and other forms of artificial intelligence.
but with AI’s doing everything, how will humans have meaning
It's especially important to consider this alongside the use of AI for art/music. For creative tasks, I think that should solely be reserved for humans, as it's a genuine form of expression. Taking away this expression I think would take away meaning from human life.
AI simply offers an opportunity to get us there more quickly—to make the logic starker and the destination clearer.
This is the hinge of the ending: the author reframes AI not as a disruptor but as an accelerator of moral destiny. It’s an elegant story, but it rests on the assumption that human values won’t fracture under the pressure of rapid transformation.
Transparency would be important in any such system
Transparency behind AI decisions would significantly increase user trust in it - allowing for further understanding of how the AI made a decision and whether or not there were any biases involved.
The “arc of the moral universe” is another similar concept.
This suggests moral progress is directional. That’s a big philosophical stance. Critics would say moral trajectories aren’t laws of nature and could easily reverse, even with AI.
I am not suggesting that we literally replace judges with AI systems, but the combination of impartiality with the ability to understand and process messy, real world situations feels like it should have some serious positive applications to law and justice
I think if this were to happen, AI would need to be rid of all biases in order to allow for true justice. But also, the same way that judges have a human aspect in their decisions, should there be the same allowance for AI?
If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them.
This passage banks heavily on a best-case alignment scenario without acknowledging that even small misalignments or geopolitical conflict could slow things dramatically. It functions more as inspirational rhetoric than probabilistic forecast.
Repressive governments survive by denying people a certain kind of common knowledge,
An educated population is one of the biggest defenses against a tyrannical government. It's important that a population knows its rights, and if AI is the best way to distribute this knowledge, then it can definitely be helpful, so long as the knowledge is accurate.
blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment
Reminds me of how nuclear weapons have been regulated and only certain countries are able to develop/possess them
if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome.
I agree, I think that the overreliance of AI in daily life may lead to easier implementation of ethically bad use cases by people in power, and so because of this, it is important to consider how AI is being used in daily life and whether or not it is truly necessary.
AI-enhanced research will give us the means to make mitigating climate change far less costly and disruptive, rendering many of the objections moot and freeing up developing countries to make more economic progress.
I think that the effect of AI on the climate needs to be considered as well. Even in the U.S., there are so many lower-income communities that are being destroyed environmentally because huge datacenters are being built and stealing all of the resources available to the members of these communities, especially water. This impact is skewed towards those of lower socioeconomic status, which already mimics other technological advancements and the exploitation of less-developed countries.
the idea of an “AI coach” who always helps you to be the best version of yourself, who studies your interactions and helps you learn to be more effective
If this "AI coach" use case truly comes to fruition, there would need to be safeguards and frameworks in place, because even now there are people who are unfortunately dependent on AI chatbots to replace a lack of human connection in their lives. I feel like an "AI coach" would have to encourage real human interaction in one's life instead of being the first thing that a person turns to.
Once human lifespan is 150, we may be able to reach “escape velocity”, buying enough time that most of those currently alive today will be able to live as long as they want
While I can certainly appreciate Amodei's passion, I feel like this application is a bit far-fetched and kind of goes back into the rhetoric about these sci-fi claims that people have about AI. Also, with how gen AI has been causing so many environmental issues, I wonder what the environment would look like when human lifespan reaches 150.
An aligned AI would not want to do these things
This reminds me of our discussion about the movie Megan and how we would have to implement safeguards in AI in order to preserve the interests of the human race
This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
I understand the excitement and push toward powerful artificial intelligence to solve complex problems and automate processes, but I don't understand the desire to use it to fully generate art, literature, and entertainment.
and also to learn. [Footnote 88: This learning can include temporary, in-context learning, or traditional training; both will be rate-limited by the physical world.] But the world only moves so fast
The emphasis on "to learn" is important, because people just using AI without truly understanding it (or even understanding its output) might create a population where the level of intelligence that AI can produce is maxed out, since its output is just generally accepted. (There's no desire for more intelligent AI.)
I am sure I will get plenty of things wrong
I appreciate Amodei's humbleness in this statement, because I feel like a lot of CEOs try to present themselves as "all-knowing" (or at the very least knowledgeable in the areas their technologies hope to be applicable in). However, I think that Amodei's approach is very human, which creates a sense of trust/connection with the reader.
The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.
I agree with this - I feel like there's already a stereotype of tech CEOs of being disconnected from the general population's goals (even just thinking about Musk and Zuckerberg), and language like this doesn't help clear this stereotype up.
The list of positive applications of powerful AI is extremely long (and includes robotics, manufacturing, energy, and much more), but I’m going to focus on a small number of areas that seem to me to have the greatest potential to directly improve the quality of human life.
I agree that artificial intelligence can be used to improve the quality of human life on many fronts, but I also believe that it can detriment our lives and be used maliciously in a plethora of ways. For example, two of the categories that he is most excited for AI to be applied to are peace and governance, and work and meaning. Applications such as lethal autonomous weapons and job displacement due to automation make me rather cynical about AI's usage in these areas.
On the other hand, the risks are not predetermined and our actions can greatly change their likelihood.
It's typically easier to identify the benefits of new technologies because they are created with specific purposes in mind. Innovators get caught up in how they will change an area for the better and are unable to recognize significant harms with those technologies. For example, with artificial intelligence, it was mainly created to automate tasks and solve complex problems, but overreliance on AI can lead to cognitive atrophy and diminished critical thinking skills.
We could summarize this as a “country of geniuses in a datacenter”.
This shows the scale of what he thinks AI will be capable of doing. It reframes AI not as a single tool, but as millions of expert-level workers operating at once, which explains why he believes the impact will change the world.
If AI further increases economic growth and quality of life in the developed world, while doing little to help the developing world, we should view that as a terrible moral failure
He calls out inequality as a real risk even in the "good" future. It shows that for him, success is not just technological, it's about whether the benefits reach everyone.
Fear is one kind of motivator, but it’s not enough: we need hope as well.
He's arguing that AI conversations shouldn't be all doom. People need something inspiring to work toward, not just warnings about what could go wrong.
Women’s rights have improved over the years, but continued progress is not guaranteed. In a time of escalating conflicts, rising authoritarianism and devastating climate change impacts, women face many issues related to education, work, healthcare, legal rights, violence and much more. By understanding these issues, the world can work together to achieve gender equality, stronger human rights protections and safety for all people. In this article, we’ll explore 20 of the most important issues affecting women and girls today.
Colleagues, I invite you to use this space to collaboratively annotate the module introduction and summary. Please highlight areas that are clear or engaging, note any points that may need clarification or stronger alignment with learning outcomes, and add suggestions for improvement. You are welcome to tag your comments (e.g., clarity, engagement, alignment, accessibility) and build on each other’s observations. Your insights and feedback will contribute to improving instructional clarity, learner support, and the overall quality of the module. Thank you for engaging in this shared learning process.
GitHub Copilot
Maybe it would be useful to link to this website, where they explain and give recommendations on how to use GitHub Copilot: https://github.blog/developer-skills/github/how-to-use-github-copilot-in-your-ide-tips-tricks-and-best-practices/?ref_product=copilot&ref_type=engagement&ref_style=text
(EXTRA) Prompt Files Are Optional Tools, Not Documentation
I think this is a great idea! Here is some documentation about it: https://docs.github.com/en/copilot/tutorials/customization-library/prompt-files/your-first-prompt-file
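For anyone new to the feature, a minimal sketch of what such a prompt file might contain (hypothetical content; check the linked docs for the exact frontmatter fields supported):

```markdown
---
description: 'Review AI-generated code against our validation checklist'
---
Explain the selected code step-by-step, list all assumptions it makes,
and identify any cases where it might break.
```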
This bundle provides everything reviewers need. It also ensures that anyone who maintains the code later won’t be flying blind.
We could include here my suggestion of documenting which AI-generated functions were touched and/or altered by the user and which are exactly as suggested by the AI, just to make clear which functions the authors have more knowledge of because they modified them.
These tests must be included in your PR.
Maybe add test coverage to the summary?
Testing and Edge Cases
I think before testing we need a section for an efficiency check (the issue that Zander mentioned in the meeting). We could either create a protocol to ask the AI to check whether the objective can be achieved more efficiently, or review it on our own and find places where there seems to be unneeded code. I think the second option is better because it lets us check whether the author really reviewed (at least skimmed) the code created by the AI.
this code
I still feel this is a bit too vague. Maybe the testthat tests per function and the validation checklist per task?
how this code works
This might be a bit vague. We could decide whether to do it per function or per task. Also, if it is per task, it would be great to ask for a diagram of the new functions and how they interact with old functions.
Ask Copilot to Critique Its Own Code
This is great and I would really put emphasis on this! (see suggestions next comment)
Please keep a concise running summary of our interaction, including:
I think one of the most important parts of creating the prompts is the context the AI is using: the documents/files we attach when creating the prompts. So this could also go in the summary: record the context used to produce the responses. (Also, maybe even which AI is answering, GPT vs. Claude.)
protocols and recommended guidelines
It might be helpful to clarify here that protocols are mandatory steps that must always be followed and demonstrated, while guidelines are best-practice suggestions meant to improve quality but not strictly required.
validation materials
This is a very strong list in my opinion! Two suggestions: (1) slightly differentiate "step-by-step explanation of the code" from "plain-language explanation," since they can overlap (do we mean a technical explanation vs. the high-level rationale behind the code?); (2) maybe add a point on error-handling expectations, for example how the code responds to invalid or missing data.
Explain this code step-by-step. Describe the purpose of each major block. List all assumptions you’re making. Identify any cases where this code might break.
Here is where I somewhat disagree with the approach. I find it safer to first ask Copilot how it would solve the problem and to show me the steps and the plan; then modify its plan according to what you think is right; and only then ask the agent to modify the code. Testing the logic before the modifications makes it easier.
Once the task is done, ask Copilot:
We might need to be clearer about the level of specification of a Task. Would/could we have many task summaries per PR? If so, are we going to keep all summaries for all tasks, or clean these summaries up at some point? A suggestion: maybe one summary of the task summaries per PR.
Copilot is surprisingly good at pointing out its own flaws when prompted this way. Use its critique to improve the final version.
One improvement here could be to ask Copilot to evaluate the code incrementally and at execution level (at runtime), that is, to evaluate the code based on how it actually runs, not just how it looks or reads: for example, verifying assumptions about inputs/outputs and testing components in isolation, to prevent individual errors/failures from triggering cascading errors that are very difficult to debug.
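To make this concrete, a minimal sketch of an isolated, execution-level check (shown in Python for brevity; in this repo the equivalent would be a testthat test, and normalize is a hypothetical component):

```python
# Isolated runtime check: verify one component's input/output contract
# on its own, so a failure stays local instead of cascading downstream.

def normalize(values: list[float]) -> list[float]:
    """Hypothetical component: scale values so they sum to 1."""
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalize: values sum to zero")
    return [v / total for v in values]

# The contract is checked by actually running the code, not by reading it:
out = normalize([2.0, 2.0, 4.0])
assert abs(sum(out) - 1.0) < 1e-9   # output invariant holds at runtime
try:
    normalize([0.0, 0.0])           # invalid input must fail loudly...
except ValueError:
    pass                            # ...with a clear, local error
```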
At the end I’ll ask you to produce a markdown summary.
Maybe create the document right away and save it, in case the computer gets rebooted.
Explain in your own words how Fuller would assess the validity of this law.
According to Fuller, laws must satisfy the inner morality of law, for which he formulated eight formal requirements/rules. (List all eight requirements.) In Fuller's view, Nazi law did not meet these requirements: among other things, it was not general, because it distinguished between people.
According to Fuller, Nazi law is therefore not valid law, so the woman had no legal duty to report her husband.
Explain in which cases, according to Radbruch, unjust laws must be set aside, and in which cases they cannot even be regarded as law at all.
According to Radbruch, law rests on three principles: legal certainty, justice, and expediency.
According to Radbruch, legal certainty takes precedence over justice, unless the situation is so unjust that legal certainty must yield to justice. In his view, this is the case when human rights are violated. Such law then falls under the term "flawed law". This is also known as the intolerability criterion.
In some cases, according to Radbruch, a law cannot be regarded as law at all. This is the case when a law is so unjust that no attempt at justice was even made.
In your answer, state how both the weak and the strong overlap variant assess this law.
The idea that the concepts of law and morality overlap in some way is called the overlap thesis.
The weak v
Explain in your own words how Radbruch would assess the validity of this law.
According to Radbruch, law rests on three principles: legal certainty, justice, and expediency. Legal certainty takes precedence over justice, unless the situation is so unjust that legal certainty must yield to justice. In his view, this is the case when human rights are violated. Such law then falls under the term "flawed law" and must be set aside. This law violates human rights, so it is flawed law and, according to Radbruch, must be set aside. The intolerability criterion therefore applies here.
On #2026/01/22: conference by MinBZK, at BZK. Afternoon. Registered.
Moreover, an adequacy decision now applies to the US (Data Privacy Framework), so that transfers to the US, insofar as the DPF is complied with, also meet the GDPR requirements for transfers.
The DPF entered into force on 10 July 2023. Before that there was no adequacy, after Schrems II put an end to it in July 2020.
The minister assumes a paper reality instead of the actual one. - Even for aggregated statistics, the details are still recorded and passed on to Google; they are simply not shown to the Analytics user. - You can also just switch Google Analytics off, instead of saying that a ban is impossible. - Even if the group of interested people is not the same as the group of applicants, it is still a smaller, and therefore more traceable, group than 'random'.
Analysis of Political Engagement: Concepts, Paradoxes, and Context
This synthesis analyzes in depth the many facets of political engagement, drawing on perspectives from sociology and political science.
The analysis reveals four major axes.
First, a fundamental conceptual distinction is drawn between political participation, which includes low-cost acts such as voting, and engagement, which refers to more intense, public, and risky forms of action.
Engagement runs along a continuum from the mere sympathizer to the full-time activist, with varied profiles such as "conscience activists" and the "direct beneficiaries" of the struggle.
Second, the document explores the paradox of collective action as formulated by Mancur Olson.
This paradox explains why rational individuals may abstain from joining a collective action even when they share its goals, because of the temptation to act as a "free rider".
The solutions to this paradox lie in selective incentives and, more sociologically, in the symbolic rewards of engagement (recognition, the pleasures of activism, fidelity to one's values) theorized by Daniel Gaxie.
Third, the analysis addresses the importance of context through the notion of the Political Opportunity Structure (POS).
This macro-analytic concept holds that the success and the forms of a social movement (peaceful or disruptive) depend on how open or closed the political system is.
Although useful for understanding historical dynamics such as the civil rights movement in the United States, the concept has drawn serious criticism for its static character and its simplified view of the interactions between the state and social movements.
Finally, the document underscores the crucial role of socio-demographic variables and individual socialization.
Engagement is strongly correlated with cultural capital and with "biographical availability".
The analysis highlights the importance of emotions, notably the "moral shock", specifying that the capacity to feel indignation at a situation is itself socially constructed.
The case study of the 1964 "Freedom Summer" shows strikingly that intense engagement has deep and lasting biographical consequences for activists' life trajectories.
--------------------------------------------------------------------------------
A first puzzle raised by the analysis concerns the very definition of political engagement.
The term, as it is sometimes used, tends to lump together all forms of political activity, including the least demanding.
Research in political sociology, however, draws a crucial distinction between participation and engagement.
Participation is the broader category, encompassing all forms of contribution to public affairs.
Voting, registering to vote, or answering a poll are minimal, low-cost forms of participation.
They are often individual, secret (like voting in the booth), and commit the individual only to a very limited degree.
Engagement, by contrast, refers to more intense, demanding forms of participation that are costly in time, energy, and sometimes resources.
It is characterized by two key dimensions:
• Public exposure: to engage is to expose oneself publicly, whether by demonstrating, signing a petition under one's own name, or speaking out for a cause.
• Risk-taking: this public exposure can bring reprisals, controversy, professional sanctions, or even physical risks (police violence, for example).
The figure of the engaged intellectual, such as the signatories of the Manifesto of the 121 against the Algerian War, illustrates this risk-taking.
Engagement is thus a stance in which the individual accepts a potentially high personal cost in exchange for defending a collective cause.
Engagement can be seen as a continuum with different degrees of involvement.
• The sympathizer: supports a cause or an organization from the outside, without formal membership.
Their participation is often occasional, such as joining a demonstration to show support.
• The member: formalizes their support by taking out a membership card in a party, union, or association.
This act often involves a financial contribution (dues) and marks a stronger identification. A member may say "we" when speaking of the organization, but their active involvement may remain limited.
• The activist: is genuinely a stakeholder in the organization's activities.
They devote time and energy on a regular basis, actively defend the group's positions, take part in its actions, and identify strongly with its colors.
Within activism itself, McCarthy and Zald distinguish several statuses within "social movement organizations":
• Volunteers: unpaid activists who participate in their free time. They form the base of many organizations.
• Paid staff: activists employed by the organization to keep it running day to day. Their status can sometimes create tensions with the volunteers.
• Officers (spokespersons): people who embody and represent the organization publicly (president, general secretary). They negotiate with the authorities and speak in the media. Their selection and their legitimacy are crucial issues within collectives.
Another important distinction, proposed by McCarthy and Zald, is between:
• Beneficiaries: the people directly affected by the struggle, who will draw a personal and immediate benefit from it if it succeeds (e.g., undocumented migrants fighting for regularization).
• Conscience activists: people who support the cause out of conviction, without expecting any direct benefit for their own situation (e.g., French citizens supporting undocumented migrants).
This distinction is essential because the logics of engagement and the objectives can differ between these two groups, sometimes creating tensions within a single movement.
The thesis of a decline in engagement, often associated with falling membership in political parties, needs qualification.
A more fruitful hypothesis is that the dominant political parties no longer need activists the way they once did.
Transformed into "electoral machines" staffed by political professionals, they can outsource formerly activist tasks (putting up posters, communications) to specialized firms.
Moreover, mechanisms such as open primaries have reduced the role of activists in the selection of candidates.
This phenomenon does not spell the end of the desire to engage, but rather a shift of engagement toward other spaces, such as the nonprofit sector or social movements, which activists disappointed with party life perceive as more concrete and disinterested.
One of the major theoretical challenges in understanding engagement is to explain why collective actions emerge at all, when individual rationality could stand in their way.
In The Logic of Collective Action (1965), the economist Mancur Olson broke with earlier theories that postulated the irrationality of crowds (Gustave Le Bon) or explained revolt through psychological factors such as "relative deprivation" (Ted Gurr). Olson starts from the postulate of a rational, calculating actor.
The paradox he highlights is as follows:
1. A collective action aims to obtain a collective good, that is, a benefit that will accrue to all members of a group whether or not they took part in the action (e.g., a pay raise for all employees of a firm).
2. Taking part in the action carries an individual cost (e.g., lost wages during a strike, time spent, risks incurred).
3. The rational actor will therefore be tempted to adopt the "free rider" strategy: not paying the cost of the action while hoping to benefit from its gains if others mobilize.
If everyone follows this calculation, the collective action never takes place, even though it would benefit everyone (a toy payoff sketch follows below).
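A toy payoff calculation makes the free-rider logic concrete (a minimal sketch with illustrative numbers, not taken from Olson):

```python
# Free-rider arithmetic: because the collective good is non-excludable,
# an individual's payoff is always higher when abstaining, whatever
# the others do, so participation is never individually rational.

GAIN = 100  # value of the collective good to each member (e.g., a raise)
COST = 30   # individual cost of participating (e.g., lost strike pay)

def payoff(participates: bool, action_succeeds: bool) -> int:
    """Individual payoff: everyone gets the gain, only participants pay."""
    return (GAIN if action_succeeds else 0) - (COST if participates else 0)

for succeeds in (True, False):
    print(f"action succeeds={succeeds}: "
          f"join={payoff(True, succeeds)}, "
          f"free-ride={payoff(False, succeeds)}")
```

Whether the action succeeds or fails, free-riding pays 30 more than joining; if everyone reasons this way, the action never happens.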
For Olson, the solution to the paradox lies in selective incentives: benefits (or costs) that apply only to those who participate (or do not participate) in the action.
• Negative selective incentives (costs): making non-participation more costly than participation. Examples: social pressure, the stigmatization of "scabs" during a strike, or even the physical threats of a picket line.
• Positive selective incentives (benefits): offering individual advantages reserved for participants.
Olson even mentions "erotic selective incentives" (the pleasure of meeting people and forming relationships).
The political scientist Daniel Gaxie gave this approach a sociological turn by developing the concept of the rewards of engagement.
These gratifications, which motivate and sustain activism, can be of several kinds:
• Material: obtaining social housing or a job through the organization's network.
• Symbolic: acquiring responsibilities, renown, recognition.
Appearing in the media or being the spokesperson for a struggle is a powerful symbolic gratification.
• Identity-based and moral: the pleasure of acting in accordance with one's values, of being able "to look at oneself in the mirror".
• Affective and social: the pleasure of activist sociability, of sharing intense moments with comrades, of feeling part of a collective.
These rewards explain why "conscience activists" are not wholly disinterested: they find an interest (in the sociological sense) in their engagement.
This analysis, coupled with Albert Hirschman's critiques (he notes that the cost and the benefit of action can merge, as with the pride drawn from a hard-fought struggle), makes it possible to move beyond Olson's purely utilitarian view.
While Olson's model focuses on the individual (micro level), the Political Opportunity Structure (POS) approach operates at a macro-structural level to analyze the influence of the political context on social movements.
The POS refers to the set of features of the political context that facilitate or hinder the emergence and the success of a social movement.
Doug McAdam's work on the civil rights movement in the United States is the founding example.
McAdam shows that Black organizations already existed in the 1930s but were making little headway.
Their success in the 1950s and 1960s is explained by an opening of the POS, due to several factors:
• Economic: the cotton crisis in the South and the migration of Black workers to the industries of the North.
• Social: a "cognitive liberation" in which Black Americans, encountering a less institutionalized racism in the North, realize that segregation is not inevitable.
• Electoral: the Black population becomes an electoral stake for the Democratic Party in the North.
• Geopolitical: at the height of the Cold War, racial segregation undermines the image of the United States vis-à-vis the USSR.
This opening made the political system more receptive to the movement's demands, allowing it to win successes through largely peaceful actions.
When the POS closed again in the 1970s (Nixon's arrival, FBI repression), the forms of protest radicalized (Black Power).
The central idea is that the shape of the POS directly influences movements' strategies:
• Open POS (receptive system, consultation procedures, etc.): favors peaceful actions, negotiation, and lobbying.
• Closed POS (blocked, centralized, unreceptive system): forces movements to use more disruptive repertoires of action in order to be heard.
The comparison between France and Switzerland on the GMO issue is telling.
In Switzerland, which has mechanisms of direct democracy (popular votes), anti-GMO campaigners were able to obtain moratoria through institutional channels.
In France, a more centralized and closed system, they had to resort to illegal actions (the "voluntary reapers") to politicize the issue.
Despite its usefulness, the POS concept has drawn many criticisms:
• Ambiguity: the notion is often a catch-all in which any contextual factor can be found after the fact to explain an outcome.
• Staticness: the approach tends to freeze political systems into static typologies (open/closed), neglecting dynamics and fluctuations.
• Conceptual oxymoron: James Jasper points out the contradiction between "structure" (stable, durable) and "opportunity" (fleeting, subjectively perceived).
• Simplistic view: the model posits a watertight separation between "insiders" (the political system) and "outsiders" (the movements), whereas the boundaries are porous (activists can sit within the state).
• One-way determinism: it suggests that the political system determines the movements, whereas social movements can themselves transform and constrain the political system.
Because of these limits, the POS concept is now less used in research, which favors more dynamic approaches to these interactions.
Beyond the theoretical models, engagement depends heavily on socio-demographic variables and socialization processes that predispose individuals, or not, to engage.
Research consistently confirms that political engagement is socially situated.
• Cultural and educational capital: interest in politics and perceived political competence are strongly correlated with level of education.
The most educated individuals are often those who vote the most, but also those who demonstrate and sign petitions the most.
• Biographical availability: intense engagement is more common among the young (fewer family and professional constraints) and among "young retirees" (more free time).
People in mid-career with family responsibilities are often less available for time-consuming activism.
Against the image of a purely rational actor, research brings the emotional dimension of engagement back in.
The moral shock, theorized by James Jasper, refers to the indignation or outrage felt at a situation, which pushes people to act.
It is crucial, however, to explain this moral shock sociologically: not everyone is shocked by the same situations.
The capacity to feel this indignation depends on the individual's socialization, values, and past experiences.
• An individual socialized in a pro-bullfighting environment will not feel the same moral shock at a kill as an animal rights activist.
• Activists in the Réseau Éducation Sans Frontières (RESF) are often people who themselves benefited from upward mobility through schooling; their attachment to that institution makes them especially prone to indignation at the deportation of schoolchildren.
Emotions are therefore not irrational, but socially shaped.
Doug McAdam's study of Freedom Summer (1964) offers an exceptional look at the effects of engagement on individuals' lives.
During that summer, young white activists went to Mississippi to help Black residents register to vote, an engagement with very high risks.
Thanks to unique archives, McAdam was able to compare, 20 years later, the group of those who participated with a control group of those who had been accepted but ultimately did not go.
The results are striking: on average, the Freedom Summer participants had:
• More chaotic professional careers and lower incomes.
• Less stable family lives (more divorces, fewer children).
• A much higher and more durable level of activist engagement.
This study shows that intense engagement is not a mere parenthesis in a life, but a founding event with deep biographical consequences, durably shaping professional, family, and activist trajectories.
It is also from this experience that many future leaders of the American feminist movement emerged; they acquired a taste for collective action there while also discovering the sexist division of activist labor.
GoVolta has an option on additional NMBS carriages and is exploring an extension to Paris in 2027. The planned Dutch stops are Amsterdam, Haarlem, Den Haag, Rotterdam, Lage Zwaluwe, and Roosendaal. Lage Zwaluwe was deliberately chosen for its free parking. In Belgium, GoVolta hopes to be able to run via Gent. The strategic partnership with the French company Keolis, a subsidiary of SNCF, should ease access to the French market.
Researching an Ams-Paris route for 2027. Through Gent in Belgium, not Brussels. Keolis is French, which should help in getting space on the French rail network.
Keolis provides the traction in both the Netherlands and Germany. GoVolta focuses on sales and package travel; Brouwer on maintenance. GoVolta originally wanted to become a rail operator itself, but that role is now filled by Keolis.
GoVolta is no longer a rail operator itself; Keolis is the transporter.
GoVolta will open new train services Ams-B and Ams-HH in March 2026: a 6.5 hr connection to B, a 5.5 hr one to HH.
Stops in Amersfoort.
A temperature of 102.5° F is a disruption of that homeostasis, and the body will work to restore the temperature back to the normal temperature of 98.6° F.
I feel that this statement may cause confusion when discussing fevers. A temperature of 102.5°F from exercise or exposure to a warm environment would be a disruption of homeostasis. A fever of 102.5°F due to illness does not constitute a disruption of homeostasis, but rather a resetting of the set point. In this case, the body will not try to restore a temperature of 98.6°F, but will maintain the higher temperature. More info on this discussion here:
https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/209609
I would maybe add the phrase "due to external sources" or some other qualifier so that students don't later become confused about the relationship between homeostasis and fever.
straight forward
straightforward
It
In (and wouldn't a domain expert be needed in the development of any search strategy?)
Of the suggested Ollama models for code generation, 4 out of 5 are Chinese; the other one is from OpenAI.
CodeGPT and this blog are run by Judini Inc., a Miami-based US corporation.
Qwen3-Coder Alibaba's performant long context models for agentic and coding tasks
Another Qwen model, without the focus on visual inputs. Alibaba. Listed in Ollama.
Qwen3-VL
Qwen models are by Alibaba. The VL versions take visual inputs for generating code. Listed in Ollama.
DeepSeek-R1
DeepSeek is a Chinese model. Listed in Ollama.
GLM-4.6 Advanced agentic, reasoning and coding capabilities
GLM is a Chinese model. I don't see it listed in Ollama though.
GPT-OSS OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
GPT-OSS is by OpenAI. It seems to be available locally in Ollama in various versions.
.
This should be removed, right?
The latest version of Python
I'm not sure this is the right place for this comment, but I'd also like to see an introduction to the uv python command, something like "Managing Python environments with uv".
Create a project for building a command-line tool
--app does not appear to be the option for building command-line tools.
This project kind is for web servers, scripts, and command-line interfaces. https://docs.astral.sh/uv/reference/cli/#uv-init--app
。
I'd like an explanation along the lines of: the virtual environments created with uv are basically the same as the ones created with venv.
uv-example-script
Shouldn't this have the .py extension?
as
at
tel
tell
.
cut
can
it can
a
as a
The AI Toolkit adds LLM-based options to VS Code. Installed it, as it allows me to access different models locally via Ollama.
eLife Assessment
This manuscript uses adaptive-bandit simulations to describe the dynamics of the Pseudomonas-derived cephalosporinase PDC-3 β-lactamase and its mutants to better understand antibiotic resistance. The finding that clinically observed mutations alter the flexibility of the Ω- and R2-loops, reshaping the cavity of the active site, is valuable to the field. The evidence is considered incomplete, however, with further analysis needed to demonstrate equilibrium weighting of the adaptive trajectories and related measures of statistical significance.
Reviewer #2 (Public review):
Summary:
In the manuscript entitled "Ω-Loop mutations control dynamics of the active site by modulating the hydrogen-bonding network in PDC-3 β-lactamase", Chen and coworkers provide a computational investigation of the dynamics of the enzyme Pseudomonas-derived cephalosporinase 3 (PDC-3) and some mutants associated with increased antibiotic resistance. After an initial analysis of the enzyme dynamics provided by RMSD/RMSF, the authors conclude that the mutations alter the local dynamics within the omega loop and the R2 loop. The authors show that the network of hydrogen bonds is disrupted in the mutants. Constant-pH calculations showed that the mutations also change the pKa of the catalytic lysine 67, and pocket volume calculations showed that the mutations expand the catalytic pocket. Finally, time-lagged independent component analysis (tICA) showed different profiles for the mutant enzyme as compared to the wild type.
Strengths:
The scope of the manuscript is definitely relevant. Antibiotic resistance is an important problem and, in particular, Pseudomonas aeruginosa resistance is associated with an increasing number of deaths. The choice of the computational methods is also something to highlight here. Although I am not familiar with Adaptive Bandit Molecular Dynamics (ABMD), the description provided in the manuscript suggests that this simulation strategy is well suited for the problem under evaluation.
Weaknesses:
In the revised version, the authors addressed my concerns regarding their use of the MSM, and in my view, their conclusions are now much more robust and well-supported by the data. While it would be very interesting to see a quantitative correlation between the effects of the mutations observed in the MD data and relevant experimental findings, I understand that this may be beyond the scope of the manuscript.
Reviewer #3 (Public review):
Summary:
This manuscript aims to explore how mutations in the PDC-3 β-lactamase alter its ability to bind and catalyse reactions of antibiotic compounds. The topic is interesting, and the study uses MD simulations to provide hypotheses about how the size of the binding site is altered by mutations that change the conformation and flexibility of two loops that line the binding pocket. Greater consideration of the uncertainties, and of how the method choice affects the ability to compare equilibrium properties, would strengthen the quantitative conclusions. While many results appear significant by eye, quantifying this and ensuring convergence would strengthen the conclusions.
Strengths:
The significance of the problem is clearly described, and the relationship to prior literature is discussed extensively.
Comments on revised version:
I am concerned that the authors state in the response to reviews that it is not possible to get error bars on values due to the use of the AB-MD protocol that guides the simulations to unexplored basins. Yet the authors want to compare these values between the WT and mutants. This relates to the RMSD, RMSF, % H-bond, and volume calculations. I don't accept that you cannot calculate an uncertainty on a time-averaged property calculated across the entire simulation. In these cases you can either run repeat simulations to get multiple values on which to do statistical analysis, or you can break the simulation into blocks and both check convergence and calculate uncertainties.
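For concreteness, the block-averaging approach suggested here could look like the following minimal sketch (illustrative only; it assumes a per-frame series of some property, such as pocket volume, in a NumPy array):

```python
import numpy as np

def block_average(series: np.ndarray, n_blocks: int = 5):
    """Mean and standard error of a time series, estimated from the
    means of contiguous blocks (a standard correction for the fact
    that successive MD frames are correlated)."""
    usable = len(series) - len(series) % n_blocks  # drop remainder frames
    block_means = series[:usable].reshape(n_blocks, -1).mean(axis=1)
    sem = block_means.std(ddof=1) / np.sqrt(n_blocks)
    return block_means.mean(), sem

rng = np.random.default_rng(0)
volumes = rng.normal(1100.0, 150.0, size=10_000)  # mock per-frame volumes
mean, err = block_average(volumes)
print(f"volume = {mean:.0f} +/- {err:.0f} A^3")
```

Increasing the block size until the error estimate plateaus provides the convergence check the reviewer asks for.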
I note that the authors do provide error bars on the volumes, but the statistics given for these need closer scrutiny (I can't test this without the raw data). For example, the authors report p < 0.0001 for the following pair of volumes, 1072 ± 158 and 1115 ± 242, and for SASA p < 0.0001 is given for two identical numbers, 155 ± 3.
I also remain concerned about comparisons between simulations run with the AB-MD scheme. While each simulation is an equilibrium simulation run without biasing forces, new simulations are seeded to expand the conformational sampling of the system. This means that by definition the ensemble of simulations does not represent an equilibrium ensemble. For example, the frequency at which conformations are sampled would not be the same as in a single, much longer equilibrium simulation. While you may be able to see trends in the differences between conditions run in this way, I still don't understand how you can compare quantitative information without some method of reweighting the ensemble. It is not clear that such a reweighting exists for this method, in which case I advise more caution in the wording of the comparisons made from this data.
At this stage I don't feel the revision has directly addressed the main comments I raised in the earlier review, although there is a stronger response to the comments of Reviewer #2.
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public review):
Summary:
This manuscript uses adaptive sampling simulations to understand the impact of mutations on the specificity of the enzyme PDC-3 β-lactamase. The authors argue that mutations in the Ω-loop can expand the active site to accommodate larger substrates.
Strengths:
The authors simulate an array of variants and perform numerous analyses to support their conclusions. The use of constant pH simulations to connect structural differences with likely functional outcomes is a strength.
Weaknesses:
I would like to have seen more error bars on quantities reported (e.g., % populations reported in the text and Table 1).
We appreciate this point. Here, the populations we analyze are intended to showcase conformational differences across variants rather than to estimate equilibrium occupancies. Although each system includes 100 trajectories, they were generated using an adaptive-bandit protocol. The protocol deliberately guides sampling towards underexplored basins; therefore, conformational heterogeneity between trajectories is expected by design. For example, in E219K the MSM decomposition shows that in states 1, 6, and 7 the K67(NZ)–S64(OG) distance is almost entirely > 6 Å, whereas in states 2 and 3 it is almost entirely < 3.5 Å (Figure 5—figure supplement 12). These distances suggest that the hydrogen bond fraction is approximately zero in states 1, 6, and 7, and close to one in states 2 and 3. In addition, the mean first passage times of the Markov state models suggest that the formation and disruption of this hydrogen bond occur on the microsecond timescale, which is far longer than the length of each individual trajectory (300 ns). Consequently, across the 100 replicas, some trajectories exhibit very low fractions, while others display the opposite trend. Under such bimodal, protocol-induced heterogeneity, computing an error bar across trajectories mainly visualizes the protocol's dispersion and risks being misread as thermodynamic uncertainty, which is not central to our aim of comparing conformational differences between wild-type PDC-3 and variants. We therefore do not include the error bars.
Reviewer #2 (Public review):
Summary:
In the manuscript entitled "Ω-Loop mutations control dynamics of the active site by modulating the hydrogen-bonding network in PDC-3 β-lactamase", Chen and coworkers provide a computational investigation of the dynamics of the enzyme Pseudomonas-derived cephalosporinase 3 (PDC-3) and some mutants associated with increased antibiotic resistance. After an initial analysis of the enzyme dynamics provided by RMSD/RMSF, the authors conclude that the mutations alter the local dynamics within the Ω-loop and the R2 loop. The authors show that the network of hydrogen bonds is disrupted in the mutants. Constant-pH calculations showed that the mutations also change the pKa of the catalytic lysine 67, and pocket-volume calculations showed that the mutations expand the catalytic pocket. Finally, time-lagged independent component analysis (tICA) showed different profiles for the mutant enzymes as compared to the wild type.
Strengths:
The scope of the manuscript is definitely relevant. Antibiotic resistance is an important problem, and, in particular, Pseudomonas aeruginosa resistance is associated with an increasing number of deaths. The choice of the computational methods is also something to highlight here. Although I am not familiar with Adaptive Bandit Molecular Dynamics (ABMD), the description provided in the manuscript suggests that this simulation strategy is well-suited for the problem under evaluation.
Weaknesses:
In the description of many of their results, the authors do not provide enough information for a deep understanding of the biochemistry/biophysics involved. Without these issues addressed, the strength of the evidence is of concern.
We thank the reviewer for pointing out the need for deeper discussion of the biochemical and biophysical implications of our results. In our manuscript, we begin by examining basic structural metrics (e.g., RMSD and RMSF), which clearly indicate that the major conformational changes occur in the Ω-loop and the R2 loop. We have now added a paragraph describing the importance of the Ω-loop and highlighted it in the revised manuscript on lines 142-166 of page 6. This observation guided our subsequent focus on these regions, as well as on the catalytic site. Our analysis revealed notable alterations in the hydrogen-bonding network—especially in interactions involving the K67-S64, K67-N152, K67-G220, Y150-A292, and N287-N314 pairs. These observations led us to conclude that:
(1) Mutations E219K and Y221A facilitate the proton transfer of catalytic residues. This is consistent with prior experimental data showing that these substitutions produce the most pronounced increase in sensitivity to cephalosporin antibiotics (lines 210-212 on page 8 of the revised manuscript).
(2) Substitutions enlarge the active-site pocket to accommodate bulkier R1 and R2 groups of β-lactams. This is in line with MIC measurements reported by Barnes et al. (2018), which showed that mutants with larger active-site pockets exhibit markedly greater sensitivity to cephalosporins with bulky side chains than others (lines 249-259 on page 10).
Furthermore, we applied Markov state models (MSMs) to explore the timescales of the transitions between these different conformational states. We believe that these methodological steps support our conclusions.
Reviewer #3 (Public review):
Summary:
This manuscript aims to explore how mutations in the PDC-3 β-lactamase alter its ability to bind and catalyse reactions of antibiotic compounds. The topic is interesting, and the study uses MD simulations to provide hypotheses about how the size of the binding site is altered by mutations that change the conformation and flexibility of two loops that line the binding pocket. However, the study doesn't clearly describe the way the data is generated. While many results appear significant by eye, quantifying this and ensuring convergence would strengthen the conclusions.
Strengths:
The significance of the problem is clearly described, and the relationship to prior literature is discussed extensively.
Weaknesses:
The methods used to gain the results are not explained clearly, meaning it was hard to determine exactly how some data was obtained. The convergence and uncertainties in the data were not adequately quantified. The text is also a little long, which obscures the main findings.
We thank the reviewer for the suggestion. We respectfully ask the reviewer to specify which aspects of the data-generation methods are unclear so that we can include the necessary details in the next revision. Moreover, all statistics reported in the manuscript are obtained from extensive analyses of 300,000 simulation frames. The Markov state models have been validated by the ITS plots and the Chapman-Kolmogorov (CK) test. Two-sample t-tests were also carried out for the volume and SASA.
Reviewer #2 (Recommendations for the authors):
(1) Figure 1D focuses on the PDC-3 catalytic site. However, the authors mentioned before that the enzyme has two domains, an alpha domain and an alpha/beta domain. The reader would benefit from a more detailed description of the enzyme, its active site, AND the location of the mutants under investigation in the figure.
We have updated Figure 1D and marked the positions of all mutations (V211A/G, G214A/R, E219A/G/K and Y221A/H), which have now been highlighted as spheres.
(2) Since in the journal format the results come before the methods, it would be helpful to add a brief description of where the results came from. For example, in the first section of the results, the authors describe the flexibility of the omega loop and the R2 loop, but the reader won't know what kind of simulation was used or for how long. A sentence would add the required context for a deeper understanding here.
At the beginning of the Results and Discussion section we now state: “To investigate how the mutations in the Ω-loop affect PDC-3 dynamics, adaptive-bandit molecular dynamics (AB-MD) simulations were carried out for each system. 100 trajectories of 300 ns each (totaling 30 μs per system) were run.”
(3) Still in the same section, the authors don't define what change in RMSF is considered significant. For example, I can't see a relevant change in the RMSF for the omega loop between the wt enzyme and the E219 mutants in Figure 2D. A more objective definition would be of benefit here.
Our analysis reveals that while the wild-type PDC-3 and the G214A, G214R, E219G, and Y221A variants exhibit an average per-residue RMSF of around 4 Å in the Ω-loop, the V211A and V211G variants show markedly lower values (around 1.5 Å), and the E219K and Y221H variants exhibit intermediate values between 2 and 2.5 Å. In addition, the fluctuations around the binding site should be viewed collectively along with the fluctuations in the R2-loop. Importantly, we urge the reviewer to focus on the MDLovoFit analysis in Figure 2C, where the dynamic differences between the core and the fluctuating loops are clearly evident.
(4) In line 138, the authors state that "Therefore, the flexibility of these proteins is mainly caused by the fluctuations in the Ω-loops and R2-loop". This is quite a bold statement to draw at this point. First of all, there is no mention of it in the manuscript, but is there any domain movement? Figure 2C clearly shows that there is some mobility in the omega and R2 loops, but there is no evidence shown in the manuscript that "the flexibility of these proteins is mainly caused by the fluctuations in the" loops. Please consider rephrasing this sentence or adding more data, if available.
We have revised the wording to take the reviewer's concern into account. The sentence now states: "Therefore, flexibility of PDC-3 is predominantly localized to the Ω- and R2-loops, whereas the remainder of the structure is comparatively rigid." To further explain: β-lactamase enzymes are fairly rigid structures in which no large-scale domain motions occur. Instead, the enzyme communicates structurally via cross-correlation of loop dynamics (https://doi.org/10.7554/eLife.66567).
(5) I guess the most relevant question for the scope of the paper is not answered in this section. The authors show that the mobility of the omega and R2 loops is altered by some mutations. Why is that? I wish I could see a figure showing where the mutations are and where the loops are. This question will come back in other sections.
We have updated Figure 1D to mark the positions of all mutations (V211A/G, G214A/R, E219A/G/K and Y221A/H) as spheres. The Ω- and R2-loops are also highlighted. All mutations map to the Ω-loop, indicating that these substitutions directly perturb this region. Notably, K67 forms a hydrogen bond with the backbone of G220 within the Ω-loop and another with the phenolic hydroxyl of Y150. Y150, in turn, hydrogen-bonds with A292 in the R2 loop. Together, the residue interaction network (G220–K67–Y150–A292) suggests a pathway by which Ω-loop mutations propagate their effects to the R2 loop.
(6) The authors then analyze the network of polar residues in the active site and the hydrogen bonds observed there. For the K67-N152 hydrogen bond, for example, there is a reduction in the occupancy from ~70% in the wild-type enzyme to ~30% and ~40% in the mutants E219K and Y221A, respectively. This finding is interesting. The question that remains is "why is that?" From the structural point of view, how does the replacement of E219 with a lysine alter the hydrogen bond formation between K67 and N152? Is it due to direct competition? Solvent rearrangement? The reader is left without a clue in this section. Also, Figure 3B won't help the reader, since the mutated residues are not shown there. Please consider adding some information about why the authors believe that the mutations are disrupting the active-site hydrogen bond network, and showing it in Figure 3B.
We appreciate the comment and have updated Figures 1D and 3B to highlight the mutation sites. The change from ~70% in the wild type to ~30–40% in the E219K and Y221A variants reported in Table 1 refers to the S64–K67 hydrogen bond. In the wild type, K67 forms an additional hydrogen bond with G220 on the Ω-loop, which helps anchor the K67 side chain in a geometry that favors the S64–K67 interaction. In the variants, the mutations reshape the Ω-loop and frequently disrupt the K67–G220 contact. The loss of this local anchor increases the conformational dispersion of K67, which is consistent with the observed reduction of the S64–K67 occupancy. Furthermore, our observation that the mutations disrupt the active-site hydrogen-bond network is a data-driven conclusion rather than a subjective inference. Across ten systems, our AB-MD simulations provided 30 µs of sampling per system. Saving one frame every nanosecond yielded 30,000 conformations per system and 300,000 in total. All hydrogen-bond and salt-bridge statistics were computed over this full ensemble. Thus, the conclusion that the mutations disrupt the active-site hydrogen-bond network follows directly from these ensemble statistics.
(7) The pKa calculations and the pocket volume calculations show that the mutations expand the volume of the catalytic site and alter the microenvironment. Is there any change in the solvation associated with these changes? If the volume expands and the environment becomes more acidic, are there more water molecules in the mutants as compared to the wt enzyme? If so, can changes in solvation be associated with the changes in the hydrogen bond network? Would a simulation in the presence of a substrate be meaningful here? (I guess it would!)
Regarding solvation, we observe a modest increase in transient water occupancy associated with the increase in volume of the pocket. The conserved deacylation water molecule is the most important and is always present throughout the simulation. Additional waters enter and leave the pocket but do not form persistent interactions that measurably perturb the hydrogen-bond network of the Ω- and R2-loops. We agree that simulations with a bound substrate would be informative. However, our study focuses on how Ω-loop mutations modulate the active site of apo PDC-3 and its variants. Within this scope, we find: (i) Amino acid substitutions change the flexibility of Ω-loops and R2-loops; (ii) E219K and Y221A mutations facilitate the proton transfer; (iii) Substitutions enlarge the active-site pocket to accommodate bulkier R1 and R2 groups of β-lactams.
(8) I have some concerns regarding the Markov state modeling as shown here. After a time-lagged independent component analysis (tICA), the authors show the projections on the components, which differ between the wild-type enzyme and the mutants, and draw some conclusions from these changes. For example, the authors state that "From the metastable state results, we observe that E219K adopts a highly stable conformation in which all the tridentate hydrogen-bonding interactions (K67(NZ)-S64(OG), K67(NZ)-N152(OD1) and K67(NZ)-G220(O)) mentioned above are broken". This conclusion is very difficult to draw from Figure 5 alone. Unless the macrostates observed in the MSM can be shown (their structures) and could confirm the broken interactions, I really don't believe that the reader can come to the same conclusion as drawn by the authors here. I would recommend that the authors map the macrostates back to the coordinates and show them (what structure corresponds to what macrostate). After showing that, it makes sense to discuss what macrostate is being favored by what mutation. Drawing conclusions from tICA projections only is not recommended. I very strongly suggest that the authors revisit this entire section, adding more context so that the reader can draw conclusions from the data that is shown.
We appreciate the reviewer’s concern. In the Markov state modeling section, our objective is to quantify the timescales (via mean first passage times) associated with the formation and disruption of the critical hydrogen bonds (K67(NZ)-S64(OG), K67(NZ)-N152(OD1), K67(NZ)-G220(O), Y150(N)-A292(O), N287(ND2)-N314(OD1)) mentioned above. Representative structures illustrating these interactions are shown in Figures 3B and 4A. We agree that the main Figure 5 alone does not convey structural information. Accordingly, we provide Figure 5—figure supplements 12–16. Together, Figure 5B and Figure 5—figure supplements 12–16 map structures to metastable states, whereas Figures 3B and 4A supply atomistic detail of the interactions. Author response image 1 presents selected subplots from Figure 5—figure supplements 12–14. Together with the free-energy landscape in Figure 5A, these data indicate that E219K adopts a highly stable conformation in which all three K67-centered hydrogen bonds (K67(NZ)–S64(OG), K67(NZ)–N152(OD1), and K67(NZ)–G220(O)) are broken.
Author response image 1.
The TICA plot illustrates the distribution of E219K, with the colour indicating the K67(NZ)-S64(OG), K67(NZ)-N152(OD1), and K67(NZ)-G220(O) distances.
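To make the mean-first-passage-time analysis in the response above concrete, here is a minimal sketch using the deeptime library; the discrete trajectories, lag time, and state groupings are assumptions, not the authors' actual settings:

import numpy as np
from deeptime.markov.msm import MaximumLikelihoodMSM

# Microstate index per frame for each replica, e.g. from TICA + k-means
# clustering; the file names and lag time below are placeholders.
dtrajs = [np.load(f"dtraj_{i:03d}.npy") for i in range(100)]
msm = MaximumLikelihoodMSM(lagtime=50).fit(dtrajs).fetch_model()

bond_formed = [2, 3]      # states with K67(NZ)-S64(OG) < 3.5 A
bond_broken = [1, 6, 7]   # states with K67(NZ)-S64(OG) > 6 A
mfpt = msm.mfpt(bond_formed, bond_broken)
# NB: whether mfpt is expressed in frames or in multiples of the lag time
# depends on the library convention; check before converting to physical time.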
(9) As a very minor issue, there are a few typos in the manuscript text. The authors might want to take some time to revisit their entire text. Examples in lines 70, 197, etc.
Thank you for your comment. We have corrected these typos.
Reviewer #3 (Recommendations for the authors):
This manuscript aims to explore how mutations in the PDC-3 β-lactamase alter its ability to bind and catalyse reactions of antibiotic compounds. The topic is interesting, and the study uses MD simulations to provide hypotheses about how the size of the binding site is altered by mutations that change the conformation and flexibility of two loops that line the binding pocket.
However, the study doesn't clearly describe the way the data is generated and potentially lacks statistical rigour, which makes it uncertain if the key results are significant. As such, it is difficult to judge if the conclusions made are supported by data.
All necessary data-acquisition methods are described in the Methods section. The Markov state models have been validated by the ITS plot and the Chapman-Kolmogorov (CK) test (Figure 5—figure supplement 2–11) . The two-sample t-tests were also carried out for the volume and SASA (Table 2).
The results section jumps straight to reporting RMSD and RMSF values; however, it is not clear what simulations are used to generate this information. Indeed, the main text does not mention the simulations themselves at all. The methods section mentions that 10 independent MD simulations were set up for each system, but no information is given as to how long these were run or the equilibration protocol used. Then it says that AB-MD simulations were run, but it is not clear what starting coordinates were used for this or how the 10 replicates were fed into these simulations. Most importantly, are the RMSD and RMSF calculations and later distance distribution information derived from the equilibrium MD runs or from the AB-MD simulations?
Thank you for pointing this out. We have added “To investigate how the mutations in the Ω-loop affect PDC-3 dynamics, adaptive-bandit molecular dynamics (AB-MD) simulations were carried out for each system. 100 trajectories of 300 ns each (totaling 30 μs per system) were run.” to the Results and Discussion section. We did not run 10 independent MD simulations per system; we regret the typo in the Methods section that confused the reviewer. The sentence should have read: “All-atom MD simulations of wild-type PDC-3 and its variants were performed.” Each system was equilibrated for 5 ns at 1 atm using the Berendsen barostat. AB-MD simulations were initiated from these equilibrated structures. All analyses, apart from CpHMD, are based on the AB-MD trajectories.
If these are taken from the equilibrium simulations, then it is critical that the reproducibility and statistical significance of the simulations be established. This can be done by calculating the RMSD and RMSF values independently for each replicate and determining the error bars. From this, the significance of differences between WT and mutant simulations can be determined. Without this, I have no data to judge whether the main conclusions are supported or not. If these are derived from the AB-MD simulations, then I want to know how the independent simulations were combined and reweighted to generate overall RMSD, RMSF, and distance distributions. Unless I misunderstand the approach, the individual simulations no longer sample all regions of conformational space in the same relative proportions you would see in a standard MD simulation; if specific conformational regions are intentionally run more to enhance sampling, then the overall conformational distributions cannot be obtained from these simulations without some form of reweighting scheme. But no such scheme is described. In addition, convergence of the data is required to ensure that the RMSD, RMSF, and distances have reached stable values. It is possible that I am misunderstanding the approach here, but in that case I hope the authors can clarify the method and provide a means of ensuring that the data presented is converged. Many of the differences are clear by eye, but it is important to know that they are not random differences between simulations but rather reflect differences between the conditions.
Thank you for raising this important point. In our AB-MD workflow, the adaptive bandit is used only for starting-structure selection (adaptive seeding). After each epoch, it chooses new starting snapshots from previously sampled conformations and launches the next runs. Each trajectory itself is standard, unbiased MD with no biasing potentials and no modification of the Hamiltonian. In other words, AB decides where we start but does not alter the physics or sampling dynamics within an individual trajectory. In addition, our goal in this work is to compare variants under the same adaptive-bandit (AB) protocol, rather than to estimate equilibrium (Boltzmann) populations. Hence, we did not apply equilibrium reweighting to RMSD, RMSF, or distance distributions. However, the MSM section provides reweighted reference results based on the MSM stationary distribution.
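As an illustration of the stationary-distribution reweighting the response refers to, here is a minimal sketch (variable and file names are assumptions). Each frame is weighted by its microstate's equilibrium probability divided by the number of frames observed in that state, so that state-level averages are recombined with Boltzmann weights:

import numpy as np

# Per-frame observable (e.g. a distance) and microstate assignment, plus the
# MSM stationary distribution pi -- all assumed to exist from the upstream MSM.
obs = np.concatenate([np.load(f"dist_{i:03d}.npy") for i in range(100)])
states = np.concatenate([np.load(f"dtraj_{i:03d}.npy") for i in range(100)])
pi = np.load("msm_stationary_distribution.npy")

counts = np.bincount(states, minlength=pi.size)
weights = pi[states] / counts[states]    # frames share their state's weight
print("raw mean:       ", obs.mean())              # protocol-biased average
print("reweighted mean:", np.sum(weights * obs))   # equilibrium estimate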
In the response to reviews, the authors state that the "RMSF is a statistical quantity derived from averaging the time series of atomic displacements, resulting in a fixed value without an inherent error bar." But normally we would run multiple replicates and get an error bar from the different values in each. To dismiss the request for uncertainties and error bars seems to miss the point. I strongly agree with the prior reviewer that comparisons between RMSF or other values should be accompanied by uncertainties and estimates of statistical significance.
Regarding the reviewers’ suggestion to present the data as a bar graph with error bars, we would like to note that RMSF is calculated as the time average of the fluctuations of each residue’s Cα atom over the entire simulation. As such, RMSF is a statistical quantity derived from averaging the time series of atomic displacements, resulting in a fixed value without an inherent error bar. We believe that our current presentation clearly and accurately reflects the local flexibility differences among the variants. Nearly all published studies report RMSF in this way, as indicated by the following examples:
Figure 3a in DOI: https://doi.org/10.1021/jacsau.2c00077
Figure 2 in DOI: https://doi.org/10.1021/acs.jcim.4c00089
Supplementary Fig. 1, 2, 5, 9, 12, 20, 22, 24, and 26 in DOI: https://doi.org/10.1038/s41467-022-29331-3
However, in response to the reviewers’ strong request, we present RMSF plots with error bars in our response letter.
Author response image 2.
The root-mean-square fluctuation (RMSF) profiles of wild-type PDC-3 and its variants. Blue lines show the mean RMSF across 100 independent MD trajectories for each system; red translucent bands denote the standard deviation across trajectories. The Ω-loop (residues G183 to S226) is highlighted in yellow, and the R2-loop (residues L280 to Q310) is highlighted in blue.
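For readers, a minimal sketch of how such an error band can be computed, with per-replica RMSF followed by the mean and standard deviation across replicas; the file names and selections are assumptions rather than the authors' scripts:

import mdtraj as md
import numpy as np

per_replica = []
for i in range(100):
    t = md.load(f"replica_{i:03d}.xtc", top="pdc3.pdb")   # placeholder names
    ca = t.topology.select("name CA")
    t.superpose(t, frame=0, atom_indices=ca)              # align on C-alpha atoms
    per_replica.append(md.rmsf(t, t, frame=0, atom_indices=ca) * 10.0)  # nm -> A

per_replica = np.asarray(per_replica)       # shape: (n_replicas, n_residues)
mean_rmsf = per_replica.mean(axis=0)        # blue line in the figure
sd_rmsf = per_replica.std(axis=0, ddof=1)   # red translucent band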
It was good to see that convergence of the constant-pH simulations was shown. While it can be challenging to get absolute pH values from the implicit-solvent-based simulations, the differences between the systems are large and the trends appear significant. I was not clear on how the starting coordinates were chosen for these simulations. Is it the end point of the classical simulations, or is a representative snapshot chosen somehow?
To ensure a consistent comparison, all systems used the X-ray crystal structure (PDB ID: 4HEF) with the T79A substitution as the initial structure. The E219K and Y221A mutants were generated in silico using the ICM mutagenesis module. We have added the following clarification to the Methods section: “The starting structures were identical to those used for AB-MD.”
Significant figures: Throughout the text and tables, the authors present data with more figures than are significant. 1071.81 ± 157.55 should be reported as 1100 ± 160 or 1070 ± 160. See the eLife guidelines for advice on this.
Thank you for your suggestion. We have amended these now.
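A small, hypothetical helper illustrating the rounding convention the reviewer requests (two significant figures on both the mean and the uncertainty):

from math import floor, log10

def round_sig(x: float, sig: int = 2) -> float:
    # Round x to `sig` significant figures.
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

print(round_sig(1071.81), round_sig(157.55))   # -> 1100.0 160.0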
The manuscript is very long for the results presented, and I feel that a clearer story would come across if the authors shortened the text so that the main conclusions and results were not lost.
We appreciate the suggestion. We examined the twenty most recent research articles published in eLife and found that they are either longer than or comparable in length to our manuscript.
This is a question that the music industry faced head-on, and they came up with EULAs
huh?
Claude Code, cli tool
Claude Code plugin for Visual Studio Code
At one point the Permanent Secretary himself took on the task of fixing the lifts, so infuriated had he become. He retired licking his wounds. ‘It’s impossible, impossible!’ It turned out that fixing an appointment is much easier than fixing a lift.
culture of impossible. myth of complexity
eLife Assessment
This study makes a valuable contribution by elucidating the genetic determinants of growth and fitness across multiple clinical strains of Mycobacterium intracellulare, an understudied non-tuberculous mycobacterium. Using transposon sequencing (Tn-seq), the authors identify a core set of 131 genes essential for bacterial adaptation to hypoxia, providing a convincing foundation for anti-mycobacterial drug discovery.
Reviewer #1 (Public review):
Summary:
In this descriptive study, Tateishi et al. report a Tn-seq-based analysis of genetic requirements for growth and fitness in 8 clinical strains of Mycobacterium intracellulare (Mi), and compare the findings with the type strain ATCC13950. The study finds a core set of 131 genes that are essential in all nine strains, and therefore are reasonably argued to be potential drug targets. Multiple other genes required for fitness in clinical isolates have been found to be important for hypoxic growth in the type strain.
Strengths:
The study has generated a large volume of Tn-seq datasets of multiple clinical strains of Mi from multiple growth conditions, including from mouse lungs. The dataset can serve as an important resource for future studies on Mi, which despite being clinically significant, remains a relatively understudied species of mycobacteria.
Weaknesses:
The primary claim of the study that the clinical strains are better adapted for hypoxic growth is yet to be comprehensively investigated. However, this reviewer thinks such an investigation would require a complex experimental design and perhaps form an independent study.
Comments on revisions:
The revised paper has satisfactorily addressed my previous concerns, and I have no further issues with this paper.
Author response:
The following is the authors’ response to the previous reviews
Reviewer #1 (Public review) :
Comments on revisions:
The revised manuscript has responded to the previous concerns of the reviewers, albeit modestly. The overemphasis on hypoxic adaptation of the clinical isolates persists as a key concern in the paper. The authors have compared the growth curves of each of the clinical and ATCC strains under normal and hypoxic conditions (Fig. 8), but don't show how mutations in some of the genes identified in Tn-seq would impact the growth phenotype under hypoxia. They largely base their arguments on previously published results.
As I mentioned previously, the paper will be better without over-interpreting the TnSeq data in the context of hypoxia.
Thank you for the comment on the issue of not determining the impact of individual gene mutations identified in TnSeq on the growth phenotypes under hypoxia.
We agree that the lack of validation of the TnSeq results is a limitation of this study. Without evidence of the growth pattern of each gene-deletion mutant under hypoxia, there is a risk of over-interpreting the data, even though the data are carefully interpreted based on previous reports. We consider it necessary to confirm the phenomenon using knockout mutants.
We have just recently succeeded in constructing the vector plasmids for making knockout mutants of M. intracellulare (Tateishi, Microbiol Immunol, 2024). We will proceed to the validation experiments for TnSeq-hit genes by constructing knockout mutants. We have already mentioned this point as a limitation of this study in the Discussion (pages 35-36, lines 630-640 in the revised manuscript).
Reference.
Tateishi, Y., Nishiyama, A., Ozeki, Y. & Matsumoto, S. Construction of knockout mutants in Mycobacterium intracellulare ATCC13950 strain using a thermosensitive plasmid containing negative selection marker rpsL+. Microbiol Immunol 68, 339-347 (2024).
Other points:
The y-axis legends of plots in Fig.8c are illegible.
Following the comment, we have corrected Figure 8c and checked the uploaded PDF.
The statements in lines 376-389 are convoluted and need some explanation. If the clinical strains enter the log phase sooner than the ATCC strain under hypoxia, then how come their growth rates (Fig. 8c) are lower? Aren't they expected to grow faster?
Thank you for the comment on the interpretation of the difference in bacterial growth under hypoxia between the MAC-PD strains and the ATCC type strain. The growth curve is characterized by both the onset of logarithmic growth and its speed. In this study, we evaluated the former as the timing of the midpoint and the latter as the growth rate at the midpoint. These are independent parameters: early entry into log phase does not imply a fast growth rate at the midpoint.
Our results demonstrated that 5 (M.i.198, M.i.27, M003, M019 and M021) out of 8 clinical MAC-PD strains entered log phase early and continued to grow logarithmically for a long time (at a slow rate). These data suggest the capacity of MAC-PD strains to continue replication for a long time under hypoxic conditions. By contrast, the ATCC type strain showed a delayed onset of logarithmic growth caused by a long lag phase, and the duration of logarithmic growth was short even once it started; the log phase soon transitioned to the stationary phase. These data suggest a lower capacity of the ATCC strain to continue replication under hypoxic conditions.
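To illustrate how the timing of the midpoint and the growth rate at the midpoint can be extracted independently from a growth curve, here is a minimal logistic-fit sketch on synthetic OD data; the functional form, units, and parameter values are assumptions, and the authors' actual fitting procedure may differ:

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t_mid):
    # K: plateau density; t_mid: timing of midpoint; slope at midpoint = r*K/4
    return K / (1.0 + np.exp(-r * (t - t_mid)))

t = np.arange(0.0, 120.0, 4.0)                 # hours
rng = np.random.default_rng(0)
od = logistic(t, 1.2, 0.15, 48.0) + 0.02 * rng.normal(size=t.size)  # synthetic

(K, r, t_mid), _ = curve_fit(logistic, t, od, p0=[1.0, 0.1, 40.0])
print(f"midpoint: {t_mid:.1f} h, growth rate at midpoint: {r * K / 4:.3f} OD/h")

An early midpoint combined with a small rate at the midpoint corresponds to the clinical strains' pattern (early entry, slow sustained growth), while a late midpoint with a larger rate matches the ATCC13950 pattern described above.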
Following the comment, we have added the interpretation of the growth curve pattern as follows (page 22, lines 379-392 in the revised manuscript): “The growth rate at midpoint under hypoxic conditions was significantly lower in these 5 clinical MAC-PD strains than in ATCC13950. The early entry to log phase followed by long-term logarithmic growth (slow growth rate at midpoint) suggests the capacity of these 5 clinical MAC-PD strains to continue replication for a long time under hypoxic conditions. On the other hand, the remaining 3 clinical MAC-PD strains (M018, M001 and MOTT64) did not show a significant change in the growth rate between aerobic and hypoxic conditions, suggesting that there are different levels of capacity in maintaining long-term replication under hypoxia among clinical MAC-PD strains. In ATCC13950, the entry to log phase was significantly delayed under 5% oxygen compared to aerobic conditions, and the growth rate at midpoint was significantly increased under hypoxic conditions compared to aerobic conditions. Such a long-term lag phase followed by a short-term log phase suggests a lower capacity for ATCC13950 to continue replication under hypoxic conditions compared to clinical MAC-PD strains.”
Reviewer #4 (Public review):
Comments on revisions:
The revised version has satisfactorily addressed my initial comments in the discussion section.
The authors thank the Reviewer for understanding our reply.
Reviewer #5 (Public review):
Comments on revisions:
There is quite a lot of data, and this could have been a really impactful study if the authors had channeled the Tn mutagenesis by focusing on one pathway or network; it looks scattered. However, compared with the previous version, the authors have made significant improvements to the manuscript and have provided comments that fairly address my questions.
The authors thank the Reviewer for understanding our reply, and for the comments suggesting future TnSeq studies that focus on one pathway or network.
eLife Assessment
This is an important study that utilized in vivo optical measurements of the cortical metabolic rate of O2 and blood flow, as well as measurements in isolated mitochondria, to assess the uncoupling of oxidative phosphorylation due to hypoxia-ischemia injury of the neonatal brain and the effects of hypothermia treatment. The combination of state-of-the-art optical measurements, mitochondrial assays, and the use of various control experiments provides convincing evidence for the derived conclusions. This work will be of interest to those in the mitochondrial metabolomics, brain injury, and hypoxia fields.
Reviewer #1 (Public review):
Summary:
This manuscript addresses the important problem of the uncoupling of oxidative phosphorylation due to hypoxia-ischemia injury in the neonatal brain and provides insight into the neuroprotective mechanisms of hypothermia treatment.
Strengths:
The authors used a combination of in vivo imaging of awake P10 mice and experiments on isolated mitochondria to assess various key parameters of brain metabolism during hypoxia-ischemia with and without hypothermia treatment. This unique approach resulted in a comprehensive data set that provides solid evidence to support the derived conclusions.
Weaknesses:
Several potential weaknesses were identified in the original submission, which the authors subsequently addressed in the revised manuscript. Here is the brief list of the questions:
(1) Is it possible that the observed relatively low baseline OEF and trends of increased OEF and CBF over several hours after the imaging start were partially due to slow recovery from anesthesia?
(2) What was the pain management, and is there a possibility that some of the observations were influenced by the pain-reducing drugs or their absence?
(3) Were P10 mice significantly stressed during imaging in the awake state because they didn't have head-restraint habituation training?
(4) Considering high metabolism and blood flow in the cortex, it could be potentially challenging to predict cortical temperature based on the skull temperature, particularly in the deeper part of the cortex.
(5) The map of estimated CMRO2 looks quite heterogeneous across the brain surface. Could this be partially resulting from the measurement artefact?
(6) It would be beneficial to provide more detailed justification for using P10 mice in the experiments.
Reviewer #3 (Public review):
Sun et al. present a comprehensive study using a novel photoacoustic microscopy setup and mitochondrial analysis to investigate the impact of hypoxia-ischemia (HI) on brain metabolism and the protective role of therapeutic hypothermia. The authors elegantly demonstrate three connected findings: (1) HI initially suppresses brain metabolism, (2) subsequently triggers a metabolic surge linked to oxidative phosphorylation uncoupling and brain damage, and (3) therapeutic hypothermia mitigates HI-induced damage by blocking this surge and reducing mitochondrial stress.
The study's design and execution are great, with a clear presentation of results and methods. Data is nicely presented, and methodological details are thorough.
However, a minor concern is the extensive use of abbreviations, which can hinder readability. Although all the abbreviations are introduced in the text, their overuse may render the text hard to read for non-specialist audiences. Additionally, sharing the custom Matlab and other software scripts online, particularly those used for blood vessel segmentation, would be a valuable resource for the scientific community. In addition, while the study focuses on the short-term effects of HI, exploring the long-term consequences and definitively elucidating HI's impact on mitochondria would further strengthen the manuscript's impact.
Despite these minor points, this manuscript is very interesting.
Comments on revisions:
All addressed.
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public review)
(1) This manuscript addresses an important problem of the uncoupling of oxidative phosphorylation due to hypoxia-ischemia injury of the neonatal brain and provides insight into the neuroprotective mechanisms of hypothermia treatment.
The authors used a combination of in vivo imaging of awake P10 mice and experiments on isolated mitochondria to assess various key parameters of brain metabolism during hypoxia-ischemia with and without hypothermia treatment. This unique approach resulted in a comprehensive data set that provides solid evidence for the derived conclusions.
We thank the reviewer for the positive feedback.
(2) The experiments were performed acutely on the same day when the surgery was performed. There is a possibility that the physiology of mice at the time of imaging was still affected by the previously applied anesthesia. This is particularly of concern since the duration of anesthesia was relatively long. Is it possible that the observed relatively low baseline OEF (~20%) and trends of increased OEF and CBF over several hours after the imaging start were partially due to slow recovery from prolonged anesthesia? The potential effects of long exposure to anesthesia before imaging experiments were not discussed.
We thank the reviewer for this important comment and for pointing out the potential influence of anesthesia on the physiological state of the animals. We apologize for any confusion. To clarify, all PAM imaging experiments were conducted in awake animals. Isoflurane anesthesia was used only during two brief surgical procedures: (1) the installation of the head-restraint plastic head plate and (2) the right common carotid artery (CCA) ligation. Each anesthesia session lasted less than 20 minutes.
We have revised the Methods section to provide additional details:
For the subsection Procedures for PAM Imaging on page 17, we clarified the sequence of procedures during the head plate installation, as well as the corresponding anesthesia duration:
“After the applied glue was solidified (~20 min), the animal was first returned to its cage for full recovery from anesthesia, and then carefully moved to the treadmill and secured to the metal arm-piece with two #4–40 screws for awake PAM imaging. The total duration of anesthesia, including preparation and glue solidification, was approximately 20 minutes.”
For the subsection Neonatal Cerebral HI and Hypothermia Treatment on page 19, we also clarified the CCA ligation procedure:
“Briefly, P10 mice of both sexes anesthetized with 2% isoflurane were subjected to the right CCA-ligation. To manage pain, 0.25% Bupivacaine was administered locally prior to the surgical procedures, which took less than 10 minutes. After a recovery period for one hour, awake mice were exposed to 10% O<sub>2</sub> for 40 minutes in a hypoxic chamber at 37 °C.”
Regarding the reviewer’s concern about the observed trends in OEF and CBF, we agree that residual effects of anesthesia could, in principle, influence physiological parameters. However, we believe this is unlikely in this study for the following reasons. First, all imaging was conducted in awake animals after a clearly defined recovery period. Second, the trend of increasing OEF and CBF over time was consistent across animals and aligned with expected physiological responses following hypoxic-ischemic injury. In particular, the relatively low baseline OEF (0.21 at 37°C) is consistent with our previous study (0.25; Cao et al., 2018). The gradual increase in CBF and OEF reflects metabolic compensation and reperfusion following hypoxia-ischemia, as previously described (Lin and Powers, 2018). Therefore, we believe the observed changes are of physiological origin rather than anesthesia-related artifacts.
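For orientation, a minimal sketch of the Fick-type relation commonly used to estimate CMRO<sub>2</sub> from the quantities PAM measures; the numbers are illustrative assumptions, not values from this study, though the resulting OEF happens to match the quoted 0.21 baseline:

# Fick principle: CMRO2 = CaO2 * CBF * OEF, with OEF = (SaO2 - SvO2) / SaO2.
CaO2 = 0.20                 # mL O2 per mL blood (assumed arterial O2 content)
CBF = 1.0                   # mL blood per g tissue per min (assumed)
SaO2, SvO2 = 0.95, 0.75     # arterial / venous O2 saturation (assumed)

OEF = (SaO2 - SvO2) / SaO2  # ~0.21
CMRO2 = CaO2 * CBF * OEF    # mL O2 per g per min
print(round(OEF, 2), round(CMRO2, 3))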
(3) The Methods Section does not provide information about drugs administered to reduce the pain. If pain was not managed, mice could be experiencing significant pain during experiments in the awake state after the surgery. Since the imaging sessions were long (my impression based on information from the manuscript is that imaging sessions were ~4 hours long or even longer), the level of pain was also likely to change during the experiments. It was not discussed how significant and potentially evolving pain during imaging sessions could have affected the measurements (e.g., blood flow and CMRO<sub>2</sub>). If mice received pain management during experiments, then it was not discussed if there are known effects of used drugs on CBF, CMRO<sub>2</sub>, and lesion size after 24 hr.
We thank the reviewer for this valuable comment regarding pain management. We confirm that local analgesia was administered to all animals prior to surgical procedures. Specifically, 0.25% Bupivacaine was applied locally before both the head-restraint plate installation and the CCA ligation. These details have now been clarified in the Methods section:
For the subsection Procedures for PAM Imaging on page 16, we added:
“To manage pain, 0.25% Bupivacaine was administered locally prior to the surgical procedures.”
For the subsection Neonatal Cerebral HI and Hypothermia Treatment on page 18, we added:
“To manage pain, 0.25% Bupivacaine was administered locally prior to the surgical procedures, which took less than 10 minutes.”
To our knowledge, Bupivacaine has minimal systemic effects at the dose used and is unlikely to significantly alter CBF, CMRO<sub>2</sub>, or lesion development (Greenberg et al., 1998). No other analgesics (e.g., NSAIDs or opioids) were administered unless distress symptoms were observed—which did not occur in this study.
Additionally, although imaging sessions were extended (up to 2 hours), animals remained calm and showed no signs of pain or distress during or after the procedures. Throughout the experimental period (up to 24 hours post-surgery), animals were monitored for signs of discomfort (e.g., abnormal activity, breathing, or weight gain), but no additional analgesia was required. The neonatal HI procedures are considered minimally invasive, and based on our protocol and prior experience, local Bupivacaine provides effective analgesia during and after the brief surgeries. We have added a corresponding note in the Discussion section (newly added subsection: Limitations in this study, the last paragraph) on page 15:
“We observed no signs of distress or pain and did not use stress- or pain-reducing drugs during imaging. However, potential effects of stress or residual pain on CBF and CMRO<sub>2</sub> cannot be fully ruled out. Future studies could incorporate more detailed pain assessment and stress-mitigation strategies to further enhance physiological reliability.”
(4) Animals were imaged in the awake state, but they were not previously trained for the imaging procedure with head restraint. Did animals receive any drugs to reduce stress? Our experience with well-trained young-adult as well as old mice is that they can typically endure 2 and sometimes up to 3 hours of head-restrained awake imaging with intermittent breaks for receiving the rewards before showing signs of anxiety. We do not have experience with imaging P10 mice in the awake state. Is it possible that P10 mice were significantly stressed during imaging and that their stress level changed during the imaging session? This concern about the potential effects of stress on the various measured parameters was not discussed.
We thank the reviewer for this important comment regarding the potential effects of stress during awake imaging. The neonatal mice used in our study were P10, a stage at which animals are still physiologically immature and relatively inactive. Due to their small size and limited mobility, these animals did not struggle or show signs of distress during the imaging sessions. All animals remained calm and stable throughout the procedure, and no stress-reducing drugs were administered.
We agree that, unlike older animals, P10 mice are not amenable to prior behavioral training. However, their underdeveloped motor activity and natural docility at this stage allowed for stable head-restrained imaging without inducing overt stress responses. Although no behavioral signs of stress were observed, we acknowledge that subtle physiological effects cannot be entirely excluded. We have added a brief discussion in the Discussion section (newly added subsection: Limitations in this study, the last paragraph) on page 15:
“Lastly, for awake imaging, the small size of neonatal mice at P10 aids stability during awake PAM imaging, though it limits the feasibility of prior training, which is typically possible in older animals.”
(5) The temperature of the skull was measured during the hypothermia experiment by lowering the water temperature in the water bath above the animal's head. Considering high metabolism and blood flow in the cortex, it could be challenging to predict cortical temperature based on the skull temperature, particularly in the deeper part of the cortex.
We thank the reviewer for this helpful comment and for highlighting an important technical consideration. We acknowledge that we did not directly measure intracortical tissue temperature during the hypothermia experiments. While we recognize that relying on skull temperature may have limitations—particularly in reflecting temperature changes in deeper cortical regions—this approach is consistent with clinical practice, where intracortical temperature is typically not measured. Moreover, prior studies have shown that skull or brain surface temperature generally reflects cortical thermal dynamics to a reasonable extent under controlled conditions (Kiyatkin, 2007). We have added the following note in the Discussion section (newly added subsection: Limitations in this study, the 2<sup>nd</sup> paragraph) on page 14:
“A technical limitation is the absence of direct intracortical temperature measurements during hypothermia; we relied on skull temperature, which may not fully capture temperature dynamics in deeper cortical layers. However, this approach aligns with clinical practice, where intracortical temperature is not typically measured. Future studies could benefit from more precise intracortical assessments.”
(6) The map of estimated CMRO<sub>2</sub> (Fig. 4B) looks very heterogeneous across the brain surface. Is it a coincidence that the highest CMRO<sub>2</sub> is observed within the central part of the field of view? Is there previous evidence that CMRO<sub>2</sub> in these parts of the mouse cortex could vary a few folds over a 1-2 mm distance?
We appreciate the reviewer’s insightful observation regarding the spatial heterogeneity observed in the estimated CMRO<sub>2</sub> map (Fig. 4B). This heterogeneity is not a result of scanning bias, as uniform contour scanning was performed across the entire field of view. The higher CMRO<sub>2</sub> values observed in the central region are unlikely to be artifacts and more likely reflect underlying physiological variability.
Our CMRO<sub>2</sub> estimation is based on an algorithm we previously developed and validated in other tissues. Specifically, we have successfully applied this algorithm to assess oxygen metabolism in the mouse kidney (Sun et al., 2021) and to monitor vascular adaptation and tissue oxygen metabolism during cutaneous wound healing (Sun et al., 2022). These studies demonstrated the algorithm's capability to capture spatial variations in oxygen metabolism. Although the current application to the brain is novel, the algorithm has been validated in controlled experimental settings and shown to produce consistent results. We acknowledge that the observed range of CMRO<sub>2</sub> appears relatively broad across a 1–2 mm distance; however, such heterogeneity may arise from local differences in vascular density, metabolic demand, or tissue oxygenation — all of which can vary across cortical regions, even within small spatial scales. We have added a brief note in the Discussion (Subsection: Optical CMRO<sub>2</sub> detection in neonatal care) on page 13 to acknowledge this point:
“Additionally, the spatial heterogeneity in estimated CMRO<sub>2</sub> observed in our data may reflect underlying physiological variability, including differences in vascular structure or metabolic demand across cortical regions. Future studies will aim to further validate and interpret these spatial patterns.”
(7) The justification for using P10 mice in the experiments has not been well presented in the manuscript.
We thank the reviewer for pointing out the need to clarify our choice of developmental stage. We chose P10 mice for our hypoxia-ischemia injury model because this stage is widely recognized as developmentally comparable to human term infants in terms of brain maturation. This approach has been validated by several previous studies (Clancy et al., 2007; Mallard and Vexler, 2015; Sheldon et al., 2018). We have added the following clarification to the Methods section (Subsection: Neonatal Cerebral HI and Hypothermia Treatment) on page 18:
“P10 mice were chosen for our experiments as they are widely used to model near-term infants in humans. At this developmental stage, the brain maturation in mice closely parallels that of near-term infants, making them an appropriate model for studying neonatal brain injury and therapeutic interventions (Clancy et al., 2007; Mallard and Vexler, 2015; Sheldon et al., 2018).”
(8) It was not discussed how the observations made in this manuscript could be affected by the potential discrepancy between the developmental stages of P10 mice and human babies regarding cellular metabolism and neurovascular coupling.
We thank the reviewer for raising this important point regarding developmental differences between P10 mice and human infants. We have discussed this issue by adding the following statement to the Discussion section (newly added subsection: Limitations in this study, the 1<sup>st</sup> paragraph) on page 15, where we summarize the overall study design and model selection:
“While P10 mice are widely used to model near-term human infants, developmental differences in cellular metabolism and neurovascular coupling may affect the observed outcomes and limit direct clinical translation (Clancy et al., 2007; Mallard and Vexler, 2015; Sheldon et al., 2018). Nevertheless, the P10 model remains a valuable and widely accepted tool for studying neonatal hypoxia-ischemia mechanisms and evaluating therapeutic interventions.”
(9) Regarding the brain temperature measurements, the authors should use a new cohort of mice, implant miniature thermocouples at 1 mm, 0.5 mm, and immediately below the skull in different mice, and verify the temperature in the brain cortex under the conditions applied in the experiments. The same approach could be applied to a few mice undergoing 4-hr-long hypothermia treatment in a chamber, which will provide information about the brain temperature that resulted in the observed protection from the injury.
We thank the reviewer for this helpful recommendation. We fully agree that direct intracortical temperature measurement would provide more accurate insight into thermal dynamics during hypothermia treatment. However, the primary aim of this study was not to characterize the precise intracortical temperature response under hypothermic conditions, but rather to examine the effects of hypothermia on CMRO<sub>2</sub> and mitochondrial function. Due to the substantial time and resources required to perform direct intracortical temperature monitoring—and considering the technical focus of the current work—we respectfully suggest reserving such investigations for a future study specifically focused on thermal dynamics in hypoxia-ischemia models.
We have acknowledged this limitation in the subsection Limitations in this study of the Discussion on page 15, noting that skull temperature was used as an approximation of brain temperature and that this approach is consistent with clinical practice, where intracortical temperature is typically not measured. We also note that future studies may benefit from more precise assessments using intracortical probes.
(10) The mean values presented in Fig. 4G are much lower than the peak values in the 2D panels and potentially were calculated as the average values over the entire field of view. Please provide more details on how CMRO<sub>2</sub> was estimated and if the validity of the measurements is expected across the entire field of view. If there are parts of the field of view where the estimation of CMRO<sub>2</sub> is more reliable for technical reasons, maybe one way to compute the mean values is to restrict the usable data to the more centralized part of the field of view.
We thank the reviewer for this thoughtful comment. We confirm that CMRO<sub>2</sub> values shown in Figure 4G were calculated as spatial averages over the entire field of view (FOV; ~5 × 3 mm<sup>2</sup>) encompassing both hemicortices, as shown in Figure 1C. Regarding the observed CMRO<sub>2</sub> values, the apparent difference likely reflects a comparison between two different post-HI time points. Specifically, the ~0.5 value shown for the 37°C ipsilateral group in Figure 4G reflects the average CMRO<sub>2</sub> measured 24 hours after HI, while the ~1.5 value in Figure 2D (red line) corresponds to CMRO<sub>2</sub> during the early 0–2 hour post-HI period. The temporal difference accounts for the apparent discrepancy in magnitude. We understand the importance of consistency across the field of view and have clarified this point in the subsection Procedures for PAM Imaging in the Methods on page 17 (“For the imaging field covering both hemicortices between the Bregma and Lambda of the neonatal mouse (5 × 3 mm<sup>2</sup> as shown in Figure 1C, with each hemicortex measuring 2.5 × 3 mm<sup>2</sup>)”), as well as in the Figure 4 legend on page 34 (“Correlation of CMRO<sub>2</sub> and post-HI brain infarction in mouse neonates at 24 hours”).
In our model and setup, CMRO<sub>2</sub> estimation is spatially robust across the FOV under standard imaging conditions. We recognize, however, that certain peripheral regions may be more prone to signal attenuation. Future refinement of region selection could further improve spatial averaging strategies. For the current study, full-FOV averaging was used consistently across all groups to maintain comparability.
(11) Minor: Results presented in Supplementary Tables have too many significant digits.
Thank you for the helpful suggestion. We have revised Supplementary Tables S1 and S2 to reduce the number of significant digits and improve clarity.
Reviewer #2 (Public review)
(1) In this study, the authors have hypothesized that mitochondrial injury in HIE is caused by OXPHOS uncoupling, which is the cause of secondary energy failure in HI, and that therapeutic hypothermia rescues secondary energy failure. The methodologies used are state-of-the-art and include the PAM technique in live animals, bioenergetic studies in isolated mitochondria, and others.
The study is comprehensive and impressive. The article is well written and statistical analyses are appropriate.
We thank the reviewer for the positive feedback.
(2) The manuscript does not discuss the limitation of this animal model study in view of the clinical scenario of neonatal hypoxia-ischemia.
We thank the reviewer for this valuable feedback. In response, we have added a dedicated “Limitations in this study” subsection in the Discussion, where we address the potential limitations of this animal model in the context of the clinical scenario of neonatal hypoxia-ischemia in the first paragraph on page 14, including the developmental differences between P10 mice and human infants.
(3) I see many studies on PubMed on bioenergetics and HI. Hence, it is unclear what is novel and what is known.
We thank the reviewer for this important comment regarding the novelty of our study in the context of existing research on bioenergetics and hypoxia-ischemia (HI). To better clarify the novel aspects of our work, we have highlighted the relevant content in the Abstract (page 4) and Introduction (page 5). Specifically, while many studies have explored HI-related bioenergetic dysfunction, the mechanisms by which therapeutic hypothermia modulates CMRO<sub>2</sub> and mitochondrial function post-HI remain poorly understood.
Abstract on page 4: “However, it is unclear how post-HI hypothermia helps to restore the balance, as cooling reduces CMRO<sub>2</sub>. Also, how transient HI leads to secondary energy failure (SEF) in neonatal brains remains elusive. Using photoacoustic microscopy, we examined the effects of HI on CMRO<sub>2</sub> in awake 10-day-old mice, supplemented by bioenergetic analysis of purified cortical mitochondria.”
Introduction on page 5: “The use of awake mouse neonates avoided the confounding effects of anesthesia on CBF and CMRO<sub>2</sub> (Cao et al., 2017; Gao et al., 2017; Sciortino et al., 2021; Slupe and Kirsch, 2018). In addition, we measured the oxygen consumption rate (OCR), reactive oxygen species (ROS), and the membrane potential of mitochondria that were immediately purified from the same cortical area imaged by PAM. This dual-modal analysis enabled a direct comparison of cerebral oxygen metabolism and cortical mitochondrial respiration in the same animal. Moreover, we compared the effects of therapeutic hypothermia on oxygen metabolism and mitochondrial respiration, and correlated the extent of CMRO<sub>2</sub>-reduction with the severity of infarction at 24 hours after HI. Our results suggest that blocking HI-induced OXPHOS-uncoupling is an acute effect of hypothermia and that optical detection of CMRO<sub>2</sub> may have clinical applications in HIE.”
In this study, we propose that uncoupled oxidative phosphorylation (OXPHOS) underlies the secondary energy failure observed after HI, and we demonstrate that hypothermia suppresses this pathological CMRO<sub>2</sub> surge, thereby protecting mitochondrial integrity and preventing injury. Additionally, our use of photoacoustic microscopy (PAM) in awake neonatal mice represents a novel, non-invasive approach to track cerebral oxygen metabolism, with potential clinical relevance for guiding hypothermia therapy.
(4) What are the limitations of ex-vivo mitochondrial studies?
We thank the reviewer for this insightful comment. We acknowledge that ex-vivo mitochondrial assays do not fully replicate in vivo physiological conditions, as they lack systemic factors such as blood flow, cellular interactions, and intact tissue architecture. However, these assays are well-established and widely accepted in the field for evaluating mitochondrial function under controlled conditions (Caspersen et al., 2008; Niatsetskaya et al., 2012). Despite their limitations, they enable direct comparisons of mitochondrial activity across experimental groups and provide valuable mechanistic insights that complement in vivo observations.
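To make concrete what these ex-vivo assays quantify, here is a toy calculation of the respiratory control ratio (RCR), the standard OCR-derived index of OXPHOS coupling. The function name and all numbers below are illustrative assumptions, not values from this study:

```python
# Toy respiratory control ratio (RCR) from oxygen consumption rates (OCR).
# All values are hypothetical, for illustration only.

def respiratory_control_ratio(state3_ocr: float, state4_ocr: float) -> float:
    """RCR = State 3 (ADP-stimulated) OCR / State 4 (resting) OCR.

    Well-coupled mitochondria yield high ratios; uncoupling inflates
    State 4 respiration and drives the ratio toward 1.
    """
    return state3_ocr / state4_ocr

# Hypothetical sham vs. post-HI measurements (nmol O2/min/mg protein):
sham_rcr = respiratory_control_ratio(state3_ocr=120.0, state4_ocr=20.0)  # 6.0
hi_rcr = respiratory_control_ratio(state3_ocr=110.0, state4_ocr=55.0)    # 2.0
print(f"sham RCR: {sham_rcr:.1f}, HI RCR: {hi_rcr:.1f}")
```

A falling RCR across groups is exactly the kind of controlled between-group comparison these assays support, even without the systemic context of the intact animal.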
(5) The PAM technique limits image resolution beyond 500-750 micron depth. Assessing the basal ganglia may not be possible with this approach?
We thank the reviewer for this important comment. We agree that the imaging depth of PAM is limited and may not allow assessment of deeper brain structures such as the basal ganglia. However, in our neonatal HI model—as in many clinical cases of HIE—cortical injury is typically more severe and represents a major focus for mechanistic and therapeutic investigations. The cortical regions assessed with PAM are thus highly relevant to the pathophysiology of neonatal HI. We have now acknowledged this depth limitation in the third paragraph of the newly added Limitations in this study subsection of the Discussion on page 15:
“Another limitation of this study is the restricted imaging depth of the PAM technique, which is typically less than 1 mm and therefore does not allow assessment of deeper brain structures such as the basal ganglia. However, in both our neonatal HI model and most clinical cases of neonatal hypoxia-ischemia, cortical injury tends to be more prominent and functionally significant. As such, our cortical measurements remain highly relevant for investigating the mechanisms of injury and evaluating therapeutic interventions.”
(6) Hypothermia in the present study reduces the brain temperature from 37°C to 29-32°C. In the clinical setting, head temperature is reduced to 33-34.5°C in neonatal hypoxia-ischemia. Hence, a drop in temperature to 29°C is much lower than in clinical practice. How can the present study, with its greater drop in head temperature, be interpreted for understanding the pathophysiology of therapeutic hypothermia in neonatal HIE? Moreover, using a baseline temperature of 37°C and dropping to 29°C in the HIE model seems much different from the clinical scenario. Please discuss.
We thank the reviewer for raising this important point regarding temperature ranges in our study. In Figure 1, we used a broader temperature range (down to 29°C) to explore the general relationship between temperature and CMRO<sub>2</sub> in uninjured neonatal mice. This experiment was not intended to model therapeutic hypothermia directly, but rather to characterize the baseline physiological responses.
For all experiments involving hypothermia as a therapeutic intervention following HI, we consistently maintained a brain temperature of 32°C, which falls within the clinically accepted mild hypothermia range for neonatal HIE (typically 33–34.5°C). We believe this temperature closely mimics clinical practice and supports the translational relevance of our findings.
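For readers outside the field, the temperature dependence of cerebral metabolism is often summarized with a Q10 relationship. This is a standard parameterization from the hypothermia literature, not the authors' fitted model, and the Q10 value used below is purely illustrative:

$$\mathrm{CMRO_2}(T) = \mathrm{CMRO_2}(37\,^{\circ}\mathrm{C}) \cdot Q_{10}^{(T-37)/10}$$

With an assumed $Q_{10} = 3$, cooling to 32°C scales CMRO<sub>2</sub> by $3^{-0.5} \approx 0.58$ (a roughly 40% reduction), while cooling to 29°C scales it by $3^{-0.8} \approx 0.42$. This is one way to see why the broader range in Figure 1 probes baseline physiology rather than mimicking clinical cooling.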
(7) NMR was assessed ex vivo. How does it relate to in vivo assessment? Infants admitted to the neonatal intensive care unit frequently get MRI with spectroscopy. How do the MRS findings in human newborns with HIE correlate with the ex-vivo evaluation of metabolites?
We thank the reviewer for this insightful question. While our study assessed brain metabolites ex vivo, similar metabolic changes have been observed in vivo using proton magnetic resonance spectroscopy (¹H-MRS) in infants with HIE. Specifically, reductions in N-acetylaspartate (NAA) — a marker of neuronal integrity — have been reported in neonates with severe brain injury, aligning with our ex vivo findings. This correlation between in vivo and ex vivo assessments supports the translational relevance of our model for studying metabolic disruption in neonatal HIE. We have added this point to the subsection Using Optically Measured CMRO<sub>2</sub> to Detect Neonatal HI Brain Injury of the Results on page 8, along with a supporting clinical reference (Lally et al., 2019):
“In addition, in vivo proton MRS in infants with HIE has also shown a reduction in NAA, particularly in cases of severe injury (Lally et al., 2019). This reduction in NAA, observed in neonatal intensive care settings, reflects neuronal and axonal loss or dysfunction and serves as a biomarker for injury severity. The alignment between our ex vivo observations and in vivo MRS findings in clinical studies reinforces the translational relevance of our model for investigating metabolic disturbances in neonatal HIE.”
Reviewer #3 (Public review)
(1) Sun et al. present a comprehensive study using a novel photoacoustic microscopy setup and mitochondrial analysis to investigate the impact of hypoxia-ischemia (HI) on brain metabolism and the protective role of therapeutic hypothermia. The authors elegantly demonstrate three connected findings: (1) HI initially suppresses brain metabolism, (2) subsequently triggers a metabolic surge linked to oxidative phosphorylation uncoupling and brain damage, and (3) therapeutic hypothermia mitigates HI-induced damage by blocking this surge and reducing mitochondrial stress.
The study's design and execution are great, with a clear presentation of results and methods. The data are nicely presented, and the methodological details are thorough.
We thank the reviewer for the positive feedback.
(2) However, a minor concern is the extensive use of abbreviations, which can hinder readability. Although all the abbreviations are introduced in the text, their overuse may render the text hard to read for non-specialist audiences. Additionally, sharing the custom MATLAB and other software scripts online, particularly those used for blood vessel segmentation, would be a valuable resource for the scientific community. In addition, while the study focuses on the short-term effects of HI, exploring the long-term consequences and definitively elucidating HI's impact on mitochondria would further strengthen the manuscript's impact.
We thank the reviewer for these valuable suggestions. Please find our point-by-point responses below:
Abbreviations: To improve readability, we have added a List of Abbreviations on page 3 to help readers, especially non-specialists, navigate the terminology more easily.
MATLAB Code Availability: The methodology for blood vessel segmentation was described in detail in our previous publication (Sun et al., 2020). We have now updated the subsection Quantification of Cerebral Hemodynamics and Oxygen Metabolism by PAM of the Methods on page 18 to provide additional details and have indicated that the MATLAB scripts are available upon request.
“Briefly, this process involves generating a vascular map using signal amplitude from the Hilbert transformation, selecting a region slightly larger than the vessel of interest, and applying Otsu’s thresholding method to remove background pixels. Isolated or spurious boundary fragments are then removed to improve boundary smoothness. The customized MATLAB code used for vessel segmentation is available upon request.”
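For readers who want a concrete picture of these steps, below is a minimal Python sketch of the described pipeline. The authors' code is in MATLAB and available on request, so every function and parameter here (the array shape, the ROI convention, the min_size cutoff) is an assumption for illustration, not their implementation:

```python
# Minimal sketch of the quoted vessel-segmentation steps, assuming a 3D
# stack of raw PAM A-lines with shape (depth, y, x). Illustrative only;
# the authors' published pipeline is in MATLAB (Sun et al., 2020).
import numpy as np
from scipy.signal import hilbert
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def segment_vessel(rf_volume: np.ndarray, roi: tuple) -> np.ndarray:
    """Return a boolean vessel mask within a region of interest.

    rf_volume: (depth, y, x) raw photoacoustic signals.
    roi: pair of slices selecting a region slightly larger than the vessel.
    """
    # 1. Vascular map from the Hilbert-transform signal amplitude,
    #    maximum-projected along depth.
    envelope = np.abs(hilbert(rf_volume, axis=0))
    vascular_map = envelope.max(axis=0)

    # 2. Restrict to the region slightly larger than the vessel of interest.
    region = vascular_map[roi]

    # 3. Otsu's thresholding removes background pixels.
    mask = region > threshold_otsu(region)

    # 4. Drop isolated or spurious fragments to smooth the boundary.
    return remove_small_objects(mask, min_size=20)
```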
Long-Term Effects of Hypothermia: We agree that exploring long-term outcomes would enhance the broader impact of this research. While our study focuses on the acute phase following HI, prior studies have shown long-term neuroprotective benefits of therapeutic hypothermia, such as enhanced white matter development (Koo et al., 2017). We have added this point to the fourth paragraph in the subsection Limitations in this study of the Discussion on page 15:
“While our study focuses on the acute effects of hypothermia, previous research has shown long-term neuroprotective benefits, including improved white matter development post-injury (Koo et al., 2017). These findings highlight hypothermia's potential for both immediate and extended recovery, warranting further study of long-term outcomes.”
(3) Extensive use of abbreviations.
Thank you for the helpful suggestion. To improve readability for a broader audience, we have added a List of Abbreviations on page 3 of the manuscript to assist readers in navigating terminology used throughout the text. This has been included as Response #2 to Reviewer #3.
(4) Share code used to conduct the study.
Thank you for the suggestion. The methodology for vessel segmentation was previously published (Sun et al., 2020), and we have noted in the subsection Quantification of Cerebral Hemodynamics and Oxygen Metabolism by PAM of the Methods on page 18 that the MATLAB code is available upon request. This has also been included as Response #2 to Reviewer #3.
References:
Cao R, Li J, Kharel Y, Zhang C, Morris E, Santos WL, Lynch KR, Zuo Z, Hu S. 2018. Photoacoustic microscopy reveals the hemodynamic basis of sphingosine 1-phosphate-induced neuroprotection against ischemic stroke. Theranostics 8:6111–6120. doi:10.7150/thno.29435
Caspersen CS, Sosunov A, Utkina-Sosunova I, Ratner VI, Starkov AA, Ten VS. 2008. An Isolation Method for Assessment of Brain Mitochondria Function in Neonatal Mice with Hypoxic-Ischemic Brain Injury. Developmental Neuroscience 30:319–324. doi:10.1159/000121416
Clancy B, Kersh B, Hyde J, Darlington RB, Anand KJS, Finlay BL. 2007. Web-based method for translating neurodevelopment from laboratory species to humans. Neuroinformatics 5:79–94. doi:10.1385/ni:5:1:79
Greenberg RS, Zahurak M, Belden C, Tunkel DE. 1998. Assessment of oropharyngeal distance in children using magnetic resonance imaging. Anesth Analg 87:1048–1051. doi:10.1097/00000539-199811000-00014
Kiyatkin EA. 2007. Brain temperature fluctuations during physiological and pathological conditions. Eur J Appl Physiol 101:3–17. doi:10.1007/s00421-007-0450-7
Koo E, Sheldon RA, Lee BS, Vexler ZS, Ferriero DM. 2017. Effects of therapeutic hypothermia on white matter injury from murine neonatal hypoxia-ischemia. Pediatr Res 82:518–526. doi:10.1038/pr.2017.75
Lally PJ, Montaldo P, Oliveira V, Soe A, Swamy R, Bassett P, Mendoza J, Atreja G, Kariholu U, Pattnayak S, Sashikumar P, Harizaj H, Mitchell M, Ganesh V, Harigopal S, Dixon J, English P, Clarke P, Muthukumar P, Satodia P, Wayte S, Abernethy LJ, Yajamanyam K, Bainbridge A, Price D, Huertas A, Sharp DJ, Kalra V, Chawla S, Shankaran S, Thayyil S, MARBLE consortium. 2019. Magnetic resonance spectroscopy assessment of brain injury after moderate hypothermia in neonatal encephalopathy: a prospective multicentre cohort study. Lancet Neurol 18:35–45. doi:10.1016/S1474-4422(18)30325-9
Lin W, Powers WJ. 2018. Oxygen metabolism in acute ischemic stroke. J Cereb Blood Flow Metab 38:1481–1499. doi:10.1177/0271678X17722095
Mallard C, Vexler Z. 2015. Modeling ischemia in the immature brain: how translational are animal models? Stroke 46:3006–3011. doi:10.1161/STROKEAHA.115.007776
Niatsetskaya ZV, Sosunov SA, Matsiukevich D, Utkina-Sosunova IV, Ratner VI, Starkov AA, Ten VS. 2012. The Oxygen Free Radicals Originating from Mitochondrial Complex I Contribute to Oxidative Brain Injury Following Hypoxia–Ischemia in Neonatal Mice. J Neurosci 32:3235–3244. doi:10.1523/JNEUROSCI.6303-11.2012
Sheldon RA, Windsor C, Ferriero DM. 2018. Strain-Related Differences in Mouse Neonatal Hypoxia-Ischemia. Dev Neurosci 40:490–496. doi:10.1159/000495880
Sun N, Bruce AC, Ning B, Cao R, Wang Y, Zhong F, Peirce SM, Hu S. 2022. Photoacoustic microscopy of vascular adaptation and tissue oxygen metabolism during cutaneous wound healing. Biomed Opt Express, BOE 13:2695–2706. doi:10.1364/BOE.456198
Sun N, Ning B, Bruce AC, Cao R, Seaman SA, Wang T, Fritsche-Danielson R, Carlsson LG, Peirce SM, Hu S. 2020. In vivo imaging of hemodynamic redistribution and arteriogenesis across microvascular network. Microcirculation 27:e12598. doi:10.1111/micc.12598
Sun N, Zheng S, Rosin DL, Poudel N, Yao J, Perry HM, Cao R, Okusa MD, Hu S. 2021. Development of a photoacoustic microscopy technique to assess peritubular capillary function and oxygen metabolism in the mouse kidney. Kidney International 100:613–620. doi:10.1016/j.kint.2021.06.018
This is a systematic corruption of the culture of academia’s researchers, with the active complicity and encouragement of the administration. In the time it took to write this article, I passively stumbled across more examples of consequential research fraud by high-profile field-leading academics than I have space to describe here. If you want more examples, you can find them all day long. How many similar frauds remain undiscovered? One study found that, between 2000 and 2021, the fraction of European biomedical papers which were later retracted quadrupled.
perverse incentives...
eLife Assessment
This valuable study presents a well-designed set of experiments demonstrating how a planthopper salivary carbonic anhydrase can promote rice stripe virus infection by modulating callose deposition in the host plant. The authors provide solid data for the proposed protein-protein interactions, including strengthened evidence for the LssaCA-NP-OsTLP complex and clarified dynamics of LssaCA presence in planta. Overall, the work reveals a mechanistic link whereby a vector salivary protein enhances a plant β-1,3-glucanase to suppress callose-based defense, thereby facilitating early viral establishment.
Reviewer #2 (Public Review):
There is increasing evidence that viruses manipulate vectors and hosts to facilitate transmission. For arthropods, saliva plays an essential role for successful feeding on a host and consequently for arthropod-borne viruses that are transmitted during arthropod feeding on new hosts. This is so because saliva constitutes the interaction interface between arthropod and host and contains many enzymes and effectors that allow feeding on a compatible host by neutralizing host defenses. Therefore, it is not surprising that viruses change saliva composition or use saliva proteins to provoke altered vector-host interactions that are favorable for virus transmission. However, detailed mechanistic analyses are scarce. Here, Zhao and coworkers study transmission of rice stripe virus (RSV) by the planthopper Laodelphax striatellus. RSV infects plants as well as the vector, accumulates in salivary glands and is injected together with saliva into a new host during vector feeding.
The authors present evidence that a saliva-contained enzyme - carbonic anhydrase (CA) - might facilitate virus infection of rice by interfering with callose deposition, a plant defense response. In vitro pull-down experiments, yeast two-hybrid assays, and binding affinity assays convincingly show an interaction between CA and a plant thaumatin-like protein (TLP) that degrades callose. Similar experiments show that CA and TLP interact with the RSV nucleocapsid protein NP to form a complex. Formation of the CA-TLP complex increases TLP activity by roughly 30%, and integration of NP increases TLP activity further. This correlates with lower callose content in RSV-infected plants and higher virus titer. Further, silencing CA in vectors decreases virus titers in infected plants. Interestingly, aphid CA was found to play a role in plant infection with two non-persistent, non-circulative viruses, turnip mosaic virus and cucumber mosaic virus (Guo et al. 2023 doi.org/10.1073/pnas.2222040120), but the proposed mode of action is entirely different.
Editors' note: this version was assessed by the editors, without further input from the reviewers.
Author response:
The following is the authors’ response to the previous reviews
Reviewer #1 (Public review):
In this study, the authors identified an insect salivary protein, LssaCA, that participates in initial viral infection of the plant host. LssaCA directly bound to the RSV nucleocapsid protein and then interacted with a rice protein, OsTLP, that possesses endo-β-1,3-glucanase activity, enhancing OsTLP enzymatic activity and degrading the callose induced by insect feeding. The manuscript suffers from fundamental logical issues, making its central narrative highly unconvincing.
(1) These results suggested that LssaCA promoted RSV infection through a mechanism occurring not in insects or during the early stages of viral entry into plants, but in planta after viral inoculation. As we all know, callose deposition affects the feeding of piercing-sucking insects and viral entry, so this is contradictory to the results in Fig. S4 and Fig. 2. It is difficult to understand how callose could function in virus reproduction at 3 days post virus inoculation. The authors also avoided explaining this mechanism.
We appreciate your insightful comment and acknowledge that our initial description may not have been sufficiently clear.
(1) Based on the EPG results, we found that LssaCA deficiency did not significantly affect total feeding time, time to first non-phloem phase, or time to first phloem feeding (Fig. S8A-D in the revised manuscript). However, the continuity of sap ingestion was disturbed—the N4 waveform of dsLssaCA SBPHs was occasionally interrupted for brief periods (newly added Fig. S8E in the revised manuscript), likely due to phloem blockage. In the revised manuscript, we have added this analysis to the Results section (Lines 285-291 and 578-587) and provided the EPG procedure in the Materials and Methods section (Lines 670-680).
(2) We assessed RSV titers immediately post-feeding to confirm the inoculation viral loads (Fig. 2G) and at 3 dpf (Fig. 2H-I) to assess the in-planta effects following viral inoculation. This did not mean that callose functions in virus reproduction at 3 days post viral inoculation. Rather, callose deposition typically occurs immediately in response to insect feeding and virus inoculation. When measuring callose deposition, we allowed insects to feed for 24 h and quantified the callose levels immediately post feeding. The EPG results showed that sap ingestion continuity was disrupted—the N4 waveform of dsLssaCA-treated SBPHs was occasionally interrupted for brief periods (newly added Fig. S8E in the revised manuscript), likely due to phloem blockage. We have reorganized the description to avoid confusion. Please see Lines 139-144 and Fig. S8E for detail.
(1) Missing significant data. For example, the phenotypes of the transgenic plants and the RSV titers in the transgenic plants (OsTLP OE, ostlp). The callose deposition staining was also hard to find convincing. The evidence for the RSV NP-LssaCA-OsTLP tripartite interaction enhancing OsTLP enzymatic activity is insufficient.
We thank the reviewer for this insightful comment.
(1) We constructed OsTLP overexpression and mutant transgenic plants (OsTLP OE and ostlp) and assessed their phenotypes regarding RSV infection levels. Compared with wild-type plants, OsTLP OE plants exhibited accelerated growth, while ostlp plants showed growth inhibition. Following feeding by viruliferous L. striatellus, OsTLP OE plants had significantly higher RSV titers compared with wild-type plants, whereas ostlp mutant plants exhibited significantly lower RSV titers (Lines 221-228 and new Fig. 3I). These results indicate that OsTLP facilitates RSV infection in planta.
(2) The images showing callose deposition staining are representative of 15 images from 3 independent insect treatments. In addition to the staining images, we quantified fluorescence intensity and measured callose concentration by ELISA.
(2) In Figure 4A, there was an LssaCA signal in the fourth lane of the pull-down data. Did MBP also bind LssaCA? The characterization of the pull-down methods was a little rough; the GST pull-down and MBP pull-down methods should be described in more detail.
We thank the reviewer for this helpful comment. MBP did not bind LssaCA. We have repeated the pull-down experiment and now provide a clearer figure with improved results. We have also revised and provided more detailed descriptions of the GST pull-down and MBP pull-down methods. Please refer to Lines 744-774 and Figure 4A for details.
The only obstacles to executing or permanently imprisoning them are legal and procedural
doh, what else??
Postwar Western European countries were among the safest on Earth
yes, "postwar" is the key here
Mid-century England had about four times the homicide rate of modern Japan, which, given advances in medical care, implies it had similar levels of crime and disorder. This with an average age of 34, 15 years younger than the median Japanese person today!
after wars - there's always less crime
this would be expected to drive crime down
Would be expected by whom? A gut feeling?
On technical grounds, it should be much harder to get away with crimes today than in 1960, and since the vast majority of crime is committed by repeat criminals who could be much more easily apprehended near the beginning of their sprees, one would naively expect this alone to significantly reduce crime. But clearance rates have instead plummeted; it’s much easier for the typical criminal to get away with it. How much worse would this be without these advances?
Less money goes towards solving crimes. There are no strong incentives to reduce it.
If someone asks how come and why, it's difficult to see what models politicians use to decide whether something is harmful, and to what extent, for their ideology or their chances of being elected or re-elected. That's because these systems are multivariate: what matters is the effect of something in the mix rather than standing alone.
Over the past 40 years, average BMI among young adults (18-25) increased by 4.5 points in the US. Without this, it’s reasonable to assume crime rates would’ve increased further.
You either base it on evidence or you don't. It's not a single-variable system.
Liverpool FC without English
Football Focus (24h Football + 8h Workshops)
Liverpool FC Standard with English
Football + English (24h Football + 13h English)
eLife Assessment
The medicinal leech preparation is an amenable system in which to understand the neural basis of locomotion. Here a previously identified non-spiking neuron was studied in leech and found to alter the mean firing frequency of a crawl-related motoneuron, which fires during the contraction phase of crawling. The findings are valuable and the experiments were diligently done and considered solid. The results lay a foundation for additional studies in this system.
Reviewer #1 (Public review):
The medicinal leech preparation is an amenable system in which to understand how the underlying cellular networks for locomotion function. A previously identified non-spiking neuron (NS) was studied and found to alter the mean firing frequency of a crawl-related motoneuron (DE-3), which fires during the contraction phase of crawling. The data are solid. Identifying upstream neurons responsible for crawl motor patterning is essential for understanding how rhythmic behavior is controlled.
Reviewer #2 (Public review):
This study by Radice et al. takes advantage of the very well-established leech preparation to investigate questions related to motor control, more precisely the question of how the activity of motoneurons taking part in leech crawling behavior is finely tuned.
The paper is overall well written. The findings are clearly presented, and the data seem solid overall.
Author response:
The following is the authors’ response to the previous reviews
Reviewer #1 (Public review):
The medicinal leech preparation is an amenable system in which to understand how the underlying cellular networks for locomotion function. A previously identified non-spiking neuron (NS) was studied and found to alter the mean firing frequency of a crawl-related motoneuron (DE-3), which fires during the contraction phase of crawling. The data are mostly solid. Identifying upstream neurons responsible for crawl motor patterning is essential for understanding how rhythmic behavior is controlled.
Review of Revision:
On a positive note, the rationale for the study is clearer to me now after reading the authors' responses to both reviewers, but that information, as described in the authors' responses, is minimally incorporated into the current revised paper. Incorporating a discussion of previous work on the NS cell has, indeed, improved the paper.
I suggested earlier that the paper be edited for clarity but not much text has been changed since the first draft. I will provide an example of the types of sentences that are confusing. The title of the paper is: "Phase-specific premotor inhibition modulates leech rhythmic motor output". Are the authors referring to the inhibition created by premotor neurons (e.g., on to the motoneurons) or the inhibition that the premotor neurons receive?
In this case, this is an interesting ambiguity: NS is inhibited and that inhibition is directly transmitted to the motoneurons because both cells are electrically coupled. We believe that the title does not disguise the findings conveyed by the manuscript.
I also find the paper still confusing with regard to the suggested "functional homology" with the vertebrate Renshaw cells. When the authors set up this expectation of homology (should be analogy) in the introduction and other sections of the paper, one would assume that the NS cell would be directly receiving excitation from a motoneuron (like DE-3) and, in turn, the motoneuron would then receive some sort of inhibitory input to regulate its firing frequency. Essentially, I have always viewed the Renshaw cells as nature's clever way to monitor the ongoing activity of a motoneuron while also providing recurrent feedback or "recurrent inhibition" to modify that cell's excitatory state. The authors present their initial idea below on line 62. Authors write: "These neurons are present as bilateral pairs in each segmental ganglion and are functional homologs of the mammalian Renshaw cells (Szczupak, 2014). These spinal cord cells receive excitatory inputs from motoneurons and, in turn, transmit inhibitory signals to the motoneurons (Alvarez and Fyffe, 2007)."
We agree with Reviewer #2: the correct term is "analogous," not "homologous." Thanks for pointing this out. We changed the term throughout the text.
The Reviewer is also right in the appreciation of the role of Renshaw cells. NS plays exactly the role that the Reviewer describes. The ONLY difference is that NS is inhibited by the motoneurons and, in turn, transmits this inhibition back to the motoneurons via the rectifying electrical junctions. To address the confusion that our description caused the Reviewer, we have modified the cited sentence accordingly, now in lines 65-67.
Minor note:
I suggest re-writing this last sentence as "these" is confusing. Change to: 'In the spinal cord, Renshaw interneurons receive excitatory inputs from motoneurons and, in turn, transmit inhibitory signals to them (Alvarez and Fyffe, 2007).'
Please, see the changes mentioned above.
Furthermore, the authors note that (line 69 on): "In the context of this circuit the activity of excitatory motoneurons evokes chemically mediated inhibitory synaptic potentials in NS. Additionally, the NS neurons are electrically coupled......In physiological conditions this coupling favors the transmission of inhibitory signals from NS to motoneurons." Based on what is being conveyed here, I see a disconnect with the "functional homology" being presented earlier. I may be missing something, but the Renshaw analogy seems to be quite different compared to what looks like reciprocal inhibition in the leech. If the authors want to make the analogy to Renshaw cells clearer, then they should make a simple ball and stick diagram of the leech system and visually compare it to the Renshaw/motoneuron circuit with regard to functionality. This simple addition would help many readers.
We have simplified the description regarding the Renshaw cell (lines 65-67) to avoid the “details” of the connectivity between the two circuits.
This report focuses on NS neurons and their role in crawling; we mention the analogy with Renshaw cells to widen the interest of the results. We do not think that making a special diagram to compare how the two neurons play a similar role via different connections among the players is useful in the context of this manuscript.
The Abstract, Authors write (line 19), "Specifically, we analyzed how electrophysiological manipulation of a premotor nonspiking (NS) neuron, that forms a recurrent inhibitory circuit (homologous to vertebrate Renshaw cells)...."
First, a circuit would not be homologous to a cell, and the term homology implies a strict developmental/evolutionary commonality. At best, I would use the term functionally analogous but even then I am still not sure that they are functionally that similar (see comments above).
Reviewer #2 is right. We changed the sentence in line 20.
Line 22: "The study included a quantitative analysis of motor units active throughout the fictive crawling cycle that shows that the rhythmic motor output in isolated ganglia mirrors the phase relationships observed in vivo." This sentence must be revised to indicate that not all of the extracellular units were demonstrated to be motor units. Revise to: "The study included a quantitative analysis of identified and putative motor units active throughout the fictive crawling cycle that shows.....'
Line 187 regarding identifying units as motoneurons: Authors write, "While multiple extracellular recordings have been performed previously (Eisenhart et al., 2000), these results (Figure 4) present the first quantitative analysis of motor units activated throughout the crawling cycle in this type of recordings." The authors cannot assume that the units in the recorded nerves belong only to motoneurons. Based on their first rebuttal, the authors seem to be reluctant to accept the idea that the extracellularly recorded units might represent a different class of neurons. They admit that some sensory neurons (with somata located centrally) do, indeed, travel out the same nerves recorded, but go on to explain why they would not be active.
The leech has a variety of sensory organs that are located in the periphery, and some of these sensory neurons do show rhythmic activity correlated with locomotor activity (see Blackshaw's early work). The numerous stretch receptors, in fact, have very large axons that pass through all the nerves recorded in the current paper.
In Fig. 4, it is interesting that the waveforms of all the units recorded in the PP nerve exhibit a reversal in waveform as compared to those in the DP nerve, which might indicate (based on bipolar differential recording) that the units in the PP nerve are being propagated in the opposite direction (i.e., are perhaps afferent). Rhythmic presynaptic inhibition and excitation is commonly seen for stretch receptors within the CNS (see the work of Burrows) and many such cells are under modulatory control.
Most likely, the majority of the units are from motoneurons, but we do not really know at this point. The authors should reframe their statements throughout the paper as: 'While multiple extracellular recordings have been performed previously (Eisenhart et al., 2000), these results (Figure 4) present the first quantitative analysis of multiple extracellular units, using spike sorting methods, which are activated throughout the crawling cycle.' In cases where the identity of the unit is known, then it is fine to state that, but when the identity of the unit is not known, then there should be some qualification and stated as 'putative motor units'
We understand the concern of Reviewer #2 regarding the type of neurons active during dopamine-induced crawling in isolated ganglia. However, we believe there is sufficient evidence to support that the recorded spikes originate from motoneurons. As readers may share the same concern, we have added a paragraph explaining why spikes from somatic sensory neurons such as P or T cells, or from stretch receptors, are unlikely to contribute (lines 206-214). We included the term putative in the abstract.
The Methods section:
Needs to include the full parameters that were used to assess whether bursting activity was qualified in ways to be considered crawling activity or not. Typically, crawl-like burst periods of no more than 25 seconds have been the limit for their qualification as crawling activity. In Fig 2F, for example, the inter-burst period is over 35 seconds; that coupled with an average 5 second burst duration would bring the burst period to 40 seconds, which is substantially out of range for there to be bursting relevant to crawl activity. Simply put, long DE-3 burst periods are often observed but may not be indicative of a crawl state as the CV motoneurons are no longer out of phase with DE-3. A number of papers have adopted this criterion.
We now indicate in the Methods the range of period values measured in our experiments. For the Reviewer's information, we show here histograms depicting the variability of period and duty cycle values recorded in our experiments (control conditions). The Reviewer can see that the bursting activity of DE-3 falls within what has been published.
Author response image 1.
Crawling in isolated ganglia. A. Histogram of periods end-to-end during crawling in isolated ganglia. The dotted line indicates the mean obtained from the averages of all experiments. The solid black line represents the mean of all cycles across all experiments. B. As in A, for the duty cycle calculated using end-to-end periods. (n = 210 cycles from 45 ganglia obtained from 32 leeches in all cases).
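For concreteness, the two quantities in these histograms can be computed from burst onset/offset times roughly as follows. This is a generic sketch, not the authors' analysis code, and the sample times are made up:

```python
# Toy computation of end-to-end burst period and duty cycle from DE-3
# burst onset/offset times (in seconds). Sample values are hypothetical.
burst_onsets = [0.0, 18.5, 36.0, 55.2]
burst_offsets = [4.8, 23.0, 41.1, 60.0]

# End-to-end period: time from one burst offset to the next burst offset.
periods = [b - a for a, b in zip(burst_offsets, burst_offsets[1:])]

# Duty cycle: burst duration divided by the end-to-end period it falls in.
durations = [off - on for on, off in zip(burst_onsets, burst_offsets)]
duty_cycles = [d / p for d, p in zip(durations[1:], periods)]

print(periods)      # approx. [18.2, 18.1, 18.9]
print(duty_cycles)  # approx. [0.25, 0.28, 0.25]
```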
Reviewer #1 (Recommendations for the authors):
Minor comments-
Line 100: "In the frame of the recurrent inhibitory circuit, NS is the target of inhibitory signals". Suggestion: 'Within the framework of the recurrent inhibitory circuit, NS is the target of inhibitory signals.'
Changed as suggested (line 107).
Line 163: "This series of experiments proves that, as predicted based on the known circuit (Figure 164 1C), inhibitory signals onto NS premotor neurons were transmitted to DE-3 motoneurons and counteracted their excitatory drive during crawling, limiting their firing frequency". I think this sentence is too strong plus needs some editing. Suggestion: 'As predicted based on the known circuit (Figure 164 1C), this series of experiments indicates that inhibitory signals onto NS premotor neurons are transmitted to DE-3 motoneurons, thus limiting their firing frequency and counteracting their excitatory drive during crawling."
Changed as suggested.
Lines 86, 292 and 304 and Fig 4 legend: "Different from DE-3, In-Phase units showed a marked decrease in the maximum bFF along time." Suggestion: Replace the word "along" with 'across' time. Also replace those words in the Fig 4 legend and Line 80...."along" (replace with 'across') the different stages of crawling.
Changed as suggested.
Line 311: "bursts and a concurrent inhibitory input via NS (Figure 7). Coherent with this interpretation, the activity level of the Anti- Phase units was not influenced by these inhibitory signals". Suggestion: Replace the word "coherent" with 'consistent'.
Changed as suggested.
Line 332: "...offer the particular advantage of allowing electrical manipulation of individual neurons in wildtype adults," I am unsure what the authors are attempting to convey. Not sure what they mean by "wildtype" in this context and why that would matter.
The term “wildtype” has been eliminated.
We thank Reviewer #2 for the suggested edits to the text.
The real hero is already home because she figured out a faster way to get things done.
Working smarter is more important.
Insurance for your camp
This is included in the camp fee
Surrey Sports Park
Winchester College
12.5
13
Foundation
delete
Foundation
Academy
Surrey Sports Park, located in the picturesque town of Guildford on the University of Surrey campus, is just 40 minutes from London. Since opening, this elite sports complex has established itself as a leading training center in the southeast of England. It has hosted numerous sports teams and high-performance athletes. With state-of-the-art facilities and modern on-site accommodation options, Surrey Sports Park provides an ideal environment for players looking to improve their performance.
Founded in 1382, Winchester College is one of Britain’s oldest and most prestigious independent schools, set within 40 acres of historic grounds in the picturesque town of Winchester. The school has elite-level on-site sports facilities, with immaculate natural grass football pitches and a state-of-the-art sports centre, including a strength and conditioning gym.
With its remarkable architecture and outstanding sporting resources, Winchester College offers an inspiring environment for players on the Performance Camp to take their game to the next level.
Surrey Sports Park
Winchester College
the Chelsea FC Foundation
Chelsea FC
Foundation
delete 'Foundation'
Intensive (29h football)
Total Football (24h Football + 8h Workshops)
Standard with English (18h football + 12.5h English)
Football + English (24h Football + 13h English)
Sports session
Football training
Chelsea FC Foundation Football Camp 2026
Chelsea FC Football Camp 2026
Computation is about functions. Functions are encoded, and code is data, so computation is about data: how data moves, how it transfers across the network, what properties it has, how fault-tolerant that system is. This has vast implications for what our computation can do, what our software does, what our applications do, and therefore what we as humans are capable of doing.
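The "code is data" step can be made concrete in a few lines: a function's compiled body is just bytes that can be shipped over a network and rebuilt on the other side. This is a toy Python illustration, assuming both ends run the same CPython version (marshal bytecode is not portable across versions):

```python
# Toy illustration that code is data: serialize a function's code object,
# pretend to move the bytes across a network, and rebuild the function.
import marshal
import types

def double(x):
    return 2 * x

wire_bytes = marshal.dumps(double.__code__)  # the computation, as plain bytes

# ...the bytes travel over a network; the properties of that transfer
# (latency, integrity, fault tolerance) now bound what the computation
# can do on the far side...

rebuilt = types.FunctionType(marshal.loads(wire_bytes), globals(), "double")
assert rebuilt(21) == 42
```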
how we move that computation really matters in a very deep ethical way
for - computation - how we move computation matters - deep ethical implications


eLife Assessment
This important Research Advance builds on the authors' previous work delineating the roles of the rodent perirhinal cortex and the basolateral amygdala in first- and second-order learning. The convincing results show that serial exposure of non-motivationally relevant stimuli influences how those stimuli are encoded within the perirhinal cortex and basolateral amygdala when paired with a shock. This manuscript will be interesting for researchers in cognitive and behavioral neuroscience.
Reviewer #1 (Public review):
Summary:
This study advances the lab's growing body of evidence exploring higher-order learning and its neural mechanisms. They recently found that NMDA receptor activity in the perirhinal cortex was necessary for integrating stimulus-stimulus associations with stimulus-shock associations (mediated learning) to produce preconditioned fear, but it was not necessary for forming stimulus-shock associations. On the other hand, basolateral amygdala NMDA receptor activity is required for forming stimulus-shock memories. Based on these facts, the authors assessed: 1. why the perirhinal cortex is necessary for mediated learning but not direct fear learning and 2. the determinants of perirhinal cortex versus basolateral amygdala necessity for forming direct versus indirect fear memories. The authors used standard sensory preconditioning and variants designed to manipulate the novelty and temporal relationship between stimuli and shock and, therefore, the attentional state under which associative information might be processed. Under experimental conditions where information would presumably be processed primarily in the periphery of attention (temporal distance between stimulus/shock or stimulus pre-exposure), perirhinal cortex NMDA receptor activation was required for learning indirect associations. On the other hand, when information would likely be processed in focal attention (novel stimulus contiguous with shock), basolateral amygdala NMDA activity was required for learning direct associations. Together, the findings indicate that the perirhinal cortex and basolateral amygdala subserve peripheral and focal attention, respectively. The authors provide support for their conclusions using careful, hypothesis-driven experimental design, rigorous methods, and integrating their findings with the relevant literature on learning theory, information processing, and neurobiology. Therefore, this work will be highly interesting to several fields.
Strengths:
(1) The experiments were carefully constructed and designed to test hypotheses that were rooted in the lab's previous work, in addition to established learning theory and information processing background literature.
(2) There are clear predictions and alternative outcomes. The provided table does an excellent job of condensing and enhancing the readability of a large amount of data.
(3) In a broad sense, attention states are a component of nearly every behavioral experiment. Therefore, identifying their engagement by dissociable brain areas and under different learning conditions is an important area of research.
(4) The authors clearly note where they replicated their own findings, report full statistical measures, effect sizes, and confidence intervals, indicating the level of scientific rigor.
(5) The findings raise questions for future experiments that will further test the authors' hypotheses; this is well discussed.
Reviewer #2 (Public review):
This paper continues the authors' research on the roles of the basolateral amygdala (BLA) and the perirhinal cortex (PRh) in sensory preconditioning (SPC) and second-order conditioning (SOC). In this manuscript, the authors explore how prior exposure to stimuli may influence which regions are necessary for conditioning to the second-order cue (S2). The authors perform a series of experiments which first confirm prior results shown by the authors - that NMDA receptors in the PRh are necessary in SPC during conditioning of the first-order cue (S1) with shock to allow for freezing to S2 at test; and that NMDA receptors in the BLA are necessary for S1 conditioning during the S1-shock pairings. The authors then set out to test the hypothesis that the PRh encodes associations in a peripheral state of attention whereas the BLA encodes associations in a focal state of attention, similar to the A1 and A2 states in Wagner's theory of SOP. To do this, they show that the BLA is necessary for conditioning to S2 when S2 is first exposed during a serial compound procedure - S2-S1-shock. To determine whether pre-exposure of S2 will shift S2 to a peripheral state, the authors run a design in which S2-S1 presentations are given prior to the serial compound phase. The authors show that this makes NMDA receptor activity within the PRh necessary for the fear response to S2 at test. They then test whether the presence of S1 during the serial compound conditioning allows the PRh to support the fear responses to S2 by introducing a delay conditioning paradigm in which S1 is no longer present. The authors find that the PRh is no longer required and suggest that this is due to S2 remaining in the primary focal state.
Strengths:
As with their earlier work, the authors have performed a rigorous series of experiments to better understand the roles of the BLA and PRh in the learning of first- and second-order stimuli. The experiments are well-designed and clearly presented, and the results show definitive differences in functionality between the PRh and BLA. The first experiment confirms earlier findings from the lab (and others), and the authors then build on their previous work to more deeply reveal how these regions differ in how they encode associations between stimuli. The authors have done a commendable job on pursuing these questions.
Table 1 is an excellent way to highlight the results and provide the reader with a quick look-up table of the findings.
Reviewer #3 (Public review):
Summary:
This manuscript presents a series of experiments that further investigate the roles of the BLA and PRH in sensory preconditioning, with a particular focus on understanding their differential involvement in the association of S1 and S2 with shock.
Strengths:
The motivation for the study is clearly articulated, and the experimental designs are thoughtfully constructed. I especially appreciate the inclusion of Table 1, which makes the designs easy to follow. The results are clearly presented, and the statistical analyses are rigorous.
During the revision, the authors have adequately addressed my minor suggestions from the original version.
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public review):
Summary:
This study advances the lab's growing body of evidence exploring higher-order learning and its neural mechanisms. They recently found that NMDA receptor activity in the perirhinal cortex was necessary for integrating stimulus-stimulus associations with stimulus-shock associations (mediated learning) to produce preconditioned fear, but it was not necessary for forming stimulus-shock associations. On the other hand, basolateral amygdala NMDA receptor activity is required for forming stimulus-shock memories. Based on these facts, the authors assessed: (1) why the perirhinal cortex is necessary for mediated learning but not direct fear learning, and (2) the determinants of perirhinal cortex versus basolateral amygdala necessity for forming direct versus indirect fear memories. The authors used standard sensory preconditioning and variants designed to manipulate the novelty and temporal relationship between stimuli and shock and, therefore, the attentional state under which associative information might be processed. Under experimental conditions where information would presumably be processed primarily in the periphery of attention (temporal distance between stimulus/shock or stimulus pre-exposure), perirhinal cortex NMDA receptor activation was required for learning indirect associations. On the other hand, when information would likely be processed in focal attention (novel stimulus contiguous with shock), basolateral amygdala NMDA activity was required for learning direct associations. Together, the findings indicate that the perirhinal cortex and basolateral amygdala subserve peripheral and focal attention, respectively. The authors provide support for their conclusions using careful, hypothesis-driven experimental design, rigorous methods, and integrating their findings with the relevant literature on learning theory, information processing, and neurobiology. Therefore, this work will be highly interesting to several fields.
Strengths:
(1) The experiments were carefully constructed and designed to test hypotheses that were rooted in the lab's previous work, in addition to established learning theory and information processing background literature.
(2) There are clear predictions and alternative outcomes. The provided table does an excellent job of condensing and enhancing the readability of a large amount of data.
(3) In a broad sense, attention states are a component of nearly every behavioral experiment. Therefore, identifying their engagement by dissociable brain areas and under different learning conditions is an important area of research.
(4) The authors clearly note where they replicated their own findings, report full statistical measures, effect sizes, and confidence intervals, indicating the level of scientific rigor.
(5) The findings raise questions for future experiments that will further test the authors' hypotheses; this is well discussed.
Weaknesses:
As a reader, it is difficult to interpret how first-order fear could be impaired while preconditioned fear is intact; it requires a bit of "reading between the lines".
We appreciate the Reviewer’s point and have attempted to address on lines 55-63 of the revised paper: “In a recent pair of studies, we extended these findings in two ways. First, we showed that S1 does not just form an association with shock in stage 2; it also mediates an association between S2 and the shock. Thus, S2 enters testing in stage 3 already conditioned, able to elicit fear responses (Wong et al., 2019). Second, we showed that this mediated S2-shock association requires NMDAR-activation in the PRh, as well as communication between the PRh and BLA (Wong et al., 2025). These findings raise two critical questions: 1) why is the PRh engaged for mediated conditioning of S2 but not for direct conditioning of S1; and 2) more generally, what determines whether the BLA and/or PRh is engaged for conditioning of the S1 and/or S2?”
Reviewer #2 (Public review):
Summary:
This paper continues the authors' research on the roles of the basolateral amygdala (BLA) and the perirhinal cortex (PRh) in sensory preconditioning (SPC) and second-order conditioning (SOC). In this manuscript, the authors explore how prior exposure to stimuli may influence which regions are necessary for conditioning to the second-order cue (S2). The authors perform a series of experiments which first confirm prior results shown by the authors - that NMDA receptors in the PRh are necessary in SPC during conditioning of the first-order cue (S1) with shock to allow for freezing to S2 at test; and that NMDA receptors in the BLA are necessary for S1 conditioning during the S1-shock pairings. The authors then set out to test the hypothesis that the PRh encodes associations in a peripheral state of attention, whereas the BLA encodes associations in a focal state of attention, similar to the A1 and A2 states in Wagner's theory of SOP. To do this, they show that the BLA is necessary for conditioning to S2 when S2 is first exposed during a serial compound procedure - S2-S1-shock. To determine whether pre-exposure of S2 will shift S2 to a peripheral state, the authors run a design in which S2-S1 presentations are given prior to the serial compound phase. The authors show that this makes NMDA receptor activity within the PRh necessary for the fear response to S2 at test. They then test whether the presence of S1 during the serial compound conditioning allows the PRh to support the fear responses to S2 by introducing a delay conditioning paradigm in which S1 is no longer present. The authors find that the PRh is no longer required and suggest that this is due to S2 remaining in the primary focal state.
Strengths:
As with their earlier work, the authors have performed a rigorous series of experiments to better understand the roles of the BLA and PRh in the learning of first- and second-order stimuli. The experiments are well-designed and clearly presented, and the results show definitive differences in functionality between the PRh and BLA. The first experiment confirms earlier findings from the lab (and others), and the authors then build on their previous work to more deeply reveal how these regions differ in how they encode associations between stimuli. The authors have done a commendable job of pursuing these questions.
Table 1 is an excellent way to highlight the results and provide the reader with a quick look-up table of the findings.
Weaknesses:
The authors have attempted to resolve the question of the roles of the PRh and BLA in SPC and SOC, which the authors have explored in previous papers. Laudably, the authors have produced substantial results indicating how these two regions function in the learning of first- and second-order cues, providing an opportunity to narrow in on possible theories for their functionality. Yet the authors have framed this experiment in terms of an attentional framework and have argued that the results support this particular framework and hypothesis - that the PRh encodes peripheral and the BLA encodes focal states of learning. This certainly seems like a viable and exciting hypothesis, yet I don't see why the results have been completely framed and interpreted this way. It seems to me that there are still some alternative interpretations that are plausible and should be included in the paper.
We appreciate the Reviewer’s point and have attempted to address it on lines 566-594 of the Discussion: “An additional point to consider in relation to Experiments 3A, 3B, 4A and 4B is the level of surprise that rats experienced following presentations of the familiar S2 in stage 2. Specifically, in Experiments 3A and 3B, S2 was followed by the expected S1 (low surprise) and its conditioning required activation of NMDA receptors in the PRh and not the BLA. By contrast, in Experiments 4A and 4B, S2 was followed by omission of the expected S1 (high surprise) and its conditioning required activation of NMDA receptors in the BLA and not the PRh. This raises the possibility that surprise, or prediction error, also influences the way that S2 is processed in focal and peripheral states of attention. When prediction error is low, S2 is processed in the peripheral state of attention: hence, learning under these circumstances requires NMDA receptor activation in the PRh and not the BLA. By contrast, when prediction error is high, S2 is preserved in the focal state of attention: hence, learning under these circumstances requires NMDA receptor activation in the BLA and not the PRh. The impact of prediction error on the processing of S2 could be assessed using two types of designs. In the first design, rats are pre-exposed to S2-S1 pairings in stage 1 and this is followed by S2-S3-shock pairings in stage 2. The important feature of this design is that, in stage 2, the S2 is followed by surprise in omission of S1 and presentation of S3. Thus, if a large prediction error maintains processing of the familiar S2 in the BLA, we might expect that its conditioning in this design would require NMDA receptor activation in the BLA (in contrast to the results of Experiment 3B) and no longer require NMDA receptor activation in the PRh (in contrast to the results of Experiment 3A). In the second design, rats are pre-exposed to S2 alone in stage 1 and this is followed by S2-[trace]-shock pairings in stage 2. The important feature of this design is that, in stage 2, the S2 is not followed by the surprising omission of any stimulus. Thus, if a small prediction error shifts processing of the familiar S2 to the PRh, we might expect that its conditioning in this design would no longer require NMDA receptor activation in the BLA (in contrast to the results of Experiment 4B) but, instead, require NMDA receptor activation in the PRh (in contrast to the results of Experiment 4A). Future studies will use both designs to determine whether prediction error influences the processing of S2 in the focus versus periphery of attention and, thereby, whether learning about this stimulus requires NMDA receptor activation in the BLA or PRh.”
Reviewer #3 (Public review):
Summary:
This manuscript presents a series of experiments that further investigate the roles of the BLA and PRH in sensory preconditioning, with a particular focus on understanding their differential involvement in the association of S1 and S2 with shock.
Strengths:
The motivation for the study is clearly articulated, and the experimental designs are thoughtfully constructed. I especially appreciate the inclusion of Table 1, which makes the designs easy to follow. The results are clearly presented, and the statistical analyses are rigorous. My comments below mainly concern areas where the writing could be improved to help readers more easily grasp the logic behind the experiments.
Weaknesses:
(1) Lines 56-58: The two previous findings should be more clearly summarized. Specifically, it's unclear whether the "mediated S2-shock" association occurred during Stage 2 or Stage 3. I assume the authors mean Stage 2, but Stage 2 alone would not yet involve "fear of S2," making this expression a bit confusing.
We apologise for the confusion and have revised the summary of our previous findings on lines 55-63. The revised text now states: “In a recent pair of studies, we extended these findings in two ways. First, we showed that S1 does not just form an association with shock in stage 2; it also mediates an association between S2 and the shock. Thus, S2 enters testing in stage 3 already conditioned, able to elicit fear responses (Wong et al., 2019). Second, we showed that this mediated S2-shock association requires NMDAR-activation in the PRh, as well as communication between the PRh and BLA (Wong et al., 2025). These findings raise two critical questions: 1) why is the PRh engaged for mediated conditioning of S2 but not for direct conditioning of S1; and 2) more generally, what determines whether the BLA and/or PRh is engaged for conditioning of the S1 and/or S2?”
(2) Line 61: The phrase "Pavlovian fear conditioning" is ambiguous in this context. I assume it refers to S1-shock or S2-shock conditioning. If so, it would be clearer to state this explicitly.
Apologies for the ambiguity - we have omitted the term “Pavlovian”, which may have been the source of confusion. The revised text on lines 60-63 now states: “These findings raise two critical questions: 1) why is the PRh engaged for mediated conditioning of S2 but not for direct conditioning of S1; and 2) more generally, what determines whether the BLA and/or PRh is engaged for conditioning of the S1 and/or S2?”
(3) Regarding the distinction between having or not having Stage 1 S2-S1 pairings, is "novel vs. familiar" the most accurate way to frame this? This terminology could be misleading, especially since one might wonder why S2 couldn't just be presented alone on Stage 1 if novelty is the critical factor. Would "outcome relevance" or "predictability" be more appropriate descriptors? If the authors choose to retain the "novel vs. familiar" framing, I suggest providing a clear explanation of this rationale before introducing the predictions around Line 118.
We have incorporated the suggestion regarding “predictability” while also retaining “novelty” as follows.
L76-85: “For example, different types of arrangements may influence the substrates of conditioning to S2 by influencing its novelty and/or its predictive value at the time of the shock, on the supposition that familiar stimuli are processed in the periphery of attention and, thereby, the PRh (Bogacz & Brown, 2003; Brown & Banks, 2015; Brown & Bashir, 2002; Martin et al., 2013; McClelland et al., 2014; Morillas et al., 2017; Murray & Wise, 2012; Robinson et al., 2010; Suzuki & Naya, 2014; Voss et al., 2009; Yang et al., 2023) whereas novel stimuli are processed in the focus of attention and, thereby, the amygdala (Holmes et al., 2018; Qureshi et al., 2023; Roozendaal et al., 2006; Rutishauser et al., 2006; Schomaker & Meeter, 2015; Wright et al., 2003).”
L116-120: “Subsequent experiments then used variations of this protocol to examine whether the engagement of NMDAR in the PRh or BLA for Pavlovian fear conditioning is influenced by the novelty/predictive value of the stimuli at the time of the shock (second implication of theory) as well as their distance or separation from the shock (third implication of theory; Table 1).”
(4) Line 121: This statement should refer to S1, not S2.
(5) Line 124: This one should refer to S2, not S1.
We have checked the text on these lines for errors and confirmed that the statements are correct. The lines encompassing this text (L121-130) are reproduced here for convenience:
(1) When rats are exposed to novel S2-S1-shock sequences, conditioning of S2 and S1 will be disrupted by a DAP5 infusion into the BLA but not into the PRh (Experiments 2A and 2B);
(2) When rats are exposed to S2-S1 pairings and then to S2-S1-shock sequences, conditioning of S2 will be disrupted by a DAP5 infusion into the PRh but not the BLA whereas conditioning of S1 will be disrupted by a DAP5 infusion into the BLA not the PRh (Experiments 3A and 3B);
(3) When rats are exposed to S2-S1 pairings and then to S2 (trace)-shock pairings, conditioning of S2 will be disrupted by a DAP5 infusion into the BLA not the PRh (Experiments 4A and 4B).
(6) Additionally, the rationale for Experiment 4 is not introduced before the Results section. While it is understandable that Experiment 4 functions as a follow-up to Experiment 3, it would be helpful to briefly explain the reasoning behind its inclusion.
Experiment 4 follows from the results obtained in Experiment 3 and, as noted, the reasoning for its inclusion is provided locally in its introduction. We attempted to flag this experiment earlier in the general introduction to the paper, but this came at the cost of clarity in the overall story. As such, our revised paper retains the local introduction to this experiment. It is reproduced here for convenience:
“In Experiments 3A and 3B, conditioning of the pre-exposed S1 required NMDAR-activation in the BLA and not the PRh; whereas conditioning of the pre-exposed S2 required NMDAR-activation in the PRh and not the BLA. We attributed these findings to the fact that the pre-exposed S2 was separated from the shock by S1 during conditioning of the S2-S1-shock sequences in stage 2: hence, at the time of the shock, S2 was no longer processed in the focal state of attention supported by the BLA; instead, it was processed in the peripheral state of attention supported by the PRh.
“Experiments 4A and 4B employed a modification of the protocol used in Experiments 3A and 3B to examine whether a pre-exposed S1 influences the processing of a pre-exposed S2 across conditioning with S2-S1-shock sequences. The design of these experiments is shown in Figure 4A. Briefly, in each experiment, two groups of rats were exposed to a session of S2-S1 pairings in stage 1 and, 24 hours later, a session of S2-[trace]-shock pairings in stage 2, where the duration of the trace interval was equivalent to that of S1 in the preceding experiments. Immediately prior to the trace conditioning session in stage 2, one group in each experiment received an infusion of DAP5 or vehicle only into either the PRh (Experiment 4A) or BLA (Experiment 4B). Finally, all rats were tested with presentations of the S2 alone in stage 3. If the substrates of conditioning to S2 are determined only by the amount of time between presentations of this stimulus and foot shock in stage 2, the results obtained in Experiments 4A and 4B should be the same as those obtained in Experiments 3A and 3B: acquisition of freezing to S2 will require activation of NMDARs in the PRh and not the BLA. If, however, the presence of S1 in the preceding experiments (Experiments 3A and 3B) accelerated the rate at which processing of S2 transitioned from the focus of attention to its periphery, the results obtained in Experiments 4A and 4B will differ from those obtained in Experiments 3A and 3B. That is, in contrast to the preceding experiments where acquisition of freezing to S2 required NMDAR-activation in the PRh and not the BLA, here acquisition of freezing to S2 should require NMDAR-activation in the BLA but not the PRh.”
Reviewer #1 (Recommendations for the authors):
I greatly enjoyed reading and reviewing this manuscript, and so I only have boilerplate recommendations.
(1) I might add a couple of sentences discussing how/why preconditioned fear could be intact while first-order fear is impaired. Of course, if I am understanding the authors' interpretation correctly, the reason is that peripheral processing is still intact even when BLA NMDA receptors are blocked, and so mediated conditioning still occurs. Does this mean that mediated conditioning does not require learning the first-order relationship, and that they occur in parallel? Perhaps I just missed this, but I cannot help but wonder whether/how the psychological processes at play might change when first-order learning is impaired, so a discussion of this would be greatly appreciated.
As noted above, we have revised the general introduction (around lines 55-59) to clarify that the direct S1-shock and mediated S2-shock associations form in parallel. Hence, manipulations that disrupt first-order fear to the S1 (such as a BLA infusion of the NMDA receptor antagonist, DAP5) do not automatically disrupt the expression of sensory preconditioned fear to the S2.
(2) Adding to the above - does the SOP or another theory predict serial vs parallel information flow from focal state to peripheral, or perhaps it is both to some extent?
SOP predicts both serial and parallel processing of information in its focal and peripheral states. That is, some proportion of the elements that comprise a stimulus may decay from the focal state of attention to the periphery (serial processing); hence, at any given moment, the elements that comprise a stimulus can be represented in both focal and peripheral states (parallel processing).
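To make the serial/parallel distinction concrete, here is a minimal simulation sketch of SOP's stimulus-element dynamics (our own illustration, not the authors' implementation; the decay probabilities are hypothetical):

```python
import numpy as np

# Minimal sketch of stimulus-element dynamics in Wagner's SOP model.
# Each element of a stimulus occupies one of three states:
#   A1 = focal state, A2 = peripheral state, I = inactive.
# Individual elements move serially A1 -> A2 -> I, but because decay is
# probabilistic, the stimulus as a whole is represented in A1 and A2
# at the same time (parallel processing).

rng = np.random.default_rng(0)
n_elements = 1000
pd1, pd2 = 0.10, 0.02   # hypothetical decay probabilities: A1->A2, A2->I

A1, A2, I = 0, 1, 2
state = np.full(n_elements, A1)   # stimulus onset: sampled elements enter A1

for t in range(61):
    if t % 20 == 0:
        print(f"t={t:2d}  A1={np.mean(state == A1):.2f}  "
              f"A2={np.mean(state == A2):.2f}  I={np.mean(state == I):.2f}")
    decay1 = (state == A1) & (rng.random(n_elements) < pd1)
    decay2 = (state == A2) & (rng.random(n_elements) < pd2)
    state[decay1] = A2
    state[decay2] = I
```

Running the sketch shows nonzero proportions in both A1 and A2 at intermediate times, which is the sense in which serial element-wise decay yields parallel representation of the stimulus.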
Given the nature of the designs and tools used in the present study (between-subject assessment of a DAP5 effect in the BLA or PRh), we selected parameters that would maximize the processing of the S2 and S1 stimuli in one or the other state of activation; hence the results of the present study. We are currently examining the joint processing of stimulus elements across focal and peripheral states using simultaneous recordings of activity in the BLA and PRh. These recordings are collected from rats trained in the different stages of a within-subject sensory preconditioning protocol. The present study created the basis for this work, which will be published separately in due course.
(3) The organization of PRh vs BLA is nice and consistent across each figure, but I would suggest adding any kind of additional demarcation beyond the colors and text, maybe just more space between AB / CD. The figure text indicating PRh/BLA is a bit small.
Thank you for the suggestion – we have added more space between the top and bottom panels of the figure.
(4) Line 496 typo ..."in the BLA but not the BLA".
Apologies for the typo - this has been corrected.
Reviewer #2 (Recommendations for the authors):
I found the experiments to be extremely well-designed and the results convincing and exciting. The hypothesis that focal and peripheral states of attention are encoded by the BLA and PRh, respectively, is enticing, yet as indicated in the public review, this does not seem to be the only possible interpretation. This is my only serious comment for the authors.
(1) I think it would be worth reframing the article slightly to give credence to alternative hypotheses. Not to say that the authors' intriguing hypothesis shouldn't be an integral part of the introduction, but no alternatives are mentioned. In experiment 2, could the fact that S2 is already a predictor of S1 not block new learning to S2? In the framework of stimulus-stimulus associations, there would be no surprise in the serial-compound stage of conditioning at the onset of S1. This may prevent direct learning of the S2-shock association within the BLA. This type of association may well fall under the peripheral/focal theory, but I don't think it's necessary to frame this possibility in terms of a peripheral/focal theory. To build on this alternative interpretation, the absence of S1 in experiment 4 may induce a prediction error (S2 predicts S1, but it's omitted), which could support learning by S2. The peripheral and focal states appear to correspond to A2 and A1 in SOP extremely well, and I think drawing this connection would potentially add interest and support. If the authors do intend to make the paper a strong argument for their hypothesis, perhaps a few additional experiments may be introduced. If the novelty of S2 is critical for S2 not to be processed in a focal state during the serial-compound stage, could pre-exposure of S2 alone allow for dependence of S2-shock learning on the PRh? Assuming this is what the authors would predict, this might disentangle the S-S theory mentioned above from the peripheral/focal theory. Or perhaps run an experiment with S2-X in stage 1 and S2-S1-shock in stage 2? This said, I think the experiments are more than sufficient for an exciting paper as is, and I don't think running additional experiments is necessary. I would only argue for this if the authors make a hard claim about the peripheral/focal theory, as is the case for the way the paper is currently written.
We appreciate the reviewer’s excellent point and suggestions. We have included an additional paragraph in the Discussion on page 24 (lines 566-594); it is reproduced in full in our response to the Public review above. Briefly, the paragraph considers the level of surprise, or prediction error, that rats experienced following presentations of the familiar S2 in stage 2, and describes two future designs - one involving S2-S3-shock pairings after S2-S1 pre-exposure, the other involving pre-exposure to S2 alone followed by S2-[trace]-shock pairings - that will determine whether prediction error influences the processing of S2 in the focus versus periphery of attention and, thereby, whether learning about this stimulus requires NMDA receptor activation in the BLA or PRh.
(3) I was surprised the authors didn't frame their hypothesis more in terms of Wagner's SOP model. It was minimally mentioned in the introduction, and I think it would add interest and support to the authors' theory if it were included more in the introduction. I was wondering whether the authors may have avoided this framing to avoid an expectation for modeling SOP in their design. If this were the case, I think the paper stands on its own without modeling, and at least for myself, a comparison to SOP would not require modeling of SOP. If this was the authors' concern for avoiding it, I would suggest to the authors that they need not be concerned about it.
We appreciate the endorsement of Wagner’s SOP theory as a nice way of framing our results. We are currently working on a paper in which we use simulations to show how Wagner’s theory can accommodate the present findings as well as others in the literature on sensory preconditioning. For this reason, we have not changed the current paper in relation to this point.
eLife Assessment
This study presents an important new approach to quantifying parsimony preferences in human inference. The work provides convincing evidence that humans are sensitive to specific formalizations of parsimony, such as the dimensionality of perceptual shapes. The work is considered timely, well-written, and technically sophisticated, effectively bridging concepts from statistical inference and human decision-making.
Reviewer #1 (Public review):
I have to preface my evaluation with a disclosure that I lack the mathematical expertise to fully assess what seems to be the authors' main theoretical contribution. I am providing this assessment to the best of my ability, but I cannot substitute for a reviewer with more advanced mathematical/physical training.
Summary:
This paper describes a new theoretical framework for measuring parsimony preferences in human judgments. The authors derive four metrics that they associate with parsimony (dimensionality, boundary, volume, and robustness) and measure whether human adults are sensitive to these metrics. In two tasks, adults had to choose which of two flower beds a statistical sample was generated from, with or without explicit instruction to choose the flower bed perceptually closest to the sample. The authors conduct extensive statistical analyses showing that humans are sensitive to most of the derived quantities, even when the instructions encouraged participants to choose only based on perceptual distance. The authors complement their study with a computational neural network model that learns to make judgments about the same stimuli with feedback. They show that the computational model is sensitive to the tasks communicated by feedback and only uses the parsimony-associated metrics when feedback trains it to do so.
Strengths:
(1) The paper derives and applies new mathematical quantities associated with parsimony. The mathematical rigor is very impressive and is much more extensive than in most other work in the field, where studies often adopt only one metric (such as the number of causes or parameters). These formal metrics can be very useful for the field.
(2) The studies are preregistered, and the statistical analyses are strong.
(3) The computational model complements the behavioral findings, showing that the derived quantities are not simply equivalent to maximum-likelihood inference in the task.
(4) The speculations in the discussion section (e.g., the idea that human sensitivity is driven by the computational demands each metric requires) are intriguing and could usefully guide future work.
Weaknesses:
(1) The paper is very hard to understand. Many of the key details of the derived metrics are in the appendix, with very little accessible explanation in the main text. The figures helped me understand the metrics somewhat, although I am still not sure how some of them (such as boundary or robustness as measured here) are linked to parsimony. I understand that this is addressed by the derivations in the appendix, but as a computational cognitive scientist, I would have benefited from more accessible explanations. Important aspects of the human studies are also missing from the main text, such as the sample size for Experiment 2.
(2) It is not fully clear whether the sensitivity of human participants to some of the quantities convincingly reported here actually means that participants preferred shapes according to the corresponding aspect of parsimony. The title and framing suggest that parsimony "guides" human decision-making, which may lead readers to conclude that humans prefer more parsimonious shapes. I am not sure the sensitivity findings alone support this framing, but it might just be my misunderstanding of the analyses.
(3) The stimulus set included only four combinations of shapes, each designed to diagnostically target one of the theoretical quantities. It is unclear whether the results are robust or specific to these particular 4 stimuli.
(4) The study is framed as measuring "decision-making," but the task resembles statistical inference (e.g., which shape generated the data) or perceptual judgment. This is a minor point since "decision-making" is not well defined in the literature, yet the current framing in the title gave me the initial impression that humans would be making preference choices and learning about them over time with feedback.
Reviewer #2 (Public review):
This manuscript presents a sophisticated investigation into the computational mechanisms underlying human decision-making, and it presents evidence for a preference for simpler explanations (Occam's razor). The authors dissect the simplicity bias into four different components, and they design experiments to target each of them by presenting choices whose underlying models differ only in one of these components. In the learning tasks, participants must infer a "law" (a logical rule) from observed data in a way that operationalizes the process of scientific reasoning in a controlled laboratory setting. The tasks are complex enough to be engaging but simple enough to allow for precise computational modeling.
As a novel feature, the authors derive an additional term in the expansion of the log-evidence, which arises from boundary terms. This is combined with a choice model, which is the one that is tested in experiments. Experiments are run, both with humans and with artificial intelligence agents, showing that humans have an enhanced preference for simplicity as compared to artificial neural networks.
Overall, the work is well written, interesting, and timely, bridging concepts in statistical inference and human decision making. Although technical details are rather elaborate, my understanding is that they represent the state of the art.
I have only one main comment that I think deserves further discussion. Computing the complexity penalty of models may be hard. It is unlikely that humans can perform such a calculation on the fly. As the authors discuss in the final section, while the dimensionality term may be easier to compute, others (e.g., the volume term, which requires an integral) may be considerably harder to compute (it is true that they should be computed once and for all for each task, but still...). I wonder why the sensitivity of human decision making differs so much across the different terms, and in particular whether it aligns with computational simplicity, or with the possibility of approximating each term by simple heuristics. Indeed, the sensitivity to the volume term is significantly and systematically lower than that of other terms. I wonder whether this relation could be made more quantitative using neural networks, using as a proxy of computational hardness the number of samples needed to reach a given error level in learning each of these terms.
Reviewer #3 (Public review):
Summary:
This is a very interesting paper that documents how humans use a variety of factors that penalize model complexity and integrate over a possible set of parameters within each model. By comparison, trained neural networks also use these biases, but only on tasks where model selection was part of the reward structure. In the situation where training emphasizes maximum-likelihood decisions, neural networks, but not humans, were able to adapt their decision-making. Humans continue to use model-integration simplicity biases.
Strengths:
This study used a pre-registered plan for analyzing human data, which exceeds the standards of other current studies.
The results are technically correct.
Weaknesses:
The presentation of the results could be improved.
Author response:
Reviewer #1 (Public review)
I have to preface my evaluation with a disclosure that I lack the mathematical expertise to fully assess what seems to be the authors' main theoretical contribution. I am providing this assessment to the best of my ability, but I cannot substitute for a reviewer with more advanced mathematical/physical training.
Summary:
This paper describes a new theoretical framework for measuring parsimony preferences in human judgments. The authors derive four metrics that they associate with parsimony (dimensionality, boundary, volume, and robustness) and measure whether human adults are sensitive to these metrics. In two tasks, adults had to choose which of two flower beds a statistical sample was generated from, with or without explicit instruction to choose the flower bed perceptually closest to the sample. The authors conduct extensive statistical analyses showing that humans are sensitive to most of the derived quantities, even when the instructions encouraged participants to choose only based on perceptual distance. The authors complement their study with a computational neural network model that learns to make judgments about the same stimuli with feedback. They show that the computational model is sensitive to the tasks communicated by feedback and only uses the parsimony-associated metrics when feedback trains it to do so.
Strengths:
(1) The paper derives and applies new mathematical quantities associated with parsimony. The mathematical rigor is very impressive and is much more extensive than in most other work in the field, where studies often adopt only one metric (such as the number of causes or parameters). These formal metrics can be very useful for the field.
(2) The studies are preregistered, and the statistical analyses are strong.
(3) The computational model complements the behavioral findings, showing that the derived quantities are not simply equivalent to maximum-likelihood inference in the task.
(4) The speculations in the discussion section (e.g., the idea that human sensitivity is driven by the computational demands each metric requires) are intriguing and could usefully guide future work.
Weaknesses:
(1) The paper is very hard to understand. Many of the key details of the derived metrics are in the appendix, with very little accessible explanation in the main text. The figures helped me understand the metrics somewhat, although I am still not sure how some of them (such as boundary or robustness as measured here) are linked to parsimony. I understand that this is addressed by the derivations in the appendix, but as a computational cognitive scientist, I would have benefited from more accessible explanations. Important aspects of the human studies are also missing from the main text, such as the sample size for Experiment 2.
(2) It is not fully clear whether the sensitivity of human participants to some of the quantities convincingly reported here actually means that participants preferred shapes according to the corresponding aspect of parsimony. The title and framing suggest that parsimony "guides" human decision-making, which may lead readers to conclude that humans prefer more parsimonious shapes. I am not sure the sensitivity findings alone support this framing, but it might just be my misunderstanding of the analyses.
(3) The stimulus set included only four combinations of shapes, each designed to diagnostically target one of the theoretical quantities. It is unclear whether the results are robust or specific to these particular 4 stimuli.
(4) The study is framed as measuring "decision-making," but the task resembles statistical inference (e.g., which shape generated the data) or perceptual judgment. This is a minor point since "decision-making" is not well defined in the literature, yet the current framing in the title gave me the initial impression that humans would be making preference choices and learning about them over time with feedback.
We are grateful for the supportive comments highlighting the rigor of our experimental design and data analysis. The Reviewer lists four points under “weaknesses”, to which we reply below.
(1) The paper is very hard to understand
In the revised version of the paper, we will expand the main text to include a more detailed and intuitive description of the terms of the Fisher Information Approximation, in particular clarifying the interpretation of the robustness and boundary terms as aspects of parsimony. We will also include details that are currently given only in the Methods, such as the sample size for the second experiment.
(2) Sensitivity of human participants
We do argue, and believe, that our data show that people tend to prefer simpler shapes. However, giving a well-posed definition of "preference" in this context turns out to be nontrivial.
At the very least, any statement such as "people prefer shape A over B" should be qualified with something like “when the distance of the data from both shapes is the same.” In other words, one should control for goodness-of-fit. Even before making any reference to our behavioral model, this phenomenon (a preference for the simpler model when goodness of fit is matched between models) is visible in Figure 3a, where the effective decision boundary used by human participants is closer to the more complex model than the cyan line representing the locus of points with equal goodness of fit under the two models (or equivalently, with the same Euclidean distance from the two shapes). The goal of our theory and our behavioral model is precisely to systematize this sort of control, extending it beyond just goodness-of-fit and allowing us to control simultaneously for multiple features of model complexity that may affect human behavior in different ways. In other words, it allows us not only to ask whether people prefer shape A over B after controlling for the distance of the data to the shapes, but also to understand to what extent this preference is driven by important geometrical features such as dimensionality, volume, curvature, and boundaries of the shapes. More specifically, and importantly, our theory makes it possible to measure the strength of the preference, rather than merely asserting its existence. In our modeling framework, the existence of a preference for simpler shapes is captured by the fact that the estimated sensitivities to the complexity penalties are positive (and although they differ in magnitude, all are statistically reliable).
(3) Generalization to different shapes
Thank you for bringing up this important topic. First, note that while dimensionality and volume are global properties of models and take only two possible values in our human tasks, the boundary and robustness penalties depend on both the model and the data and therefore assume a continuum of values across the tasks (note also that the boundary penalty is relevant for all task types, not just the one designed specifically to study it, because all models except the zero-dimensional dot have boundaries). Therefore, our experimental setting is less restrictive than it may seem, because it explores a range of possible values for two of the four model features. However, we agree that it would be interesting to repeat our experiment with a broader range of models, perhaps allowing their dimensionality and volume to vary more. In the same spirit, it would be interesting to study the dependence of human behavior on the amount of available data. We believe that these are all excellent ideas for further study that exceed the scope of the present paper. We will include these important points in a revised Discussion.
(4) Usage of “decision making” vs “perceptual judgment”
Thank you. We will make clearer in the text that our usage of “decision making” overlaps with the idea of a perceptual judgment and that our experiments do not tackle sequential aspects of repeated decisions.
Reviewer #2 (Public review):
This manuscript presents a sophisticated investigation into the computational mechanisms underlying human decision-making, and it presents evidence for a preference for simpler explanations (Occam's razor). The authors dissect the simplicity bias into four different components, and they design experiments to target each of them by presenting choices whose underlying models differ only in one of these components. In the learning tasks, participants must infer a "law" (a logical rule) from observed data in a way that operationalizes the process of scientific reasoning in a controlled laboratory setting. The tasks are complex enough to be engaging but simple enough to allow for precise computational modeling.
As a novel feature, the authors derive an additional term in the expansion of the log-evidence, which arises from boundary terms. This is combined with a choice model, which is the one that is tested in experiments. Experiments are run, both with humans and with artificial intelligence agents, showing that humans have an enhanced preference for simplicity as compared to artificial neural networks.
Overall, the work is well written, interesting, and timely, bridging concepts in statistical inference and human decision making. Although technical details are rather elaborate, my understanding is that they represent the state of the art.
I have only one main comment that I think deserves further discussion. Computing the complexity penalty of models may be hard. It is unlikely that humans can perform such a calculation on the fly. As the authors discuss in the final section, while the dimensionality term may be easier to compute, others (e.g., the volume term, which requires an integral) may be considerably harder to compute (it is true that they should be computed once and for all for each task, but still...). I wonder why the sensitivity of human decision making differs so much across the different terms, and in particular whether it aligns with computational simplicity, or with the possibility of approximating each term by simple heuristics. Indeed, the sensitivity to the volume term is significantly and systematically lower than that of other terms. I wonder whether this relation could be made more quantitative using neural networks, using as a proxy of computational hardness the number of samples needed to reach a given error level in learning each of these terms.
Thank you. The computational complexity associated with calculating the different terms and its potential connection to human sensitivity to the terms is an intriguing topic. As we hinted at in the discussion, we agree with the reviewer that this is a natural candidate for further research, which likely deserves its own study and exceeds the scope of the present paper.
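For the curious reader, the reviewer's proxy could be operationalized roughly along these lines (a hypothetical sketch only: the stimuli and the target penalty values would have to come from task-specific code, and `MLPRegressor` is just a convenient stand-in network):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the proposed proxy for computational hardness: the number of
# training samples a small network needs before its held-out error on a
# given complexity term (dimensionality, boundary, volume, robustness)
# drops below a fixed threshold. X_* and y_* are placeholders for
# task-generated stimuli and the corresponding penalty values.

def samples_to_threshold(X_train, y_train, X_test, y_test,
                         threshold=0.05, step=200):
    """Smallest training-set size at which held-out MSE < threshold."""
    for n in range(step, len(X_train) + 1, step):
        net = MLPRegressor(hidden_layer_sizes=(32, 32),
                           max_iter=2000, random_state=0)
        net.fit(X_train[:n], y_train[:n])
        mse = np.mean((net.predict(X_test) - y_test) ** 2)
        if mse < threshold:
            return n   # fewer samples needed = computationally easier term
    return None        # threshold never reached within the sample budget
```

Comparing the returned sample counts across the four penalty terms would give the quantitative hardness ranking the reviewer asks about.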
As a minor aside, at least for the present task the volume term may not be that hard to compute, because it can be expressed in terms of the number of distinguishable probability distributions in the model (Balasubramanian 1996). Given the nature of our task, where the noise is Gaussian, isotropic, and of known variance, the geometry of the model is actually the Euclidean geometry of the plane, and the volume is simply the (log of the) length of the line that represents the one-dimensional models, measured in units of the standard deviation of the noise.
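Spelling this aside out as a worked equation (our paraphrase; cf. Balasubramanian 1996): for data $x \sim \mathcal{N}(\mu(\theta), \sigma^2 I)$ with $\theta$ parameterizing a line of length $L$ in the plane, the Fisher information metric is $g(\theta) = |\mu'(\theta)|^2/\sigma^2$, so the volume term reduces to

$$\log \int d\theta\, \sqrt{\det g(\theta)} \;=\; \log \int \frac{|\mu'(\theta)|}{\sigma}\, d\theta \;=\; \log \frac{L}{\sigma},$$

i.e., the log of the line's length in units of the noise standard deviation, as stated above.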
Reviewer #3 (Public review):
Summary:
This is a very interesting paper that documents how humans use a variety of factors that penalize model complexity and integrate over a possible set of parameters within each model. By comparison, trained neural networks also use these biases, but only on tasks where model selection was part of the reward structure. In the situation where training emphasizes maximum-likelihood decisions, neural networks, but not humans, were able to adapt their decision-making. Humans continue to use model-integration simplicity biases.
Strengths:
This study used a pre-registered plan for analyzing human data, which exceeds the standards of other current studies.
The results are technically correct.
Weaknesses:
The presentation of the results could be improved.
We thank the reviewer for their appreciation of our experimental design and methodology, and for pointing out (in the separate "recommendations to authors") a few passages of the paper where the presentation could be improved. We will clarify these passages in the revision.
card network
Razorpay, since we aren't yet storing tokens with card networks (no network tokenisation)
Handy Tips: CVV is not required by default for tokenised cards across all networks. CVV is optional for tokenised card payments. Do not pass dummy CVV values. To implement this change, skip passing the cvv parameter entirely, or pass a null or empty value in the CVV field. We recommend removing the CVV field from your checkout UI/UX for tokenised cards. If CVV is still collected for tokenised cards and the customer enters a CVV, pass the entered CVV value to Razorpay.
CVV Less Flow is not yet present in i18n
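As a rough illustration of the tip above (a hypothetical payload sketch, not verified against Razorpay's API reference; all field names and values here are assumptions):

```python
# Hypothetical sketch of a tokenised-card payment payload without CVV.
# Field names are assumptions based on the note above; check Razorpay's
# official API reference before relying on them.
payment = {
    "amount": 100000,                  # amount in the smallest currency unit
    "currency": "INR",
    "method": "card",
    "token": "token_hypothetical123",  # saved-card token (placeholder value)
    # The "cvv" key is omitted entirely; per the note above, passing a null
    # or empty value would also work. Never pass a dummy CVV.
}

cvv_entered = None   # if the checkout UI still collects a CVV for tokenised
if cvv_entered:      # cards, pass the customer's entered value through as-is
    payment["cvv"] = cvv_entered
```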