  1. Oct 2025
    1. Reviewer #3 (Public review):

      Summary:

      Hull et al. examine Drosophila mutants for the Gaucher's disease locus GBA1/Gba1b, a locus that, when heterozygous, is a risk factor for Parkinson's. Focusing on the Malpighian tubules and their function, they identify a breakdown of cell junctions, loss of haemolymph filtration, sensitivity to ionic imbalance, water retention, and loss of endocytic function in nephrocytes. There is also an imbalance in ROS levels between the cytoplasm and mitochondria, with reduced glutathione levels; rescue of the latter did not improve longevity. They observe some of the same phenotypes in mutants of Parkin, but treatment by upregulation of autophagy via rapamycin feeding could only rescue the Gba1b mutant and not the Parkin mutant.

      Strengths:

      The paper uses a range of cellular, genetic, and physiological analyses and manipulations to fully describe the renal dysfunction in the Gba1b animals. The picture developed has depth and detail; the data appear sound and thorough.

      Weaknesses:

      The paper relies mostly on the biallelic Gba1b mutant, which may reflect dysfunction in Gaucher's patients, though this has yet to be fully explored. The claims for the heterozygous allele and a role in Parkinson's are somewhat more tenuous, resting on the assumption that heterozygosity produces a similar but milder phenotype than the full loss-of-function.

    2. Author response:

      Reviewer #1 (Public review):

      Major Comments:

      (1) The abstract frames progressive renal dysfunction as a "central, disease-modifying feature" in both Gba1b and Parkin models, with systemic consequences including water retention, ionic hypersensitivity, and worsened neurological phenotypes. While the data demonstrate renal degeneration and associated physiological stress, the causal contribution of renal defects versus broader organismal frailty is not fully disentangled. Please consider adding causal experiments (e.g., temporally restricted renal rescue/knockdown) to directly establish kidney-specific contributions.

      We concur that this would help strengthen our conclusions. However, manipulating Gba1b in a tissue-specific manner remains challenging due to its propensity for secretion via extracellular vesicles (ECVs). Leo Pallanck and Marie Davis have elegantly shown that ectopic Gba1b expression in neurons and muscles (tissues with low predicted endogenous expression) is sufficient to rescue major organismal phenotypes. Consistent with this, we have been unable to generate clear tissue-specific phenotypes using Gba1b RNAi.

      We will pursue more detailed time-course experiments on the progression of renal pathology (water weight, renal stem cell proliferation, redox defects, etc.), with the goal of identifying earlier-onset phenotypes that potentially drive dysfunction.

      (2) The manuscript shows multiple redox abnormalities in Gba1b mutants (reduced whole fly GSH, paradoxical mitochondrial reduction with cytosolic oxidation, decreased DHE, increased lipid peroxidation, and reduced peroxisome density/Sod1 mislocalization). These findings support a state of redox imbalance, but the driving mechanism remains broad in the current form. It is unclear if the dominant driver is impaired glutathione handling or peroxisomal antioxidant/β-oxidation deficits or lipid peroxidation-driven toxicity, or reduced metabolic flux/ETC activity. I suggest adding targeted readouts to narrow the mechanism.

      We agree that we have not yet established a core driver of redox imbalance. Identifying one is likely to be challenging, especially as our RNA-sequencing data from aged Gba1b<sup>⁻/⁻</sup> fly heads (Atilano et al., 2023) indicate that several glutathione S-transferases (GstD2, GstD5, GstD8, and GstD9) are upregulated. We can attempt overexpression of GSTs, which has been elegantly shown by Leo Pallanck to ameliorate pathology in Pink1/Parkin mutant fly brains. However, mechanisms that specifically suppress lipid peroxidation or its associated toxicity, independently of other forms of redox damage, remain poorly understood in Drosophila. Our position is that there is unlikely to be a single dominant driver of redox imbalance. Notably, CytB5 overexpression has been shown to reduce lipid peroxidation (Chen et al., 2017), and GstS1 has been reported to conjugate glutathione to the toxic lipid peroxidation product 4-HNE (Singh et al., 2001). Additionally, work from the Bellen lab demonstrated that overexpression of lipases, bmm or lip4, suppresses lipid peroxidation-mediated neurodegeneration (Liu et al., 2015). We will therefore test the effects of overexpressing CytB5, bmm, and lip4 in Gba1b<sup>⁻/⁻</sup> flies to help further define the mechanism.

      (3) The observation that broad antioxidant manipulations (Nrf2 overexpression in tubules, Sod1/Sod2/CatA overexpression, and ascorbic acid supplementation) consistently shorten lifespan or exacerbate phenotypes in Gba1b mutants is striking and supports the idea of redox fragility. However, these interventions are broad. Nrf2 influences proteostasis and metabolism beyond redox regulation, and Sod1/Sod2/CatA may affect multiple cellular compartments. In the absence of dose-response testing or controls for potential off-target effects, the interpretation that these outcomes specifically reflect redox dyshomeostasis feels ahead of the data. I suggest incorporating narrower interpretations (e.g., targeting lipid peroxidation directly) to clarify which redox axis is driving the vulnerability.

      We are in agreement that Drosophila Cnc exhibits functional conservation with both Nrf1 and Nrf2, which have well-established roles in proteostasis and lysosomal biology that may exacerbate pre-existing lysosomal defects in Gba1b mutants. In our manuscript, Nrf2 manipulation forms part of a broader framework of evidence, including dietary antioxidant ascorbic acid and established antioxidant effectors CatA, Sod1, and Sod2. Together, these data indicate that Gba1b mutant flies display a deleterious response to antioxidant treatments or manipulations. To further characterise the redox state, we will quantify lipid peroxidation using Bodipy 581/591 and assess superoxide levels via DHE staining under our redox-altering experimental conditions.

      As noted above, we will attempt to modulate lipid peroxidation directly through CytB5 and GstS1 overexpression, acknowledging the caveat that this approach may not fully dissociate lipid peroxidation from other aspects of redox stress. We have also observed detrimental effects of PGC1α on the lifespan of Gba1b<sup>⁻/⁻</sup> flies and will further investigate its impact on redox status in the renal tubules.

      (4) This manuscript concludes that nephrocyte dysfunction does not exacerbate brain pathology. This inference currently rests on a limited set of readouts: dextran uptake and hemolymph protein as renal markers, lifespan as a systemic measure, and two brain endpoints (LysoTracker staining and FK2 polyubiquitin accumulation). While these data suggest that nephrocyte loss alone does not amplify lysosomal or ubiquitin stress, they may not fully capture neuronal function and vulnerability. To strengthen this conclusion, the authors could consider adding functional or behavioral assays (e.g., locomotor performance).

      We will address this suggestion by performing DAM activity assays and climbing assays in the Klf15; Gba1b<sup>⁻/⁻</sup> double mutants.

      (5) The manuscript does a strong job of contrasting Parkin and Gba1b mutants, showing impaired mitophagy in Malpighian tubules, complete nephrocyte dysfunction by day 28, FRUMS clearance defects, and partial rescue with tubule-specific Parkin re-expression. These findings clearly separate mitochondrial quality control defects from the lysosomal axis of Gba1b. However, the mechanistic contrast remains incomplete. Many of the redox and peroxisomal assays are only presented for Gba1b. Including matched readouts across both models (e.g., lipid peroxidation, peroxisome density/function, Grx1-roGFP2 compartmental redox status) would make the comparison more balanced and strengthen the conclusion that these represent distinct pathogenic routes.

      We agree that Gba1b<sup>⁻/⁻</sup> mutants have been characterised in greater detail than park<sup>¹/¹</sup> mutants. The primary aim of our study was not to provide an exhaustive characterisation of park<sup>¹/¹</sup>, but rather to compare key shared and distinct mechanisms underlying renal dysfunction. We have included several relevant readouts for park<sup>¹/¹</sup> tubules (e.g., Figures 7D and 8H: mito-Grx1-roGFP2; Figure 8J: lipid peroxidation using BODIPY 581/591). To expand our characterisation of park<sup>¹/¹</sup> flies, we will express the cytosolic Grx1 reporter and the peroxisomal marker YFP::Pts.

      (6) Rapamycin treatment is shown to rescue several renal phenotypes in Gba1b mutants (water retention, RSC proliferation, FRUMS clearance, lipid peroxidation) but not in Parkin, and mitophagy is not restored in Gba1b. This provides strong evidence that the two models engage distinct pathogenic pathways. However, the therapeutic interpretation feels somewhat overstated. Human relevance should be framed more cautiously, and the conclusions would be stronger with mechanistic markers of autophagy (e.g., Atg8a, Ref(2)p flux in Malpighian tubules) or with experiments varying dose, timing, and duration (short-course vs chronic rapamycin).

      We will measure Atg8a, polyubiquitin, and Ref(2)P levels in Gba1b<sup>⁻/⁻</sup> and park<sup>¹/¹</sup> tubules following rapamycin treatment. In our previous study focusing on the gut (Atilano et al., 2023), we showed that rapamycin treatment increased lysosomal area, as assessed using LysoTracker<sup>TM</sup>. We will extend this analysis to the renal tubules following rapamycin exposure. Another reviewer requested that we adopt more cautious language regarding the clinical translatability of this work, and we will amend this in Version 2.

      (7) Several systemic readouts used to support renal dysfunction (FRUMS clearance, salt stress survival) could also be influenced by general organismal frailty. To ensure these phenotypes are kidney-intrinsic, it would be helpful to include controls such as tissue-specific genetic rescue in Malpighian tubules or nephrocytes, or timing rescue interventions before overt systemic decline. This would strengthen the causal link between renal impairment and the observed systemic phenotypes.

      As noted in our response to point 1, we currently lack reliable approaches to manipulate Gba1b in a tissue-specific manner. However, we agree that it is important to distinguish kidney-intrinsic dysfunction from generalised organismal frailty. In the park model, we have already performed renal cell-autonomous rescue: re-expression of Park specifically in Malpighian tubule principal cells (C42-Gal4) throughout adulthood partially normalises water retention, whereas brain-restricted Park expression has no effect on renal phenotypes. Because rescuing Park only in the renal tubules is sufficient to correct a systemic fluid-handling phenotype in otherwise mutant animals, these findings indicate that the systemic defects are driven, at least in part, by renal dysfunction rather than nonspecific organismal frailty.

      To strengthen this causal link, we will now extend this same tubule-specific Park rescue (C42-Gal4 and the high-fidelity Malpighian tubule driver CG31272-Gal4) to additional systemic readouts raised by the reviewer. Specifically, we will assay FRUMS clearance and salt stress survival in rescued versus non-rescued park mutants to determine whether renal rescue also mitigates these systemic phenotypes.

      Reviewer #2 (Public review):

      (1) The authors claim that: "renal system dysfunction negatively impacts both organismal and neuronal health in Gba1b-/- flies, including autophagic-lysosomal status in the brain." This statement implies that renal impairments drive neurodegeneration. However, there is no direct evidence provided linking renal defects to neurodegeneration in this model. It is worth noting that Gba1b-/- flies are a model for neuronopathic Gaucher disease (GD): they accumulate lipids in their brains and present with neurodegeneration and decreased survival, as shown by Kinghorn et al. (The Journal of Neuroscience, 2016, 36, 11654-11670) and by others, which the authors failed to mention (Davis et al., PLoS Genet. 2016, 12: e1005944; Cabasso et al., J Clin Med. 2019, 8:1420; Kawasaki et al., Gene, 2017, 614:49-55).

      With the caveats noted in the responses below, we show that driving Nrf2 expression using the renal tubular driver C42 results in decreased survival, more extensive renal defects, and increased brain pathology in Gba1b<sup>⁻/⁻</sup> flies, but not in healthy controls. This suggests that a healthy brain can tolerate renal dysfunction without severe pathological consequences. Our findings therefore indicate that in Gba1b<sup>⁻/⁻</sup> flies, there may be an interaction between renal defects and brain pathology. We do not explicitly claim that renal impairments drive neurodegeneration; rather, we propose that manipulations exacerbating renal dysfunction can have organism-wide effects, ultimately impacting the brain.

      The reviewer is correct that our Gba1b<sup>⁻/⁻</sup> fly model represents a neuronopathic GD model with age-related pathology. Indeed, we reproduce the autophagic-lysosomal defects previously reported (Kinghorn et al., 2016) in Figure 5. We agree that the papers cited by the reviewer merit inclusion, and in Version 2 we will incorporate them into the following pre-existing sentence in the Results:

      “The gut and brain of Gba1b<sup>⁻/⁻</sup> flies, similar to macrophages in GD patients, are characterised by enlarged lysosomes (Kinghorn et al., 2016; Atilano et al., 2023).”

      (2) The authors tested brain pathology in two experiments:

      (a) To determine the consequences of abnormal nephrocyte function on brain health, they measured lysosomal area in the brain of Gba1b-/-, Klf15LOF, or stained for polyubiquitin. Klf15 is expressed in nephrocytes and is required for their differentiation. There was no additive effect on the increased lysosomal volume (Figure 3D) or polyubiquitin accumulation (Figure 3E) seen in Gba1b-/- fly brains, implying that loss of nephrocyte viability itself does not exacerbate brain pathology.

      (b) The authors tested the consequences of overexpression of the antioxidant regulator Nrf2 in principal cells of the kidney on neuronal health in Gba1b-/- flies, using the c42-GAL4 driver. They claim that "This intervention led to a significant increase in lysosomal puncta number, as assessed by LysoTrackerTM staining (Figure 5D), and exacerbated protein dyshomeostasis, as indicated by polyubiquitin accumulation and increased levels of the ubiquitin-autophagosome trafficker Ref(2)p/p62 in Gba1b-/- fly brains (Figure 5E). Interestingly, Nrf2 overexpression had no significant effect on lysosomal area or ubiquitin puncta in control brains, demonstrating that the antioxidant response specifically in Gba1b-/- flies negatively impacts disease states in the brain and renal system." Notably, c42-GAL4 is a leaky driver, expressed in salivary glands, Malpighian tubules, and pericardial cells (Beyenbach et al., Am. J. Physiol. Cell Physiol. 318: C1107-C1122, 2020). Expression in pericardial cells may affect heart function, which could explain deterioration in brain function.

      Taken together, the contribution of renal dysfunction to brain health remains debatable.

      Based on the above, I believe the title should be changed to: Redox Dyshomeostasis Links Renal and Neuronal Dysfunction in Drosophila Models of Gaucher disease. Such a title will reflect the results presented in the manuscript.

      We agree that C42-Gal4 is a leaky driver; unfortunately, this was true for all commonly used Malpighian tubule drivers available when we began the study. A colleague has recommended CG31272-Gal4 from the Perrimon lab’s recent publication (Xu et al., 2024) as a high-fidelity Malpighian tubule driver. If it proves to maintain principal-cell specificity throughout ageing in our hands, we will repeat key experiments using this driver.

      (3) The authors mention that Gba1b is not expressed in the renal system, which means that no renal phenotype can be attributed directly to any known GD pathology. They suggest that systemic factors such as circulating glycosphingolipids or loss of extracellular vesicle-mediated delivery of GCase may mediate renal toxicity. This raises a question about the validity of this model to test pathology in the fly kidney. According to Flybase, there is expression of Gba1b in renal structures of the fly.

      Our evidence suggesting that Gba1b is not substantially expressed in renal tissue is based on the Gba1b-CRIMIC-Gal4 line, which fails to drive expression of fluorescently tagged proteins in the Malpighian tubules; we have previously shown that this driver line also produces no expression in nephrocytes (Atilano et al., 2023). This does not exclude the possibility that Gba1b functions within the tubules. Notably, Leo Pallanck has provided compelling evidence that Gba1b is present in extracellular vesicles (ECVs), and given the role of the Malpighian tubules in haemolymph filtration, these cells are likely exposed to circulating ECVs. The lysosomal defects observed in Gba1b<sup>⁻/⁻</sup> tubules therefore suggest a potential role for Gba1b in this tissue.

      John Vaughan and Thomas Clandinin have developed mCherry- and Lamp1.V5-tagged Gba1b constructs. We intend to express these in tissues shown by the Pallanck lab to release ECVs (e.g., neurons and muscle) and examine whether the protein can be detected in the tubules.

      (4) It is worth mentioning that renal defects are not commonly observed in patients with Gaucher disease. Relevant literature: Becker-Cohen et al., A Comprehensive Assessment of Renal Function in Patients With Gaucher Disease, Am. J. Kidney Dis., 2005, 46:837-844.

      We have identified five references indicating that renal involvement, while rare, does occur in association with GD. We agree that this is a valid citation and will include it in the revised introductory sentence:

      “However, renal dysfunction remains a rare symptom in GD patients (Smith et al., 1978; Chander et al., 1979; Siegel et al., 1981; Halevi et al., 1993; Becker-Cohen et al., 2005).”

      (5) In the discussion, the authors state: "Together, these findings establish renal degeneration as a driver of systemic decline in Drosophila models of GD and PD..." and go on to discuss a brain-kidney axis in PD. However, since this study investigates a GD model rather than a PD model, I recommend omitting this paragraph, as the connection to PD is speculative and not supported by the presented data.

      Our position is that Gba1b<sup>⁻/⁻</sup> represents a neuronopathic Gaucher disease model with mechanistic relevance to PD. The severity of GBA1 mutations correlates with the extent of GBA1/GCase loss of function and, consequently, with increased PD risk. Likewise, biallelic parkin mutations cause a severe, heritable form of PD, and the Drosophila park<sup>⁻/⁻</sup> model is a well-established and widely recognised system that has been instrumental in elucidating how Parkin and Pink1 mutations drive PD pathogenesis.

      We therefore see no reason to omit this paragraph. While some aspects are inherently speculative, such discussion is appropriate and valuable when addressing mechanisms underlying a complex and incompletely understood disease, provided interpretations remain measured. At no point do we claim that our work demonstrates a direct brain-renal axis. Rather, our data indicate that renal dysfunction is a disease-modifying feature in these models, aligning with emerging epidemiological evidence linking PD and renal impairment.

      (6) The claim: "If confirmed, our findings could inform new biomarker strategies and therapeutic targets for GBA1 mutation carriers and other at-risk groups. Maintaining renal health may represent a modifiable axis of intervention in neurodegenerative disease," extends beyond the scope of the experimental evidence. The authors should consider tempering this statement or providing supporting data.

      (7) The conclusion, "we uncover a critical and previously overlooked role for the renal system in GD and PD pathogenesis," is too strong given the data presented. As no mechanistic link between renal dysfunction and neurodegeneration has been established, this claim should be moderated.

      We agree that these sections may currently overstate our findings. In Version 2, we will revise them to ensure our claims remain balanced, while retaining the key points that arise from our data and clearly indicating where conclusions require confirmation (“if confirmed”) or additional study (“warrants further investigation”).

      “If confirmed, our findings could inform new biomarker strategies and therapeutic targets for patients with GD and PD. Maintaining renal health may represent a modifiable axis of intervention in these diseases.”

      “We uncover a notable and previously underappreciated role for the renal system in GD and PD, which now warrants further investigation.”

      (8) The relevance of Parkin mutant flies is questionable, and this section could be removed from the manuscript.

      We intend to include the data for the Parkin loss-of-function mutants, as these provide essential support for the PD-related findings discussed in our manuscript. To our knowledge, this represents the first demonstration that Parkin mutants display defects in Malpighian tubule function and water homeostasis. We therefore see no reason to remove these findings. Furthermore, as Reviewer 1 specifically requested additional experiments using the Park fly model, we plan to incorporate these analyses in the revised manuscript.

      Minor comments:

      (1) Figure 1G: The FRUMS assay is not shown for Gba1b-/- flies.

      The images in Figure 1G illustrate representative stages of dye clearance. We have quantified the clearance time course for both genotypes. During this process, the tubules of Gba1b<sup>⁻/⁻</sup> flies, similar to controls, sequentially resemble each of the three example images. As the Gba1b<sup>⁻/⁻</sup> tubules appear morphologically identical to controls, differing only in population-level clearance dynamics, we do not feel that including additional example images would provide further informative value.

      (2) In panels D and F of Figure 2, survival of control and Gba1b-/- flies in the presence of 4% NaCl is presented. However, longevity is different (up to 10 days in D and ~3 days in F for control). The authors should explain this.

      We agree. In our experience, feeding-based stress survival assays show considerable variability between experiments, and we therefore interpret results only within individual experimental replicates. We have observed similar variability in oxidative stress, starvation, and xenobiotic survival assays, which may reflect batch-specific or environmental effects.

      (3) In Figure 7F, the representative image does not correspond to the quantification; the percentage of endosome-negative nephrocytes seems to be higher for the control than for the park1/1 flies. Please check this.

      The example images are correctly assigned. Typically, an endosome-negative nephrocyte shows no dextran uptake, whereas an endosome-positive nephrocyte displays a ring of puncta around the cell periphery. In park¹/¹ mutants, dysfunctional nephrocytes exhibit diffuse dextran staining throughout the cell, accompanied by diffuse DAPI signal, indicating a complete loss of membrane integrity and likely cell death. We have 63× images from the preparations shown in Figure 7F demonstrating this. In Version 2, we will include apical and medial z-slices of the nephrocytes to illustrate these findings (to be added as supplementary data).

      (4) In Figure 7H, the significance between control and park1/1 flies in the FRUMS assay is missing.

      We observe significant dye clearance from the haemolymph; however, the difference in complete clearance from the tubules does not reach statistical significance. This may speculatively reflect alterations in specific aspects of tubule function, where absorption and transcellular flux are affected, but subsequent clearance from the tubule lumen remains intact. We do not feel that our current data provide sufficient resolution to draw detailed conclusions about tubule physiology at this level.

      Reviewer #3 (Public review):

      Weaknesses:

      The paper relies mostly on the biallelic Gba1b mutant, which may reflect dysfunction in Gaucher's patients, though this has yet to be fully explored. The claims for the heterozygous allele and a role in Parkinson's are somewhat more tenuous, resting on the assumption that heterozygosity produces a similar but milder phenotype than the full loss-of-function.

      We agree with the reviewer that studying heterozygotes may provide valuable insight into GBA1-associated PD. We will therefore assess whether subtle renal defects are detectable in Gba1b<sup>+/⁻</sup> heterozygotes. We clearly state that GBA1 mutations act as a risk factor for PD rather than a Mendelian inherited cause. Consistent with findings from Gba heterozygous mice, Gba1b<sup>+/⁻</sup> flies display minimal phenotypes (Kinghorn et al., 2016), and any observable effects are expected to be very mild and age dependent.

      (1) Figure 1c, the loss of stellate cells. What age are the MTs shown? Is this progressive or developmental?

      These experiments were conducted on flies that were three weeks of age, as were all manipulations unless otherwise stated. We will ensure that this information is clearly indicated in the figure legends in Version 2. We did not observe changes in stellate cell number at three days of age, and this result will be included in the supplementary material in Version 2. Our data therefore suggest that this is a progressive phenotype.

      (2) I might have missed this, but for Figure 3, do the mutant flies start with a similar average weight, or are they bloated?

      We will perform an age-related time course of water weight in response to Reviewer 1’s comments. For all experiments, fly eggs are age-matched and seeded below saturation density to ensure standardised conditions. Gba1b mutant flies do not exhibit any defects in body size or timing of eclosion.

      (3) On 2F, add to the graph that 4% NaCl (or if it is KCl) is present for all conditions, just to make the image self-sufficient to read.

      Many thanks for the suggestion. We agree that this will increase clarity and will make this amendment in Version 2 of the manuscript.

      (4) P13 - rephrase, 'target to either the mitochondria or the cytosol' (as it is phrased, it sounds as though you are doing both at the same time).

      We agree and we plan to revise the sentence as follows:

      Original:

      “To further evaluate the glutathione redox potential (E<sub>GSH</sub>) in MTs, we utilised the redox-sensitive green, fluorescent biosensor Grx1-roGFP2, targeted to both the mitochondria and cytosol (Albrecht et al., 2011).”

      Revised:

      “To further evaluate the glutathione redox potential (E<sub>GSH</sub>) in MTs, we utilised the redox-sensitive fluorescent biosensor Grx1-roGFP2, targeted specifically to either the mitochondria or the cytosol using mito- or cyto-tags, respectively (Albrecht et al., 2011).”

      (5) In 6F - the staining appears more intense in the Park mutant - perhaps add asterisks or arrowheads to indicate the nephrocytes so that the reader can compare the correct parts of the image?

      Reviewer 2 reached the same interpretation. Typically, an endosome-negative nephrocyte shows no dextran uptake, whereas an endosome-positive nephrocyte displays a ring of puncta around the cell periphery. In park¹/¹ mutants, dysfunctional nephrocytes exhibit diffuse dextran staining throughout the cell, accompanied by diffuse DAPI signal, indicative of a complete loss of membrane integrity and likely cell death. We have 63× images from the preparations shown in Figure 7F demonstrating this, and in Version 2 we will include apical and medial z-slices of the nephrocytes to illustrate these findings (to be added as supplementary data).

      (6) In the main results text - need some description/explanation of the SOD1 v SOD2 distribution (as it is currently understood) in the cell - SOD2 being predominantly mitochondrial. This helps arguments later on.

      Thank you for this suggestion. We plan to amend the text as follows:

      “Given that Nrf2 overexpression shortens lifespan in Gba1b<sup>⁻/⁻</sup> flies, we investigated the effects of overexpressing its downstream antioxidant targets, Sod1, Sod2, and CatA, both ubiquitously using the tub-Gal4 driver and with c42-Gal4, which expresses in PCs.”

      to:

      “Given that Nrf2 overexpression shortens lifespan in Gba1b<sup>⁻/⁻</sup> flies, we investigated the effects of overexpressing its downstream antioxidant targets, Sod1, Sod2, and CatA, both ubiquitously using the tub-Gal4 driver and with c42-Gal4, which expresses in PCs. Sod1 and CatA function primarily in the cytosol and peroxisomes, whereas Sod2 is localised to the mitochondria. Sod1 and Sod2 catalyse the dismutation of superoxide radicals to hydrogen peroxide, while CatA subsequently degrades hydrogen peroxide to water and oxygen.”

      (7) Figure 1G, what age are the flies? Same for 3D and E, 4C,D,E, 5B - please check the ages of flies for all of the imaging figures; this information appears to have been missed out.

      As stated above, all experiments were conducted on three-week-old flies unless otherwise specified. In Version 2 of the manuscript, we will ensure this information is included consistently in the figure legends to prevent any potential confusion.

    1. Reviewer #1 (Public review):

      Summary:

      The authors used weighted ensemble enhanced sampling molecular dynamics (MD) to test the hypothesis that a double mutant of Abl favors the DFG-in state relative to the WT and therefore causes the drug resistance to imatinib.

      Strengths:

      The authors employed three novel progress coordinates to sample the DFG flip of Abl. The hypothesis regarding the double mutant's drug resistance is novel.

      Weaknesses:

      The study contains many uncertain aspects. As such, major conclusions do not appear to be supported.

      Comments on revisions:

      The authors have addressed some of my concerns, but the following remain to be addressed:

      (1) Definition of the DFG conformation (in vs out). The authors specified their definition in the revised manuscript, but it has not been validated for a large number of kinases to distinguish between the two states. Thus, I recommend that the authors calculate the FES using another definition (see Tsai et al, JACS 2019, 141, 15092−15101) to confirm their findings. This FES can be included in the SI.

      (2) There is no comparison to previous computational work. I would like to see a comparison between the authors' finding of the DFG-in to DFG-out transition and that described in Tsai et al, JACS 2019, 141, 15092−15101.

      (3) My previous comment: "The study is not very rigorous. The major conclusions do not appear to be supported. The claim that it is the first unbiased simulation to observe DFG flip is not true. For example, Hanson, Chodera et al (Cell Chem Biol 2019), Paul, Roux et al (JCTC 2020), and Tsai, Shen et al (JACS 2019) have also observed the DFG flip." has not been adequately addressed.

      The newly added paragraph clearly does not address my original comment.

      "Through our work, we have simulated an ensemble of DFG flip pathways in a wild-type kinase and its variants with atomistic resolution and without the use of biasing forces, also reporting the effects of inhibitor-resistant mutations in the broader context of kinase inactivation likelihood with such level of detail. "

      (4) My previous comment, "Setting the DFG-Asp to the protonated state is not justified, because in the DFG-in state, the DFG-Asp is clearly deprotonated." has not been addressed.

      The authors' response stated:

      According to previous publications, DFG-Asp is frequently protonated in the DFG-in state of Abl1 kinase. For instance, as quoted from Hanson, Chodera, et al., Cell Chem Bio (2019), "Consistent with previous simulations on the DFG-Asp-out/in interconversion of Abl kinase we only observe the DFG flip with protonated Asp747 (Shan et al., 2009). We showed previously that the pKa for the DFG-Asp in Abl is elevated at 6.5."

      Since the pKa of DFG-Asp is 6.5, it should be predominantly deprotonated at physiological pH (7.5). Thus, the authors' use of a protonated DFG-Asp contradicts this. I am not requesting that the authors redo the entire simulations, but they need to acknowledge this discrepancy and add a brief discussion. See a constant-pH study that demonstrates the protonation-state population shift for DFG-Asp as the DFG motif transitions from the in to the out state (Tsai et al, JACS 2019, 141, 15092−15101).
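As a quick sanity check on this protonation argument, the Henderson-Hasselbalch relation gives the protonated fraction directly from the pKa (6.5) and pH (7.5) quoted above; the following is illustrative arithmetic only, not part of either party's analysis:

```python
# Henderson-Hasselbalch: fraction of an acid present in the protonated
# (neutral) form at a given pH. Numbers are those quoted in the review:
# pKa 6.5 for DFG-Asp, physiological pH 7.5.

def protonated_fraction(pka: float, ph: float) -> float:
    """Fraction of the acid in the protonated form: 1 / (1 + 10^(pH - pKa))."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

frac = protonated_fraction(pka=6.5, ph=7.5)
print(f"protonated fraction at pH 7.5: {frac:.1%}")  # ~9%, i.e. mostly deprotonated
```

With these numbers roughly nine molecules in a hundred carry the proton at any instant, which is the quantitative basis of the reviewer's point.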

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1:

      Specifically, the authors need to define the DFG conformation using criteria accepted in the field, for example, see https://klifs.net/index.php.

      We thank the reviewer for this suggestion. In the manuscript, we use pseudodihedral and bond-angle-based DFG definitions, previously established in the literature cited in the study (reiterated below), to unambiguously define the side-chain conformational states of the DFG motif. As we are interested in the specific mechanics of DFG flips under different conditions, we have found that the descriptors defined below are sufficient to distinguish between DFG states and allow a more direct comparison with previously reported results obtained using different methods.

      We amended the text to be more clear as to those definitions and their choice:

      DFG angle definitions:

      Phe382/Cg, Asp381/OD2, Lys378/O

      Source: Structural Characterization of the Aurora Kinase B "DFG-flip" Using Metadynamics. Lakkaniga NR, Balasubramaniam M, Zhang S, Frett B, Li HY. AAPS J. 2019 Dec 18;22(1):14. doi: 10.1208/s12248-019-0399-6. PMID: 31853739; PMCID: PMC7905835.

      “Finally, we chose the angle formed by Phe382's gamma carbon, Asp381's protonated side chain oxygen (OD2), and Lys378's backbone oxygen as PC3 based on observations from a study that used a similar PC to sample the DFG flip in Aurora Kinase B using metadynamics \cite{Lakkaniga2019}. This angular PC3 should increase or decrease (based on the pathway) during the DFG flip, with peak differences at intermediate DFG configurations, and then revert to its initial state when the flip concludes.”

      DFG pseudodihedral definitions:

      Ala380/Cb, Ala380/Ca, Asp381/Ca, Asp381/Cg

      Ala380/Cb, Ala380/CA, Phe382/CA, Phe382Cg

      Source: Computational Study of the “DFG-Flip” Conformational Transition in c-Abl and c-Src Tyrosine Kinases. Yilin Meng, Yen-lin Lin, and Benoît Roux The Journal of Physical Chemistry B 2015 119 (4), 1443-1456 DOI: 10.1021/jp511792a

      “For downstream analysis, we used two pseudodihedrals previously defined in the existing Abl1 DFG flip simulation literature \cite{Meng2015} to identify and discriminate between DFG states. The first (dihedral 1) tracks the flip state of Asp381, and is formed by the beta carbon of Ala380, the alpha carbon of Ala380, the alpha carbon of Asp381, and the gamma carbon of Asp381. The second (dihedral 2) tracks the flip state of Phe382, and is formed by the beta carbon of Ala380, the alpha carbon of Ala380, the alpha carbon of Phe381, and the gamma carbon of Phe381. These pseudodihedrals, when plotted in relation to each other, clearly distinguish between the initial DFG-in state, the target DFG-out state, and potential intermediate states in which either Asp381 or Phe381 has flipped.”
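The two pseudodihedrals above are ordinary four-atom torsions. As a minimal illustration of the underlying geometry (pure Python, with hypothetical coordinates rather than actual Abl1 atom positions), the standard atan2 dihedral construction is:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dihedral(p1, p2, p3, p4):
    """Signed dihedral angle (degrees) about the p2-p3 axis, atan2 convention."""
    b1, b2, b3 = sub(p2, p1), sub(p3, p2), sub(p4, p3)
    n1, n2 = cross(b1, b2), cross(b2, b3)       # normals of the two planes
    nb2 = math.sqrt(dot(b2, b2))
    m1 = cross(n1, tuple(c / nb2 for c in b2))  # frame vector fixing the sign
    return math.degrees(math.atan2(dot(m1, n2), dot(n1, n2)))

# Hypothetical coordinates: a planar arrangement gives 0 degrees, while moving
# the fourth point out of the plane gives a 90-degree magnitude.
print(dihedral((0, 1, 0), (0, 0, 0), (1, 0, 0), (1, 1, 0)))  # ~0.0
print(dihedral((0, 1, 0), (0, 0, 0), (1, 0, 0), (1, 0, 1)))  # magnitude ~90.0
```

In practice one would read the four atom positions (e.g., Ala380/Cb, Ala380/Ca, Asp381/Ca, Asp381/Cg) from each trajectory frame and feed them to such a function.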

      Convergence needs to be demonstrated for estimating the population difference between different conformational states.

      We agree that demonstrating convergence is important for accurate estimations of population differences between conformational states. However, as the DFG flip is a complex and concerted conformational change with an energy barrier of 30 kcal/mol [1], and considering the traditional limitations of methods like weighted ensemble molecular dynamics (WEMD), it would take an unrealistic amount of GPU time (months) to observe convergence in our simulations. As discussed in the text (see examples below), we caveat our energy estimations by explicitly mentioning that the state populations we report are not converged and are indicative of a much larger energy barrier in the mutant.

      “These relative probabilities qualitatively agree with the large expected free energy barrier for the DFG-in to DFG-out transition (~32 kcal/mol), and with our observation of a putative metastable DFG-inter state that is missed by NMR experiments due to its low occupancy.”

      “As an important caveat, it is unlikely that the DFG flip free energy barriers of over 70 kcal/mol estimated for the Abl1 drug-resistant variants quantitatively match the expected free energy barrier for their inactivation. Rather, our approximate free energy barriers are a symptom of the markedly increased simulation time required to sample the DFG flip in the variants relative to the wild-type, which is a strong indicator of the drastically reduced propensity of the variants to complete the DFG flip. Although longer WE simulations could allow us to access the timescales necessary for more accurately sampling the free energy barriers associated with the DFG flip in Abl1's drug-resistant compound mutants, the computational expense of running WE for 200 iterations is already large (three weeks with 8 NVIDIA RTX3900 GPUs for one replicate); this poses a logistical barrier to attempting to sample sufficient events to be able to fully characterize how the reaction path and free energy barrier change for the flip associated with the mutations. Regardless, the results of our WE simulations resoundingly show that the Glu255Lys/Val and Thr315Ile compound mutations drastically reduce the probability for DFG flip events in Abl1.”

      (1) Conformational states dynamically populated by a kinase determine its function. Tao Xie et al., Science 370, eabc2754 (2020). DOI:10.1126/science.abc2754

      The DFG flip needs to be sampled several times to establish free energy difference.

      Our simulations have captured thousands of correlated and dozens of uncorrelated DFG flip events. The per-replicate free energy differences are computed based on the correlated transitions. Please consult the WEMD literature (referenced below and in the manuscript, references 34 and 36) for more information on how WEMD allows the sampling of multiple such events and subsequent estimation of probabilities:

      Zuckerman et al (2017) 10.1146/annurev-biophys-070816-033834

      Chong et al (2021) 10.1021/acs.jctc.1c01154

      The free energy plots do not appear to show an intermediate state as claimed.

      Both the free energy plots and the representative/anecdotal trajectories analyzed in the study show a saddle point when Asp381 has flipped but Phe382 has not (which defines the DFG-inter state); we observe a distinct change in probability when going from the pseudodihedral values associated with DFG-inter to those of DFG-up or DFG-out. We removed references to the putative state S1, as we agree with the reviewer that its presence is unlikely given the data we show.

      The trajectory length of 7 ns in both Figure 2 and Figure 4 needs to be verified, as it is extremely short for a DFG flip that has a high free energy barrier.

      We appreciate this point. To clarify, the 7 ns segments correspond to collated trajectories extracted from the tens of thousands of walkers that compose the WEMD ensemble, and represent just the specific moments at which the dihedral flips occur rather than the entire flip process. On average, our WEMD simulations sample over 3 µs of aggregate simulation time before the first DFG flip event is observed, in line with a high energy barrier. This is made clear in the manuscript excerpt below: “Over an aggregate simulation time of over 20 $\mu$s, we have collected dozens of uncorrelated and unbiased inactivation events, starting from the lowest energy conformation of the Abl1 kinase core (PDB 6XR6) \cite{Xie2020}.”

      The free energy scale (100 kT) appears to be one order of magnitude too large.

      As discussed in the text and quoted in response to comment 2, the exponential splitting nature of WEMD simulations (where the weights of individual walkers are split upon crossing each bin threshold) often leads to unrealistically high apparent energy barriers for rare events. This is not unexpected, and as discussed in the text, we consider that value to be a qualitative measure of the decreased probability of a DFG flip in the Abl1 mutants, not a direct measurement of energy barriers.
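The split/merge bookkeeping behind this can be sketched with a toy resampler (this is an illustrative sketch, not the WESTPA implementation used for WEMD): splitting halves a walker's weight, merging sums weights, and total probability in the bin is conserved, so rare transitions end up carried by walkers of vanishingly small weight.

```python
import random

def resample_bin(walkers, target=4):
    """Toy weighted-ensemble resampler for one bin: split/merge (state, weight)
    walkers to a target count while conserving total probability."""
    total = sum(w for _, w in walkers)
    out = list(walkers)
    # Split: duplicate the heaviest walker, halving its weight each time.
    while len(out) < target:
        out.sort(key=lambda sw: sw[1], reverse=True)
        s, w = out.pop(0)
        out += [(s, w / 2), (s, w / 2)]
    # Merge: combine the two lightest walkers; the survivor is chosen with
    # probability proportional to its weight and inherits the summed weight.
    while len(out) > target:
        out.sort(key=lambda sw: sw[1])
        (s1, w1), (s2, w2) = out.pop(0), out.pop(0)
        keep = s1 if random.random() < w1 / (w1 + w2) else s2
        out.append((keep, w1 + w2))
    assert abs(sum(w for _, w in out) - total) < 1e-12
    return out

# Repeated splitting across many bin crossings is what produces the very small
# walker weights (and hence very large apparent barriers) for rare transitions.
print(resample_bin([("DFG-in", 0.5), ("DFG-out", 0.25)], target=4))
```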

      Setting the DFG-Asp to the protonated state is not justified, because in the DFG-in state, the DFG-Asp is clearly deprotonated.

      According to previous publications, DFG-Asp is frequently protonated in the DFG-in state of Abl1 kinase. For instance, as quoted from Hanson, Chodera, et al., Cell Chem Bio (2019), “Consistent with previous simulations on the DFG-Asp-out/in interconversion of Abl kinase we only observe the DFG flip with protonated Asp747 (Shan et al., 2009). We showed previously that the pKa for the DFG-Asp in Abl is elevated at 6.5.”

      Finally, the authors should discuss their work in the context of the enormous progress made in theoretical studies and mechanistic understanding of the conformational landscape of protein kinases in the last two decades, particularly with regard to the DFG flip. and The study is not very rigorous. The major conclusions do not appear to be supported. The claim that it is the first unbiased simulation to observe DFG flip is not true. For example, Hanson, Chodera et al (Cell Chem Biol 2019), Paul, Roux et al (JCTC 2020), and Tsai, Shen et al (JACS 2019) have also observed the DFG flip.

      We thank the reviewer for pointing out these issues. We have revised the manuscript to better contextualize our claims within the limitations of the method and to acknowledge previous work by Hanson, Chodera et al., Paul, Roux et al., and Tsai, Shen et al.

      The updated excerpt is shown below:

      “Through our work, we have simulated an ensemble of DFG flip pathways in a wild-type kinase and its variants with atomistic resolution and without the use of biasing forces, also reporting the effects of inhibitor-resistant mutations in the broader context of kinase inactivation likelihood with such level of detail.”

      Reviewer #2:

      I appreciated the discussion of the strengths/weaknesses of weighted ensemble simulations. Am I correct that this method doesn't do anything to explicitly enhance sampling along orthogonal degrees of freedom? Maybe a point worth mentioning if so.

      Yes, this is correct. We added a sentence to the WEMD summary section of Results and Discussion discussing this.

      “As a supervised enhanced sampling method, WE employs progress coordinates (PCs) to track the time-dependent evolution of a system from one or more basis states towards a target state. Although weighted ensemble simulations are unbiased in the sense that no biasing forces are added over the course of the simulations, the selection of progress coordinates and the bin definitions can potentially bias the results towards specific pathways \cite{Zuckerman2017}. Additionally, traditional WEMD simulations do not explicitly enhance sampling along orthogonal degrees of freedom (those not captured by the progress coordinates). In practice, this means that insufficient PC definitions can lead to poor sampling.”

      I don't understand Figure 3C. Could the authors instead show structures corresponding to each of the states in 3B, and maybe also a representative structure for pathways 1 and 2?

      We have remade Figure 3. We removed panel 3B and the accompanying discussion, as upon review we were not confident in the significance of the LPATH results where they pertain to the probability of intermediate states. We replaced 3B with a summary of pathways 1 and 2 with regard to the Phe382 flip (which is the most contrasting difference).

      Why introduce S1 and DFG-inter? And why suppose that DFG-inter is what corresponds to the excited state seen by NMR?

      As a consequence of dropping the LPATH analysis, we also removed mentions of S1, as further analysis made it hard to distinguish from DFG-in. For DFG-inter, we mention that conformation because (a) it is shared by both flipping mechanisms that we have found, and (b) it seems relevant for pharmacology, as it has been observed in other kinases such as Aurora B (PDB 2WTV), and Asp381 flipping before Phe382 creates space in the orthosteric kinase pocket that could potentially be targeted by an inhibitor.

      It would be nice to have error bars on the populations reported in Figure 3.

      Agreed; upon review we decided to drop the populations, as we were not confident in the significance of the LPATH results where they pertain to the probability of intermediate states.

      I'm confused by the attempt to relate the relative probabilities of states to the 32 kcal/mol barrier previously reported between the states. The barrier height should be related to the probability of a transition. The DFG-out state could be equiprobable with the DFG-in state and still have a 32 kcal/mol barrier separating them.

      Thanks for the correction, we agree with the reviewer and have amended the discussion to reflect this. Since we are starting our simulations in the DFG-in state, the probability of walkers arriving in DFG-out in our steady state WEMD simulations should (assuming proper sampling) represent the probability of the transition. We incorrectly associated the probability of the DFG-out state itself with the probability of the transition.

      How do the relative probabilities of the DFG-in/out states compare to experiments, like NMR?

      Previous NMR work has found the population of the apo DFG-in state (PDB 6XR6) in solution to be around 88% for wild-type Abl1, and 6% for DFG-out (PDB 6XR7). The remaining 6% represents a post-DFG-out state (PDB 6XRG) in which the activation loop has folded in near the hinge, which we did not simulate due to the computational cost associated with it. The same study reports the barrier height from DFG-in to DFG-out to be estimated at around 30 kcal/mol.

      (1) Conformational states dynamically populated by a kinase determine its function. Tao Xie et al., Science 370, eabc2754 (2020). DOI:10.1126/science.abc2754

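For context, the populations quoted above correspond to only a modest free energy difference between the end states, which is exactly the reviewer's distinction between state populations and the ~30 kcal/mol barrier. A quick Boltzmann-inversion estimate (assuming T = 298 K; illustrative arithmetic only, not from either manuscript):

```python
import math

# Convert the reported apo-state populations (DFG-in ~88%, DFG-out ~6%) into a
# free energy difference via Delta G = -RT * ln(p_out / p_in).
R = 0.0019872  # gas constant, kcal/(mol*K)
T = 298.0      # assumed temperature, K

dG = -R * T * math.log(0.06 / 0.88)
print(f"Delta G (in -> out) ~ {dG:.2f} kcal/mol")  # ~1.6 kcal/mol
```

A difference of roughly 1.6 kcal/mol between the end states coexists with a ~30 kcal/mol barrier along the path, illustrating why populations alone say little about transition rates.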

      “Do the staggered and concerted DFG flip pathways mentioned correspond to pathways 1 and 2 in Figure 3B, or is that a concept from previous literature?”

      Yes, we have amended Figure 3B to be clearer. In previous literature both pathways have been observed [1], although not specifically defined.

      Source: Computational Study of the “DFG-Flip” Conformational Transition in c-Abl and c-Src Tyrosine Kinases. Yilin Meng, Yen-lin Lin, and Benoît Roux The Journal of Physical Chemistry B 2015 119 (4), 1443-1456 DOI: 10.1021/jp511792a

    1. The resulting “Customer Service for Anything” hotline promises callers that a Zappos staffer will give them Netflix recommendations, check the inventory levels at their local grocery store, or just be a friendly voice for someone feeling isolated in a lockdown situation.9

      This is another fascinating model from Zappos. I had no idea they were so creative in their business model and the way they operate. I think this makes them stand out, but I am only now learning about it, so apparently it happened before things could go viral. I think if this were happening right now, with TikTok and other social media, people would be all over a Zappos customer service representative who can help you with shoes and also tell you what to watch on Netflix.

    1. Reviewer #1 (Public review):

      Domínguez-Rodrigo and colleagues make a moderately convincing case for habitual elephant butchery by Early Pleistocene hominins at Olduvai Gorge (Tanzania), ca. 1.8-1.7 million years ago. They present this at the site scale (the EAK locality, which they excavated), as well as across the penecontemporaneous landscape, analyzing a series of findspots that contain stone tools and large-mammal bones. The latter are primarily elephants, but giraffids and bovids were also butchered in a few localities. The authors claim that this is the earliest well-documented evidence for elephant butchery; doing so requires debunking other purported cases of elephant butchery in the literature, or in one case, reinterpreting elephant bone manipulation as being nutritional (fracturing to obtain marrow) rather than technological (to make bone tools). The authors' critical discussion of these cases may not be consensual, but it surely advances the scientific discourse. The authors conclude by suggesting that an evolutionary threshold was achieved at ca. 1.8 ma, whereby regular elephant consumption rich in fats and perhaps food surplus, more advanced extractive technology (the Acheulian toolkit), and larger human group size had coincided.

      The fieldwork and spatial statistics methods are presented in detail and are solid and helpful, especially the excellent description (all too rare in zooarchaeology papers) of bone conservation and preservation procedures. However, the methods of the zooarchaeological and taphonomic analysis - the core of the study - are peculiarly missing. Some of these are explained along the manuscript, but not in a standard Methods paragraph with suitable references and an explicit account of how the authors recorded bone-surface modifications and the mode of bone fragmentation. This seems more of a technical omission that can be easily fixed than a true shortcoming of the study. The results are detailed and clearly presented.

      By and large, the authors achieved their aims, showcasing recurring elephant butchery in 1.8-1.7 million-year-old archaeological contexts. Nevertheless, some ambiguity surrounds the evolutionary significance part. The authors emphasize the temporal and spatial correlation of (1) elephant butchery, (2) Acheulian toolkits, and (3) larger sites, but do not actually discuss how these elements may be causally related. Is it not possible that larger group size or the adoption of Acheulian technology have nothing to do with megafaunal exploitation? Alternative hypotheses exist, and at least, the authors should try to defend the causation, not just put forward the correlation. The only exception is briefly mentioning food surplus as a "significant advantage", but how exactly, in the absence of food-preservation technologies? Moreover, in a landscape full of aggressive scavengers, such excess carcass parts may become a death trap for hominins, not an advantage. I do think that demonstrating habitual butchery bears very significant implications for human evolution, but more effort should be invested in explaining how this might have worked.

      Overall, this is an interesting manuscript of broad interest that presents original data and interpretations from the Early Pleistocene archaeology of Olduvai Gorge. These observations and the authors' critical review of previously published evidence are an important contribution that will form the basis for building models of Early Pleistocene hominin adaptation.

    2. Reviewer #2 (Public review):

      The authors argue that the Emiliano Aguirre Korongo (EAK) assemblage from the base of Bed II at Olduvai Gorge shows systematic exploitation of elephants by hominins about 1.78 million years ago. They describe it as the earliest clear case of proboscidean butchery at Olduvai and link it to a larger behavioral shift from the Oldowan to the Acheulean.

      The paper includes detailed faunal and spatial data. The excavation and mapping methods appear to be careful, and the figures and tables effectively document the assemblage. The data presentation is strong, but the behavioral interpretation is not supported by the evidence.

      The claim for butchery is based mainly on the presence of green-bone fractures and the proximity of bones and stone artifacts. These observations do not prove human activity. Fractures of this kind can form naturally when bones break while still fresh, and spatial overlap can result from post-depositional processes. The studies cited to support these points, including work by Haynes and colleagues, explain that such traces alone are not diagnostic of butchery, but this paper presents them as if they were.

      The spatial analyses are technically correct, but their interpretation extends beyond what they can demonstrate. Clustering indicates proximity, not behavior. The claim that statistical results demonstrate a functional link between bones and artifacts is not justified. Other studies that use these methods combine them with direct modification evidence, which is lacking in this case.

      The discussion treats different bodies of evidence unevenly. Well-documented cut-marked specimens from Nyayanga and other sites are described as uncertain, while less direct evidence at EAK is treated as decisive. This selective approach weakens the argument and creates inconsistency in how evidence is judged.

      The broader evolutionary conclusions are not supported by the data. The paper presents EAK as marking the start of systematic megafaunal exploitation, but the evidence does not show this. The assemblage is described well, but the behavioral and evolutionary interpretations extend far beyond what can be demonstrated.

    1. the negative impacts gradually emerged and intensi-fied over subsequent months.

      Over time, the AI has hurt the efficiency and the work being done in the hospital. This could be due to the change in habit, but is most likely due to the addition of AI specifically.

    2. For example, AI can scan hundreds of medical images and identify potential disease risks within minutes (Ardila et al., 2019), provid-ing recommendations that are comparable to those of experts (McKinney et al., 2020), thereby directly improv-ing the overall efficiency of the healthcare system.

      AI is incredibly powerful and intelligent when applied properly, able to find potential solutions for diseases without cures. That could be really useful, but it is also concerning that it can do something like that so easily.

    3. For instance, AI demonstrates dermato-logical diagnostic accuracy through image analysis that matches or exceeds board-certified dermatologists (Leachman & Merlino, 2017).

      AI can be greater and smarter than humans, but with the drawback of making mistakes that it must first learn from in order not to repeat them.

    1. Reviewer #1 (Public review):

      Summary:

      The study by Castro et al. presents an interesting blueprint for designing influenza immunogens that can induce cross-group influenza-specific antibodies. The authors used a structure-based design to transplant receptor binding site (RBS) residues from H5 and H3 into an H1 scaffold. In addition, they assembled the transplanted structures as heterotrimers. They characterized the constructs structurally and used them to immunize mice to define ELISA binding and neutralizing antibodies (Abs) to different influenza strains.

      Strengths and Weaknesses:

      The authors succeeded in generating the different, correctly folded immunogens. The heterotrimers would benefit from more characterization: it remains unclear whether they are even formed or whether the sample is a mix of homotrimers and whether some combinations are more likely than others. While some of these questions are complex to answer, authors should at least confirm the presence of heterotrimers.

      While all constructs were able to elicit H1-specific Abs, different immunogens displayed differential ability to induce a response to the transplanted epitope. While H3-transplant resulted in H3-specific Abs, this was not the case for H5 or the heterotrimers. The importance of the finding is that authors are able to elicit polyclonal Abs neutralizing group 1 and group 2 influenza viruses with a single immunogen. A more in-depth discussion on why the H3-transplant but not the H5-transplant resulted in those specific Abs could be beneficial.

      Overall, the work is a proof of concept that H1-H3 chimeric proteins can be produced and an important first step towards computational vaccines, inducing Abs to multiple groups.

    2. Reviewer #2 (Public review):

      Summary:

      The manuscript from Castro et al describes the engineering of influenza hemagglutinin H1-based head domains that display receptor-binding-site residues from H5 and H3 HAs. The initial head-only chimeras were able to bind to FluA20, which recognizes the trimer interface, but did not bind well to H5 or H3-specific antibodies. Furthermore, these constructs were not particularly stable in solution as assessed by low melting temperatures. Crystal structures of each chimeric head in complex with FluA20 were obtained, demonstrating that the constructs could adopt the intended conformation upon stabilization with FluA20. The authors next placed the chimeric heads onto an H1 stalk to create homotrimeric HA ectodomains, as well as a heterotrimeric HA ectodomain. The homotrimeric chimeric HAs were better behaved in solution, and H3- and H5-specific antibodies bound to these trimers with affinities that were only about 10-fold weaker compared to their respective wildtype HAs. The heterotrimeric chimeric HA showed transient stability in solution and could bind more weakly to the H3- and H5-specific antibodies. Mice immunized with these trimers elicited cross-reactive binding antibodies, although the cross-neutralizing titers were less robust. The most positive result was that the H1H3 trimer was able to elicit sera that neutralized both H1 and H3 viruses.

      Strengths:

      The manuscript is very well-written with clear figures. The biophysical and structural characterizations of the antigen were performed to a high standard. The engineering approach is novel, and the results should provide a basis for further iteration and improvement of RBS transplantation.

      Weaknesses:

      The main limitation of the study is that there are no statistical tests performed for the immunogenicity results shown in Figures 4 and 5. It is therefore unknown whether the differences observed are statistically significant. Additionally, fits of the BLI data in Figure 3 to the binding model used to determine the binding constants should be shown.

    1. Reviewer #2 (Public review):

      The unstructured α- and β-tubulin C-terminal tails (CTTs), which differ between tubulin isoforms, extend from the surface of the microtubule, are post-translationally modified, and help regulate the function of MAPs and motors. Their dynamics and extent of interactions with the microtubule lattice are not well understood. Hotta et al. explore this using a set of three distinct probes that bind to the CTTs of tyrosinated (native) α-tubulin. Under normal cellular conditions, these probes associate with microtubules only to a limited extent, but this binding can be enhanced by various manipulations thought to alter the tubulin lattice conformation (expanded or compact). These include small-molecule treatment (Taxol), changes in nucleotide state, and the binding of microtubule-associated proteins and motors. Overall, the authors conclude that microtubule lattice "expanders" promote probe binding, suggesting that the CTT is generally more accessible under these conditions. Consistent with this, detyrosination is enhanced. Mechanistically, molecular dynamics simulations indicate that the CTT may interact with the microtubule lattice at several sites, and that these interactions are affected by the tubulin nucleotide state.

      Strengths:

      Key strengths of the work include the use of three distinct probes that yield broadly consistent findings, and a wide variety of experimental manipulations (drugs, motors, MAPs) that collectively support the authors' conclusions, alongside a careful quantitative approach.

      Weaknesses:

      The challenges of studying the dynamics of a short, intrinsically disordered protein region within the complex environment of the cellular microtubule lattice, amid numerous other binders and regulators, should not be understated. While it is very plausible that the probes report on CTT accessibility as proposed, the possibility of confounding factors (e.g., effects on MAP or motor binding) cannot be ruled out. Sensitivity to the expression level clearly introduces additional complications. Likewise, for each individual "expander" or "compactor" manipulation, one must consider indirect consequences (e.g., masking of binding sites) in addition to direct effects on the lattice; however, this risk is mitigated by the collective observations all pointing in the same direction.

      The discussion does a good job of placing the findings in context and acknowledging relevant caveats and limitations. Overall, this study introduces an interesting and provocative concept, well supported by experimental data, and provides a strong foundation for future work. This will be a valuable contribution to the field.

    2. Reviewer #3 (Public review):

      Summary:

      In this study, the authors investigate how the structural state of the microtubule lattice influences the accessibility of the α-tubulin C-terminal tail (CTT). By developing and applying new biosensors, they reveal that the tyrosinated CTT is largely inaccessible under normal conditions but becomes more accessible upon changes to the tubulin conformational state induced by taxol treatment, MAP expression, or GTP-hydrolysis-deficient tubulin. The combination of live imaging, biochemical assays, and simulations suggests that the lattice conformation regulates the exposure of the CTT, providing a potential mechanism for modulating interactions with microtubule-associated proteins. The work addresses a highly topical question in the microtubule field and proposes a new conceptual link between lattice spacing and tail accessibility for tubulin post-translational modification.

      Strengths:

      (1) The study targets a highly relevant and emerging topic-the structural plasticity of the microtubule lattice and its regulatory implications.

      (2) The biosensor design represents a methodological advance, enabling direct visualization of CTT accessibility in living cells.

      (3) Integration of imaging, biochemical assays, and simulations provides a multi-scale perspective on lattice regulation.

      (4) The conceptual framework proposed lattice conformation as a determinant of post-translational modification accessibility is novel and potentially impactful for understanding microtubule regulation.

      Weaknesses:

      There are a number of weaknesses in the paper, many of which can be addressed textually. Some of the supporting evidence is preliminary and would benefit from additional experimental validation and clearer presentation before the conclusions can be considered fully supported.

      In particular, the authors should directly test in vitro whether Taxol addition can induce lattice exchange (see comments below).

    1. Abstract

      Phasing, the assignment of alleles to their respective parental chromosomes, is fundamental to studying genetic variation and identifying disease-causing variants. Traditional approaches, including statistical, pedigree-based, and read-based phasing, face challenges such as limited accuracy for rare variants, reliance on external reference panels, and constraints in regions with sparse genetic variation.

      To address these limitations, we developed TinkerHap, a novel and unique phasing algorithm that integrates a read-based phaser, based on a pairwise distance-based unsupervised classification, with external phased data, such as statistical or pedigree phasing. We evaluated TinkerHap’s performance against other phasing algorithms using 1,040 parent-offspring trios from the UK Biobank (Illumina short-reads) and GIAB Ashkenazi trio (PacBio long-reads). TinkerHap’s read-based phaser alone achieved higher phasing accuracies than all other algorithms with 95.1% for short-reads (second best: 94.8%) and 97.5% for long-reads (second best: 95.5%). Its hybrid approach further enhanced short-read performance to 96.3% accuracy and was able to phase 99.5% of all heterozygous sites. TinkerHap also extended haplotype block sizes to a median of 79,449 base-pairs for long-reads (second best: 68,303 bp) and demonstrated higher accuracy for both SNPs and indels. This combination of a robust read-based algorithm and hybrid strategy makes TinkerHap a uniquely powerful tool for genomic analyses.
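      The "pairwise distance-based unsupervised classification" mentioned in the abstract is not spelled out here; as a rough illustration of the general idea only (the greedy seeding and all names below are hypothetical, not TinkerHap's actual algorithm), reads covering shared heterozygous sites can be split into two haplotype groups by their pairwise mismatch distance:

```python
# Illustrative sketch of distance-based read phasing (NOT TinkerHap's
# actual implementation). Each read is a dict mapping heterozygous-site
# positions to observed alleles (0 = ref, 1 = alt).

def read_distance(a, b):
    """Mismatch fraction over sites covered by both reads (None if disjoint)."""
    shared = a.keys() & b.keys()
    if not shared:
        return None
    return sum(a[p] != b[p] for p in shared) / len(shared)

def phase_reads(reads):
    """Greedy two-cluster partition: seed with the first read, then place
    each read on the haplotype its alleles are closer to."""
    hap0, hap1 = [reads[0]], []
    for r in reads[1:]:
        d0 = read_distance(r, hap0[0])
        if d0 is None:
            hap1.append(r)          # no overlap with the seed read
        elif d0 <= 0.5:
            hap0.append(r)
        else:
            hap1.append(r)
    return hap0, hap1

reads = [
    {100: 0, 200: 0, 300: 0},   # haplotype A
    {200: 0, 300: 0, 400: 0},   # haplotype A
    {100: 1, 200: 1, 300: 1},   # haplotype B
    {300: 1, 400: 1},           # haplotype B
]
h0, h1 = phase_reads(reads)     # two reads per haplotype group
```

      A production phaser would additionally weight distances by base quality and merge clusters transitively rather than against a single seed.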

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf138), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 3: Julia Markowski

      In the presented Technical Note "TinkerHap - A Novel Read-Based Phasing Algorithm with Integrated Multi-Method Support for Enhanced Accuracy" by Hartmann et al., the authors introduce TinkerHap, a new hybrid phasing tool that primarily relies on read-based phasing for both short- and long-read sequencing data, but can additionally incorporate externally phased haplotypes, enabling it to build upon phase information derived from existing statistical or pedigree-based phasing approaches. This hybrid approach addresses an important and timely challenge in the field: integrating the complementary strengths of different phasing strategies to improve the accuracy and span of haplotype blocks, particularly for rare variants, or in variant-sparse genomic regions. The authors clearly articulate the limitations of existing approaches and present their solution in a manner that is both elegant and accessible. Design features such as multiple output formats and compatibility with third-party tools demonstrate a practical awareness of user needs. The authors evaluate TinkerHap using both short-read and long-read state-of-the-art benchmarking datasets, and compare its performance against commonly used phasing tools, demonstrating improvements in both phasing accuracy and haplotype block lengths. Overall, this is a well-conceived and thoughtfully implemented contribution to the phasing community.

      While the manuscript is overall well written, there are a few areas where additional clarification or extension would improve its impact. I recommend the following revisions to help clarify key aspects of the method, enhance the generalizability of the evaluation, and align the manuscript more closely with journal guidelines.

      Major Comments

      * (1) Limited scope of benchmarking: The evaluation on the highly polymorphic MHC class II region is appropriate for highlighting TinkerHap's strengths in phasing rare variants in variable regions. However, the current evaluation of short-read-based phasing is based on a ∼700 kb region selected for its high variant density, which limits the generalizability of the findings. Since the manuscript emphasizes improved performance in regions with sparse genetic variation, it would strengthen the work to include chromosome-wide or genome-wide benchmarks, particularly on short-read data. This would also provide a more balanced comparison with tools like SHAPEIT5, which predictably underperform in the MHC class II region due to their reliance on population allele frequencies and linkage disequilibrium patterns that are less effective for rare or private variants.
      * (2) Coverage and scalability: The manuscript describes TinkerHap as scalable, but since the algorithm relies on overlapping reads, it is unclear how its performance varies with sequencing depth. Including a figure or supplementary analysis showing phasing accuracy, runtime, and memory usage at different coverage levels (particularly for short-read data) would help support this claim and guide users on appropriate coverage requirements.
      * (3) Clarify algorithmic novelty: It would be helpful to elaborate on how TinkerHap's read-based phasing algorithm differs from existing approaches such as the weighted Minimum Error Correction (wMEC) framework implemented in WhatsHap. For example, what specifically enables TinkerHap's read-based mode to produce longer haplotype blocks than other read-based tools?
      * (4) Data description: A brief characterization of the input datasets, such as the sequencing depth and the number and average genomic distance of heterozygous variants in the MHC class II region and the GIAB trio data, would provide important context for interpreting the reported phasing accuracy and haplotype block lengths.
      * (5) Manuscript structure: Since the algorithm itself is the core novel contribution, it should be part of the results section, as should the description of the evaluation currently placed in the discussion. According to GigaScience's Technical Note guidelines, the method section should be reserved for "any additional methods used in the manuscript, that are not part of the new work being described in the manuscript."

      Minor Comments

      * (a) Novelty of hybrid approach: While TinkerHap's ability to integrate externally phased haplotypes is valuable, similar functionality exists in other tools; for example, SHAPEIT can accept pre-phased scaffolds (including those generated from read-based phasing), and WhatsHap supports trio-based phasing. Consider refining the language to more precisely describe what is uniquely implemented in TinkerHap's hybrid strategy. It would be interesting to see how the presented results of using SHAPEIT's phasing output as input for TinkerHap compare to an approach of feeding TinkerHap's read-based phasing results into SHAPEIT.
      * (b) Reference bias claim: The introduction states that read-based phasing is "independent of reference bias." While this approach is generally less susceptible to reference bias than statistical phasing, bias can still arise during the read alignment stage, potentially affecting downstream phasing. This point should be clarified.
      * (c) GIAB datasets: The abstract mentions only the GIAB Ashkenazi trio, but later the Chinese trio is included in the analysis as well. Please clarify whether results are averaged across the two datasets.
      * (d) Tool version citation: Please clarify in the text that the comparison was made using SHAPEIT5, not an earlier version.

      Recommendation: Minor Revision With additional clarification on generalizability and coverage sensitivity, this manuscript will make a valuable contribution to the field.
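      For context on the accuracy figures debated in this review: trio benchmarks of this kind typically score phasing by switch error rate against the pedigree-derived truth. A minimal sketch of that standard metric (illustrative only; not necessarily the exact scoring used in the manuscript):

```python
def switch_error_rate(truth, test):
    """Fraction of consecutive heterozygous-site pairs whose relative
    phase disagrees between truth and test. Each argument is a string
    of 0/1 alleles along one haplotype at the same het sites."""
    assert len(truth) == len(test) and len(truth) >= 2
    switches = sum(
        (truth[i] == test[i]) != (truth[i + 1] == test[i + 1])
        for i in range(len(truth) - 1)
    )
    return switches / (len(truth) - 1)

# One phase switch between sites 2 and 3, out of 4 adjacent pairs -> 0.25
rate = switch_error_rate("00000", "00111")
```

      Counting only relative phase between adjacent sites makes the metric invariant to which parental haplotype is labeled "1", which is why trio comparisons report switch errors rather than raw allele agreement.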

    2. Abstract

      Phasing, the assignment of alleles to their respective parental chromosomes, is fundamental to studying genetic variation and identifying disease-causing variants. Traditional approaches, including statistical, pedigree-based, and read-based phasing, face challenges such as limited accuracy for rare variants, reliance on external reference panels, and constraints in regions with sparse genetic variation.

      To address these limitations, we developed TinkerHap, a novel and unique phasing algorithm that integrates a read-based phaser, based on a pairwise distance-based unsupervised classification, with external phased data, such as statistical or pedigree phasing. We evaluated TinkerHap’s performance against other phasing algorithms using 1,040 parent-offspring trios from the UK Biobank (Illumina short-reads) and GIAB Ashkenazi trio (PacBio long-reads). TinkerHap’s read-based phaser alone achieved higher phasing accuracies than all other algorithms with 95.1% for short-reads (second best: 94.8%) and 97.5% for long-reads (second best: 95.5%). Its hybrid approach further enhanced short-read performance to 96.3% accuracy and was able to phase 99.5% of all heterozygous sites. TinkerHap also extended haplotype block sizes to a median of 79,449 base-pairs for long-reads (second best: 68,303 bp) and demonstrated higher accuracy for both SNPs and indels. This combination of a robust read-based algorithm and hybrid strategy makes TinkerHap a uniquely powerful tool for genomic analyses.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf138), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Yilei Fu

      TinkerHap is a read-based phasing algorithm designed to accurately assign alleles to parental haplotypes using sequencing reads.

      General comments:

      1. The manuscript would greatly benefit from the inclusion of a flowchart or schematic overview of the TinkerHap algorithm. Given that the method incorporates multiple components (read-based phasing, pairwise distance-based unsupervised classification, and optional integration with statistical phasing tools like ShapeIT), a visual diagram would help readers grasp the workflow more intuitively.

      Major comments:

      1. The authors are missing experiments for long-read-based phasing. How does TinkerHap perform with ShapeIT on PacBio long-reads? I would suggest the authors use the same phasing method classes as in their short-read analysis: TinkerHap+ShapeIT; TinkerHap; WhatsHap; HapCUT2; ShapeIT. Also, I believe ShapeIT is capable of taking long-read SNV/INDEL calls as a VCF.
      2. Following up on point 1, the experimental design of this study is quite skewed. WhatsHap is not suitable for short-read sequencing data, and it does not make sense to apply it to short-read data.
      3. I would caution the authors to read and potentially compare with SAPPHIRE (https://doi.org/10.1371/journal.pgen.1011092). This is a method developed by the ShapeIT team for incorporating long-read sequencing data into ShapeIT.
      4. To better justify the hybrid strategy, I recommend adding an analysis of sites where TinkerHap and ShapeIT disagree. Are these differences due to reference bias, read coverage, variant type, or true ambiguity? Such an evaluation would help users understand when to rely on the read-based output vs. ShapeIT, and enhance confidence in the merging strategy.

      Minor comments:

      1. I could see the versions of the software in the supplementary GitHub, but I think it is also important to include those in the manuscript; for example, ShapeIT versions 2-5 have quite different functions. The citation for ShapeIT in the manuscript is for ShapeIT 2, but the program that was used is ShapeIT 5.
      2. The benchmarking hardware information needs to be mentioned for the runtime comparison.
      3. "...a novel and unique phasing algorithm..." -> "...a novel phasing algorithm..."
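      The discordance analysis suggested in major comment 4 could start as simply as tallying heterozygous sites whose phase orientation flips between the two call sets. A minimal sketch (hypothetical input representation, with VCF parsing stubbed out as position-to-genotype dicts):

```python
# Illustrative sketch of a TinkerHap-vs-ShapeIT discordance tally
# (hypothetical helper, not part of either tool). Genotypes are phased
# VCF-style GT strings such as '0|1'.

def phase_disagreements(calls_a, calls_b):
    """Return positions that are heterozygous in both call sets but
    phased in opposite orientation."""
    flipped = {"0|1": "1|0", "1|0": "0|1"}
    return sorted(
        pos for pos, gt in calls_a.items()
        if gt in flipped and calls_b.get(pos) == flipped[gt]
    )

a = {100: "0|1", 200: "1|0", 300: "0|1", 400: "1|1"}
b = {100: "0|1", 200: "0|1", 300: "1|0", 400: "1|1"}
sites = phase_disagreements(a, b)   # positions 200 and 300 flip
```

      Note that orientation flips are only meaningful within a shared phase block; a real analysis would restrict comparisons to sites sharing a phase set (the VCF PS tag) and then stratify the discordant sites by coverage, variant type, and allele frequency as the reviewer suggests.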

    1. Reviewer #1 (Public review):

      The remodeling of macromolecular substrates by AAA+ proteins is an essential aspect of life at the molecular scale, and understanding conserved and divergent features of substrate recognition across the AAA+ protein family remains an ongoing area of research. AAA+ proteins are highly modular and typically combine N-terminal recognition domain(s) with ATPase domain(s) to recognize and unfold some macromolecular target, such as dsDNA or protein substrates. This can be coupled to activity by additional C-terminal domains that further modify the substrate, such as a protease domain that hydrolyzes the extended, unstructured protein chain that emerges from the ATPase domain during substrate processing.

      This work focuses on one such AAA+ protease, LONP1. LONP1 is an essential AAA+ protein involved in mitochondrial proteostasis, and disruption of its function in vivo has serious developmental consequences. This work explores the processing of two new mitochondrial protein substrates (StAR, TFAM) by LONP1 and presents new conformational states of LONP1 with closed configurations and no substrate threaded through the ATPase pores. The quality of the reconstructions and models is very good. Critically, one of these states (LONP1C3) has a completely occluded ATPase pore from the N-terminal side of the ATPase ring, where three of the six NTDs/CCDs interact tightly to form a C3-symmetric substructure preventing substrate ingress. The authors note several key interactions between amino acids forming these substructures, and perform ATPase assays on mutant LONP1 proteins to determine hydrolysis rates in the absence or presence of substrate. These patterns are recapitulated in casein disassembly assays as well. Based on these results, the authors note that the mutants have differential effects depending on the "foldedness" of the substrate, and surmise that disruption of the C3-symmetric substructure from the EM experiments is responsible for these effects - an intriguing idea. In addition to the C3 state, the authors observe additional intermediates which they place on the same conformational coordinate. One such structure is the LONP1C2 state with two splits, hinting at a conformational transition from LONP1C3 to the closed/active state.

      Taken together, these results form the basis of an interesting story. However, I feel that more experimentation and analysis are needed to address several key points, or that the conclusions should be toned down. First and foremost, I note that while the hypothesis that the LONP1C3 state is a critical step in recognizing substrate "foldedness" is an interesting one, the claim is made solely on the basis of biochemical experiments with mutant LONP1, and that there is no substrate density associated with LONP1C3. In the absence of substrate density and/or structural data for the mutants, this seems like a very strong claim. More generally, the manuscript invokes the conformational landscape of LONP1C3 in multiple instances, but no such landscape is presented to show how LONP1C3 and the other states are quantitatively linked. Finally, I note the prevalence of ADP-only active sites in these intermediates, and am concerned that this might be related to the depletion of ATP under the on-grid reaction conditions. The inclusion of an ATP regeneration system may be a useful way to ensure that ATP/ADP concentrations are more physiological and that excessive ADP will not bias the conformations of the ring systems.

      In summary, I believe this manuscript is exciting but would benefit from a paring back of claims, or the inclusion of some additional data to fill in some of the conceptual gaps outlined above.

    2. Reviewer #2 (Public review):

      This paper by Mindrebo et al. reveals multiple novel conformations of the human LONP1 protease. AAA+ proteases, like LONP1, are needed for maintaining proteostasis in cells and organelles. While structures of fully active (closed) and fully inactive (open) conformations of LONP1 are now established, the dynamics between these states and how changes in conformations may contribute to or be triggered by substrates and nucleotides are unclear. In this work, the authors characterize a novel C3-symmetric state of LONP1 bound to TFAM (a native substrate), suggesting that this C3-state is an intermediate in the open-to-closed cycle, and make mutations to test this model biochemically. Deeper inspection of their TFAM-bound LONP1 dataset reveals additional conformations, including a C2-symmetric and two asymmetric intermediates. All these conformations are synthesized by the authors to propose a model for how LONP1 transitions from an inactive OFF state to an active ENZ state. There are clear, interesting structural aspects to this work, revealing alternate conformations that shed light on the dynamics of LONP1. However, some of the conclusions extend well beyond the scope of the experiments shown, as discussed below.

      Overall, there are two major concerns with the work as written that, if addressed, would make the results more compelling. First, the order of events and the existence of intermediate states are inferred primarily from static structural snapshots and from fitting these structures to a possible mechanism. It would be ideal to have some biochemical or kinetic data supporting these steps and the existence of these intermediates. For example, the model is that the C3-state is an ADP-bound intermediate that blocks access and acts as a checkpoint for progression to the ENZ state of LONP1. The major support for this comes from a mutation (D449A) that fails to degrade TFAM as well as StAR or casein, which is taken as evidence that failure to form the C3 state reduces the ability to degrade more 'folded' substrates. A prediction of this model would be that destabilizing TFAM through mutation should improve D449A degradation. Ideally, other measures of conformational changes, such as FRET or HDX-MS, could be used to visualize this C3-state in unmutated LONP1 during the process of substrate engagement and degradation. At a minimum, given that ATP hydrolysis can serve as a proxy for forming the ENZ state and that different substrates should differentially promote formation of the C3-state, measuring ATP hydrolysis of wt LONP1 with different substrates would be informative.

      The second major comment is that the primary evidence for the importance of the C3 state is a mutation (D449A) that, based on the cryoEM structure, is incompatible with this conformation but should not affect any other state. A concern that arises is whether this mutation is doing more than simply destabilizing the C3 state and affecting substrate recognition/enzymatic activity in some other manner. To address this point, the authors could perform cryoEM characterization of the D449A mutant, which should show reduced or no presence of the C3-state, but still an intact ability to form the closed ENZ state.

    1. Abstract

      Background: Soil ecosystems have long been recognized as hotspots of microbial diversity, but most estimates of their complexity remain speculative, relying on limited data and extrapolation from shallow sequencing. Here, we revisit this question using one of the deepest metagenomic sequencing efforts to date, applying 148 Gbp of Nanopore long-read and 122 Gbp of Illumina short-read data to a single forest soil sample.

      Results: Our hybrid assembly reconstructed 837 metagenome-assembled genomes (MAGs), including 466 high- and medium-quality genomes, nearly all lacking close relatives among cultivated taxa. Rarefaction and k-mer analyses reveal that, even at this depth, we capture only a fraction of the extant diversity: nonparametric models project that over 10 Tbp would be required to approach saturation. These findings offer a quantitative, technology-enabled update to long-standing diversity estimates and demonstrate that conventional metagenomic sequencing efforts likely miss the majority of microbial and biosynthetic potential in soil. We further identify over 11,000 biosynthetic gene clusters (BGCs), >99% of which have no match in current databases, underscoring the breadth of unexplored metabolic capacity.

      Conclusions: Taken together, our results emphasize both the power and the present limitations of metagenomics in resolving natural microbial complexity, and they provide a new baseline for evaluating future advances in microbial genome recovery, taxonomic classification, and natural product discovery.
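      The "nonparametric models" invoked in the abstract are typically richness estimators in the Chao1 family, which extrapolate unseen diversity from the counts of taxa observed exactly once or twice. A minimal sketch of Chao1 (illustrative only; the authors' exact projection model is not specified here):

```python
# Chao1 nonparametric richness estimator (illustrative sketch).
from collections import Counter

def chao1(counts):
    """counts: per-taxon observation counts. Estimated total richness is
    S_obs + f1^2 / (2 * f2), where f1/f2 are the numbers of singleton
    and doubleton taxa; the bias-corrected form is used when f2 == 0."""
    freqs = Counter(counts)
    s_obs = len(counts)
    f1, f2 = freqs.get(1, 0), freqs.get(2, 0)
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2

# 4 observed taxa: two singletons and one doubleton imply ~2 unseen taxa.
est = chao1([1, 1, 2, 5])   # 4 + 2^2 / (2 * 1) = 6.0
```

      Intuitively, many singletons relative to doubletons signal that sampling is far from saturation, which is how deep-sequencing studies argue that even terabase-scale efforts undersample soil diversity.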

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf135), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Ameet Pinto

      The manuscript provides long-read mock community datasets from GridION and PromethION sequencing platforms along with draft genomes of mock community organisms sequenced on the Illumina Platform. The entire dataset is available for reuse by the research community and this is an extremely valuable resource that the authors have made available. While there are some analyses of the data included in the current manuscript, it is largely limited to summary statistics (which seems appropriate for a Data Note type manuscript) and some analyses of interest to the field (e.g., de novo metagenome assembly). It would have been helpful to have a more detailed evaluation of the de novo assembly and parameter optimization, but this may have been outside the scope of a Data Note type manuscript. I have some minor comments below to improve clarity of the manuscript.

      Minor comments: 1. Lines 28-29: I would suggest that the authors provide the citation (15) without the parenthetical statement, or with a revised version of the parenthetical statement.

      "DNA extraction protocol" section: 2. The last few lines were a little unclear. For instance, "45 ul (Even) and 225ul (Log) of the supernatant retained earlier…" was a bit confusing, possibly because the line "The standard was spun…before removing the supernatant and retaining." seems incomplete. I would suggest that the authors consider posting the entire protocol on protocols.io, as it is quite possible that other groups may want to reproduce the sequencing step for these mock community standards. This would be particularly helpful as the authors suggest that the protocol was modified to increase fragment length.

      "Illumina sequencing" section: 3. Suggest that the authors improve clarity in this section by re-structuring this paragraph. For instance, early in paragraph it is stated that the pooled library was sequenced on four lanes on Illumina HiSeq 1500, but later stated that the even community was sequenced on a MiSeq.

      "Nanopore sequencing metrics" in results: 4. Table 2, Figure 3a - please fix this to Figure 1a. 5. Figure 1B: The x-axis is "accuracy" while in this section Figure 1b is referred to as providing "quality scores". Please replace "quality scores" with "accuracy" for consistency. 6. Figure 1C: Please provide a legend mapping colors to "even" and "log". I realize this information is in Figure 1B, but it would be helpful for the reader. Finally, there is no significant trend in sequencing speed over time. Considering this, it would be easier to remove the Time component and just have a single panel with the GridION and PromethION sequencing speeds for both even and log communities; this would make it easier to compare the differences in sequencing speed visually.

      "Illumina sequencing metrics" in results: 7. Table 5 is mentioned before Tables 3 and 4. Please correct this.

      "Nanopore mapping statistics" in results: 8. For Figure 2, consider also providing a figure for the even community. 9. Further, it would be helpful to get clarity on where the data for Figure 2 comes from. Is this from mapping of long-reads to the mock community draft assemblies (I think so) or from the Kraken analyses?

      "Nanopore metagenome assemblies" in results: 1. It is unclear how the genome completeness was estimated. 2. The consensus accuracy data is provided for all assemblies combined. It would be helpful to discuss the accuracy of the assemblies as a function of the wtdbg2 parameters tested. There is some discussion of this in the "Discussion" section, but it would be helpful to lay this out clearly in the results, with an appropriate additional figure/table.

    2. Abstract

      Background: Soil ecosystems have long been recognized as hotspots of microbial diversity, but most estimates of their complexity remain speculative, relying on limited data and extrapolation from shallow sequencing. Here, we revisit this question using one of the deepest metagenomic sequencing efforts to date, applying 148 Gbp of Nanopore long-read and 122 Gbp of Illumina short-read data to a single forest soil sample.

      Results: Our hybrid assembly reconstructed 837 metagenome-assembled genomes (MAGs), including 466 high- and medium-quality genomes, nearly all lacking close relatives among cultivated taxa. Rarefaction and k-mer analyses reveal that, even at this depth, we capture only a fraction of the extant diversity: nonparametric models project that over 10 Tbp would be required to approach saturation. These findings offer a quantitative, technology-enabled update to long-standing diversity estimates and demonstrate that conventional metagenomic sequencing efforts likely miss the majority of microbial and biosynthetic potential in soil. We further identify over 11,000 biosynthetic gene clusters (BGCs), >99% of which have no match in current databases, underscoring the breadth of unexplored metabolic capacity.

      Conclusions: Taken together, our results emphasize both the power and the present limitations of metagenomics in resolving natural microbial complexity, and they provide a new baseline for evaluating future advances in microbial genome recovery, taxonomic classification, and natural product discovery.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf135), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Lachlan Coin

      This is a great data resource, and will be invaluable to the community for testing/developing approaches for metagenome assembly. The aims are well described. Aside from a few queries I have below, the conclusions are largely supported by data shown; the manuscript is well written, and there are no statistical tests presented.

      Major comments: It seems that species assignment was done in two ways: one using Kraken on the contigs (with a database of many bacterial/viral/fungal genomes), and one mapping the reads directly to the Illumina assemblies of the isolates in the mixture. It would be useful to be clearer in the results about which approach was used in reporting each result. E.g., the sentence "We identify the presence of all 10 microbial species in the community, for both even and log samples, in expected proportions (Figure 2)." presumably relates to the analysis just mapping to the draft Illumina assemblies?

      • Also, it seems a little surprising that there were no false positive identifications of species not present in the mixture. Is this because this analysis is based on mapping to the draft Illumina isolate assemblies only (see previous comment)? Or, if it is based on Kraken assignment of contigs, were perhaps repetitive and/or short contigs filtered out?
      • Could the authors present more statistics on the quality of the nanopore metagenomic assemblies, including the presence of misassemblies, any chimeric contigs, checkM completeness results; indel errors, mismatch errors, etc.
      • Also, can the authors confirm that the assemblies were done on the full nanopore dataset (rather than, for example, on each isolate separately after mapping the reads to each isolate draft illumina assembly).

      The authors write: "For the even community, using wtdbg2 with varying parameter choices, we were able to assemble seven of the bacteria into single contigs." However, this does not seem to be borne out by Figure 3: I could only see 4 species with at least one single-contig assembly. Perhaps the authors could spell out which species have a single-contig assembly?

      Minor Comments:

      • In the abstract, "even and log communities" should be "evenly-distributed and log-distributed communities" for clarity (the term is otherwise unclear to a casual reader of the abstract).

    1. Abstract

      Predicting essential genes is important for understanding the minimal genetic requirements of organisms, identifying disease-associated genes, and discovering potential drug targets. Wet-lab experiments for identifying essential genes are time-consuming and labor-intensive. Although various machine learning methods have been developed for essential gene prediction, both systematic testing with large collections of gene knockout data and rigorous benchmarking for efficient methods are very limited to date. Furthermore, current graph-based approaches require learning the entire gene interaction networks, leading to high computational costs, especially for large-scale networks. To address these issues, we propose EssSubgraph, an inductive representation learning method that integrates graph-structured network data with omics features for training graph neural networks. We used comprehensive lists of human essential genes distilled from the latest collection of knockout datasets for benchmarking. When applied to essential gene prediction with multiple types of biological networks, EssSubgraph achieved superior performance compared to existing graph-based and other models. The performance is more stable than other methods with respect to network structure and gene feature perturbations. Because of its inductive nature, EssSubgraph also enables predicting gene functions using dynamical networks with unseen nodes and it is scalable with respect to network sizes. Finally, EssSubgraph has better performance in cross-species essential gene prediction compared to other methods. Our results show that EssSubgraph effectively combines networks and omics data for accurate essential gene identification while maintaining computational efficiency. The source code and datasets used in this study are freely available at https://github.com/wenmm/EssSubgraph.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf136), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Ju Xiang

      This paper proposes an inductive graph neural network model, EssSubgraph, for prediction of mammalian essential genes by integrating protein-protein interaction (PPI) networks with multi-omics data. Experimental results demonstrate the performance of the method, with additional validation showing effective cross-species prediction and biological consistency of predicted essential genes through functional enrichment analysis. This work is interesting, but some questions need to be clarified before publication.

      (1) The literature review lacks discussion of inductive vs. transductive graph learning approaches. Expanding this background would better contextualize the model's technical contributions.

      (2) While PCA dimensions for expression features were optimized (Figure 2A-B), other key hyperparameters, such as sampling depth (K-hop), deserve similar systematic evaluation to ensure optimal configuration.

      (3) What is RuLu? How do the authors handle the issue of sample imbalance? Does CONCAT mean that two vectors are connected end-to-end to become one vector? If yes, does it mean that the number of rows of W is set to 1 in order to generate the final prediction output?

      (4) How is the sampling of nodes performed in EssSubgraph? The explanation of 'Subgraph' in the method name is not sufficient.

      (5) What are 'edge perturbation' and 'feature perturbations', and how are they performed? What is the performance of the algorithm when only the network structure is used, or only gene expression data? In other words, on top of the network, does adding gene expression data bring performance improvements, and vice versa?

      (6) The computational efficiency analysis focuses on memory usage but omits critical metrics like training time and scalability with respect to batch size or sampling strategies. Is it appropriate to directly compare 'memory efficiency and network scalability'? The same method may require different amounts of memory and computation time when using different encoding technologies.

      (7) Minor revisions:

      - "and can predict identities of genes which can then predict the identities of genes that were either included in the training network or are unseen nodes."
      - Lines 244-251, "We used the EssSubgraph model mentioned above." The logical relationship here needs to be optimized.
      - "The model is an inductive deep learning method that generates low-dimensional vector representations for nodes in graphs and can predict identities of genes which can then predict the identities of genes that were either included in the training network or are unseen nodes." It is not clear.
      - Suggest supplementing statistical data on 'high density'; in terms of existing networks, they generally may not be called high-density.
      - Placing the perturbation curves of different methods in the same figure would make it easier to compare their stability.
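      On the reviewer's question in (3): in GraphSAGE-style inductive models, CONCAT conventionally means joining the node's own embedding end-to-end with the aggregated neighbor embedding before applying the learned weight matrix. The following is a minimal illustrative sketch of that convention only; the function names are invented and this is not EssSubgraph's actual code.

```python
def mean_aggregate(neighbor_embs):
    """Element-wise mean of the sampled neighbors' embedding vectors."""
    n, d = len(neighbor_embs), len(neighbor_embs[0])
    return [sum(e[k] for e in neighbor_embs) / n for k in range(d)]

def concat_step(self_emb, neighbor_embs):
    """CONCAT: append the neighbor aggregate to the node's own vector,
    doubling the dimensionality fed into the weight matrix W."""
    return self_emb + mean_aggregate(neighbor_embs)

h = concat_step([1.0, 2.0], [[0.0, 4.0], [2.0, 0.0]])
print(h)  # [1.0, 2.0, 1.0, 2.0]: the 2-dim self vector followed by the 2-dim neighbor mean
```

      Under this reading, a final W with a single output row would indeed map the concatenated vector to one scalar prediction, which is what the reviewer asks the authors to confirm.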

    1. Reviewer #2 (Public review):

      Summary:

      This paper is an exciting follow-up to two recent publications in eLife: one from the same lab, reporting that slender forms can successfully infect tsetse flies (Schuster, S et al., 2021), and another independent study claiming the opposite (Ngoune, TMJ et al., 2025). Here, the authors address four criticisms raised against their original work: the influence of N-acetyl-glucosamine (NAG), the use of teneral and male flies, and whether slender forms bypass the stumpy stage before becoming procyclic forms.

      Strengths:

      We applaud the authors' efforts in undertaking these experiments and contributing to a better understanding of the T. brucei life cycle. The paper is well-written and the figures are clear.

      Weaknesses:

      We identified several major points that deserve attention.

      (1) What is a slender form? Slender-to-stumpy differentiation is a multi-step process, and most of these steps unfortunately lack molecular markers (Larcombe et al, 2023). In this paper, it is essential that the authors explicitly define slender forms. Which parameters were used? It is implicit that slender forms are replicative and GFP::PAD1-negative. Isn't it possible that some GFP::PAD1-negative cells were already transitioning toward stumpy forms, but not yet expressing the reporter? Transcriptomically, these would be early transitional cells that, upon exposure to "tsetse conditions" (in vitro or in vivo), could differentiate into PCF through an alternative pathway, potentially bypassing the stumpy stage (as suggested in Figure 4). Given the limited knowledge of early molecular signatures of differentiation, we cannot exclude the possibility that the slender forms used here included early differentiating cells. We suggest:

      1.1 Testing the commitment of slender forms (e.g., using the plating assay in Larcombe et al., 2023), assessing cell-cycle profile, and other parameters that define slender forms.

      1.2 In the Discussion, acknowledging the uncertainty of "what is a slender?" and being explicit about the parameters and assumptions.

      1.3 Clarifying in the Materials and Methods how cultures were maintained in the 3-4 days prior to tsetse infections, including daily cell densities. Ideally, provide information on GFP expression, cell cycle, and morphology. While this will not fully resolve the concern, it will allow future reinterpretation of the data when early molecular events are better understood.

      (2) Figure 1: This analysis lacks a positive control to confirm that NAG is working as expected. It would strengthen the paper if the authors showed that NAG improves stumpy infection. Once confirmed, the authors could discuss possible differences in the tsetse immune response to slender vs. stumpy forms to explain the absence of an effect on slender infections.

      (3) Figure 2. To conclude that teneral flies are less infected than non-teneral flies, data from Figures 1 and 2 must be directly comparable. Were these experiments performed simultaneously? Please clarify in the figure legends. Moreover, the non-teneral flies here are still relatively young (6-7 days old), limiting comparisons with Ngoune, TMJ et al. 2025, where flies were 2-3 weeks old.

      (4) Figure 3. The PCA plot (A) appears to suggest the opposite of the authors' interpretation: slender differentiation seems to proceed through a transcriptome closer to stumpy profiles. Plotting DEG numbers (panel C) is informative, but how were the paired conditions selected? In addition, plotting the number of DEGs between consecutive time points, within and between parasite types, is also necessary. There may also be better computational tools to assess temporal relationships. Finally, how does PAD1 transcript abundance change over time in both populations? It would also be important to depict the upregulation of procyclic-specific genes.

      (5) Could methylcellulose in the medium sensitize parasites to QS-signal, leading to more frequent and/or earlier differentiation, despite low densities? If so, cultures with vs. without methylcellulose might yield different proportions of early-differentiating (yet GFP-negative) parasites. This could explain discrepancies between the Engstler and Rotureau labs despite using the same strain. The field would benefit from reciprocal testing of culture conditions. Alternatively, the authors could compare infectivity and transcriptomes of their slender forms under three conditions: (i) in vitro with methylcellulose, (ii) in vitro without methylcellulose, and (iii) directly from mouse blood.

    1. Abstract: Since its inception in 2019, the Tree of Life programme at the Wellcome Sanger Institute has released high-quality, chromosomally-resolved reference genome assemblies for over 2000 species. Tree of Life has at its core multiple teams, each of which is responsible for key components of the ‘genome engine’. One of these teams is the Tree of Life core laboratory, which is responsible for processing tissues across a wide range of species into high quality, high molecular weight DNA and intact RNA, and preparing tissues for Hi-C. Here, we detail the different workflows we have developed to successfully process a wide variety of species, covering plants, fungi, chordates, protists, arthropods, meiofauna and other metazoa. We summarise our success rates and describe how to best apply and combine the suite of current protocols, which are all publicly available at protocols.io.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf119), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Lars Podsiadlowski

      The authors provide a profound overview of their aim to generate genome information for a wide range of species in the Tree of Life project. As a scientist with hands-on experience in genome sequencing, I greatly appreciated all the information here, especially the detail on the differences experienced with different taxa, as this is probably the most important lesson here: there is high variation, and strategies must be adapted to that. I am also happy that many of the approaches are available as detailed online protocols, which really helps a lot in practical work. The selected examples of size profiles also give a good impression of what differences can be expected, e.g. with different extraction methods applied to the same species. Although detailed, I think that the authors provide a lot of relevant information here and would not change that. I also did not spot any errors or flaws in the text.

      One thing that might be changed is the title. On first reading it, I expected to hear also about assembly strategies, as well as some comparisons and oddities of the yielded genomes. It is great to have the manuscript as it is, but I would like to see it better reflected in the title that the main focus here is on the wet-lab part, especially the extraction of good-quality DNA/RNA.

      I have some issues with the figures. Fig. 7: there is no mention in the legend of the y-axis scale; I assume from the text that it refers to gigabases? Figs. 8, 9, 11-15: it is a bit confusing until one realises the log scale of the numbers. I would prefer to see them not with a log scale, but in a similar way to Fig. 6, with percentages on display and an accompanying species number somewhere on the side. In the way it is shown now, the failed proportion looks very small and gives a wrong impression. Maybe also rethink the colors: I would prefer another color for "Pass ULI" that is more similar in tone to "Pass", because at the moment "Pass ULI" and "Fail" are similar in tone and brightness and appear as being opposed to the green "Pass", while the difference between "Fail" and the rest should be more pronounced in my view.

    2. Abstract: Since its inception in 2019, the Tree of Life programme at the Wellcome Sanger Institute has released high-quality, chromosomally-resolved reference genome assemblies for over 2000 species. Tree of Life has at its core multiple teams, each of which is responsible for key components of the ‘genome engine’. One of these teams is the Tree of Life core laboratory, which is responsible for processing tissues across a wide range of species into high quality, high molecular weight DNA and intact RNA, and preparing tissues for Hi-C. Here, we detail the different workflows we have developed to successfully process a wide variety of species, covering plants, fungi, chordates, protists, arthropods, meiofauna and other metazoa. We summarise our success rates and describe how to best apply and combine the suite of current protocols, which are all publicly available at protocols.io.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf119), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Yuan Deng

      The manuscript focuses on the entire experimental process involved in the generation of high-quality genomes and proposes a set of standardized and modular experimental protocols. The innovation of these protocols is that they can be flexibly combined according to different taxa, tissue types and sample quality, which greatly improves the flexibility and efficiency of the experiments and provides a reference experimental process for researchers in this field to follow. The manuscript also explores the specific challenges and solutions for different taxa in the experimental procedures of sample processing, DNA extraction, shearing, cleaning, Hi-C and RNA extraction, providing valuable guidance for future research. Meanwhile, the manuscript reviews the experimental protocols for the production of genome data for more than 2,000 species, which is in line with the journal's focus on biological big data. Therefore, I consider the subject matter and content of this work to be appropriate for publication in this journal. I only have some minor requests for revision:

      1. Sample processing: (1) Sampling of rare and endangered species: such a large-scale "Tree of Life" study is bound to involve some species for which it is difficult to obtain conventional tissues; the manuscript may therefore include a section on how to select suitable tissues for subsequent experiments, especially for rare species. And is it possible to provide a prioritized list for tissue selection based on the difficulty of extracting high-quality DNA? (2) Processing and extraction of unconventional tissues: accordingly, it is recommended to add content regarding sample processing and extraction procedures for unconventional tissues, e.g., any particular methods to improve the quality of DNA extraction. (3) Sample contamination is often overlooked yet critical: how can sample contamination be reduced in large-scale sample processing and other experimental steps? How can sample or experimental contamination be excluded from the data?

      2. Method limitations: while the manuscript mentions some challenges that may be encountered in processing samples from various taxa, there is little discussion of the limitations of the experimental methods. It is recommended to expand on these limitations, e.g., that some methods may not work well for certain types of samples, or that some steps may have factors affecting the accuracy of the results, so that readers can have a more comprehensive understanding of the scope of application and potential problems of each method.

      3. The manuscript is currently organized according to the experimental procedures, but some of the more closely related components could probably be consolidated to reduce redundant information and improve readability. The authors studied the experimental conditions for different taxa in long-read sequencing and Hi-C library preparation, but fail to emphasize their relevance in the introduction.

    1. Abstract: The workflow management system Nextflow, together with the nf-core community, builds an essential ecosystem in bioinformatics. However, ensuring the correctness and reliability of large and complex pipelines is challenging, since a unified and automated unit-style testing framework specific to Nextflow is still missing. To provide this crucial component to the community, we developed the testing framework nf-test. It introduces a modular approach that enables pipeline developers to test individual process blocks, workflow patterns and entire pipelines in isolation. nf-test is based on a similar syntax as Nextflow DSL 2 and provides unique features such as snapshot testing and smart testing to save resources by testing only changed modules. We show on different pipelines that these improvements minimize development time, reduce test execution time by up to 80% and enhance software quality by identifying bugs and issues early. Already adopted by dozens of pipelines, nf-test improves the robustness and reliability in pipeline development.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf130), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Katalin Ferenc

      1) General assessment of the work.

      It is a very nice addition to the scientific community, an important step towards standardizing the development and maintenance of software for bioinformatics pipelines. It is not a trivial task to adapt unit testing concepts to pipelines. nf-test has already been used by the community and has been in a feedback loop with the users. Thus, its usability has been constantly improving, both through the efforts of the developers and additional plugins from the user base, highlighting the ease of contribution to the nf-test software base. The text is well written and easy to follow. However, some concepts could be better described and discussed for the readers.

      2) Specific comments for revision:

      a) Major comments:

      - The authors should refer to pytest-workflow in the introduction, along with NFTest, as both are used for comparison.

      - Test coverage is helpful to identify which lines are vulnerable to changes. For the calculation of test coverage in nf-test, indirect tests are considered. Does this mean that if a single integration test is written, then all called modules are considered covered? Please clarify or argue why this is a good strategy.

      - An interesting idea in nf-test is to use snapshot testing for modules, workflows, and pipelines. As the authors mention, this has been used in web development. According to the cited reference, it is especially used for frontend code and has been noted as a quick but fragile way of testing. This is because snapshot testing does not provide insight into the correctness of the code, but only asserts that there was no change. It is beneficial that this kind of test checks for unexpected changes that unit tests might miss. In the "Code reduction through snapshot testing" section, the authors highlight cases when snapshot testing results in failed tests: 1) when there is a change in the code due to a bug, and 2) when default parameters are modified. We understand that snapshot testing in the context of pipeline development is useful in two scenarios. First, when the pipeline itself is being refactored, the output of each module should stay the same; in this case, snapshot testing is used to fix the output of the tools, and a failing test highlights that the Nextflow code wrapping the tools is incorrectly integrated (i.e., connected to each other). Second, pipeline/module versioning requires knowledge about changes in the underlying tools; in this case, snapshot testing helps because any failure in the tests flags a change. As there is no oracle, one would not know if a bug was introduced or fixed; however, from the pipeline development perspective, the only thing that matters is that there should be a new version. According to our understanding, in any other case a more traditional approach should be preferred, where there is an oracle that knows about expected file formats, content, or errors. Otherwise, there is a risk of adding many tests that unnecessarily fail, causing increased development time. Please add explicit discussion of these scenarios, or others based on your insights, highlighting when snapshot testing is applicable/appropriate during pipeline development. Please also add a summary of other types of tests (e.g., assertions about file or channel content, verification of tool execution given input data, and error handling checks) that can be run within the nf-test framework.

      b) Minor comments:

      - In the "Evaluation and Validation" section, the authors describe that they ran tests in nf-core/modules between GitHub versions. Please clarify that these modules were already covered by tests.
      - Table 4 is referenced in the Discussion section. It would be better to move the comparison between tools to the Results section.
      - On page 16, typo: "queuing system".
      - Figure 2 title typo: "nf-tet".
      - Figure 2: please add comments about the time cost of adding tests during development, as it is highlighted in the figure.
      - Page 22 typo: "savings areis calculated".
      - Abstract: "Build on…" should be "Built on…".
      - Shouldn't TM2, linked to M3, be TM3 in Figure 1?
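      The snapshot behavior the review discusses (a test that records an output fingerprint on first run and thereafter fails on any change, with no notion of correctness) can be sketched in a few lines of Python. This is a conceptual illustration only, not nf-test's actual implementation; the function names and the dict-based store are invented for the example.

```python
import hashlib
import json

def snapshot_hash(obj):
    """Serialize an output deterministically and fingerprint it."""
    return hashlib.md5(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def matches_snapshot(name, obj, store):
    """First run records the snapshot; later runs fail on any change.

    There is no oracle here: a mismatch only says "something changed",
    not whether a bug was introduced or fixed.
    """
    h = snapshot_hash(obj)
    if name not in store:
        store[name] = h       # first run: record the snapshot
        return True
    return store[name] == h  # later runs: compare against the record

store = {}
out = {"files": ["sample1.vcf", "sample2.vcf"], "tool_version": "1.0"}
assert matches_snapshot("module_x", out, store)      # recorded
assert matches_snapshot("module_x", out, store)      # unchanged: passes
out["tool_version"] = "1.1"                          # e.g. a tool upgrade
assert not matches_snapshot("module_x", out, store)  # any change: fails
```

      In nf-test the store is a snapshot file committed alongside the test rather than an in-memory dict, but the pass/fail logic the reviewers debate is the same: identical output passes, any change fails, and a human must decide whether the change was intended.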

    2. Abstract: The workflow management system Nextflow, together with the nf-core community, builds an essential ecosystem in bioinformatics. However, ensuring the correctness and reliability of large and complex pipelines is challenging, since a unified and automated unit-style testing framework specific to Nextflow is still missing. To provide this crucial component to the community, we developed the testing framework nf-test. It introduces a modular approach that enables pipeline developers to test individual process blocks, workflow patterns and entire pipelines in isolation. nf-test is based on a similar syntax as Nextflow DSL 2 and provides unique features such as snapshot testing and smart testing to save resources by testing only changed modules. We show on different pipelines that these improvements minimize development time, reduce test execution time by up to 80% and enhance software quality by identifying bugs and issues early. Already adopted by dozens of pipelines, nf-test improves the robustness and reliability in pipeline development.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf130), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Jose Espinosa-Carrasco

      The article presents nf-test, a new modular and automated testing framework designed specifically for Nextflow workflows, a widely used workflow management system in bioinformatics. nf-test aims to help developers improve the reliability and maintainability of complex Nextflow pipelines. The framework includes very useful features such as snapshot testing, which assesses the computational repeatability of the results produced by the execution of a pipeline or its components, and smart testing, which optimises computational resources by only executing tests on the parts of the pipeline that were modified, reducing overall run time. Notably, nf-test can be integrated into CI workflows and has already been adopted by the nf-core community, demonstrating its utility and maturity in real-world scenarios.
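      The "smart testing" idea summarised above, executing only the tests affected by a change, can be illustrated with a minimal, hypothetical sketch. The dependency map and file names below are invented, and this is not nf-test's actual selection logic.

```python
def select_tests(changed_files, test_deps):
    """Return only the tests whose dependency set touches a changed file."""
    changed = set(changed_files)
    return sorted(name for name, deps in test_deps.items() if deps & changed)

# Hypothetical mapping from test names to the pipeline files they cover.
test_deps = {
    "test_align":  {"modules/align.nf"},
    "test_call":   {"modules/call.nf", "lib/utils.nf"},
    "test_report": {"modules/report.nf"},
}

print(select_tests(["lib/utils.nf"], test_deps))  # ['test_call']
```

      The resource savings reported in the manuscript come from exactly this kind of pruning: untouched parts of the dependency graph contribute no test executions.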

      General comments:

      The manuscript could benefit from reordering some sections to follow a more consistent structure and by removing redundant explanations. I think it would be nice to include one limitation of nf-test, the fact that reproducing previous results does not necessarily imply biological correctness. This point is not entirely clear in the current version of the manuscript (see my comment below). Another aspect that could improve the manuscript is the inclusion of at least one reference or explanation of how nf-test can be applied outside nf-core pipelines, as all the provided examples are currently restricted to nf-core.

      Specific comments:

      On page 3, the sentence "Thus, maintenance requires substantial time and effort to manually verify that the pipeline continues to produce scientifically valid results" could be more precise. I would argue that identical results across versions do not guarantee scientific validity; they merely confirm consistency with previous outputs. True scientific validity requires comparison against a known ground truth or standard.

      On page 4, in the sentence "It is freely available, and extensive documentation is provided on the website", I think it would be nice to include the link to the documentation.

      In the "Evaluation and Validation" section (page 8), it would be helpful to briefly state the goal of each evaluated test, as is done with the nf-gwas example. You could include something similar for the nf-core/fetchngs and modules examples (e.g. to assess resource optimization through smart testing). Also, the paragraph references the "--related-tests" option, which could benefit from a short explanation of what it does. Lastly, the order in which the pipelines are presented in this section differs from the order in the Results, which makes the structure a bit confusing.

      The sections titled "Unit testing in nf-test", "Test case execution", "Smart testing and parallelization", "Snapshot testing", and "Extensions for bioinformatics" seem more appropriate for the Materials and Methods section, as they describe the design and functionality of nf-test rather than reporting actual results. Please ignore this comment if the current structure follows specific journal formatting requirements that I may not be aware of.

      The Snapshot testing discussion in the Results section feels somewhat repetitive with its earlier explanation. Consider combining both discussions or restructuring the content to reduce duplication.

      On page 11, the sentence "In these cases, MD5 sums cannot be used and validating the dynamic output content can be time-intensive" is not entirely clear to me, does it mean that it is time consuming to implement the test for this kind of files or that the validation of the files is time consuming?

      On page 12, the sentence "Second, we analyzed the last 500 commits..." is confusing because this is actually the third point in the "Evaluation and Validation" section, as mentioned before. Reordering would improve clarity.

      On page 14, the authors state "However, changes (b) and (c) lead to incorrect output results without breaking the pipeline. Thus, these are the worst-case scenarios for a pipeline developer." While this is mostly true, I would also add that a change in parameters may produce different, but not necessarily incorrect, results; some may even be more biologically meaningful. I suggest acknowledging this.

      Typos:

      In the abstract: "Build on a similar syntax as Nextflow DSL2" should be corrected to "Built on a similar syntax as Nextflow DSL2".

      In the legend of Figure 2 (page 19): "nf-tet" should be "nf-test".

      In the legend of Table 2: "Time savings areis calculated..." should be "Time savings are calculated..."

      Recommendation:

      Given the relevance and technical contributions of the manuscript, I recommend its publication after addressing the minor revisions summarized above.

    1. Abstract: Cryogenic electron microscopy (cryoEM) has revolutionized structural biology by enabling atomic-resolution visualization of biomacromolecules. To automate atomic model building from cryoEM maps, artificial intelligence (AI) methods have emerged as powerful tools. Although high-quality, task-specific datasets play a critical role in AI-based modeling, assembling such resources often requires considerable effort and domain expertise. We present CryoDataBot, an automated pipeline that addresses this gap. It streamlines data retrieval, preprocessing, and labeling, with fine-grained quality control and flexible customization, enabling efficient generation of robust datasets. CryoDataBot’s effectiveness is demonstrated through improved training efficiency in U-Net models and rapid, effective retraining of CryoREAD, a widely used RNA modeling tool. By simplifying the workflow and offering customizable quality control, CryoDataBot enables researchers to easily tailor dataset construction to the specific objectives of their models, while ensuring high data quality and reducing manual workload. This flexibility supports a wide range of applications in AI-driven structural biology.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf127), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 3: Nabin Giri

      The paper presents a flexible, integrated framework for filtering and generating customizable cryo-EM training datasets. It builds upon previously available strategies for preparing cryo-EM datasets for AI-based methods, extending them with a user-friendly interface that allows researchers to enter query parameters, interact directly with the Electron Microscopy Data Bank (EMDB), extract and parse relevant metadata, apply quality control measures, and retrieve associated structural data (cryo-EM maps and atomic models).

      While the manuscript improves upon Cryo2StructData and similar data pipelines used in ModelAngelo/DeepTracer, the innovation claim would be strengthened by a deeper technical comparison, for example quantifying the performance impact of each quality control step in isolation. Some filtering and preprocessing concepts (e.g., voxel resampling, redundancy handling) are not entirely new, so a more explicit discussion of how CryoDataBot's implementations differ from prior work, and why these differences matter, would improve the manuscript. I do not think it is challenging to change the resampling or grid-division parameters in the scripts provided in the Cryo2StructData GitHub repo or those available in the ModelAngelo GitHub repo.

      The benchmarking is mainly limited to ribosome datasets. While this choice is understandable for demonstration purposes, the generalizability to other macromolecules (e.g., membrane proteins, small complexes) is not shown. A small-scale test on a different class of structures (e.g., predicting protein C-alpha positions, backbone atom positions, or, more challengingly, amino acid types) could strengthen the claim of broad applicability. Since the technical innovation is limited, this would help improve the paper.

      The authors state that CryoDataBot ensures reproducibility and provides datasets for AI-method benchmarking. However, EMDB entries can be updated over time (e.g., through reprocessing, resolution improvements, model re-fitting, or correction of atomic coordinates). In my opinion, in the strict sense, reproducibility (producing identical datasets) depends on versioning of EMDB/PDB entries. Without version locking, CryoDataBot ensures procedural reproducibility but not data immutability. The manuscript should either explain how reproducibility is maintained (e.g., version control, archived snapshots) or clarify that reproducibility refers to the workflow, not necessarily the exact dataset content, unless versioned datasets are provided, as done in Cryo2StructData.

      Some other concerns:

      (1) The "Generating Structural Labels" section is missing technical details. Please provide more information on how the labels are generated, including labeling radius selection, and how any ambiguities encountered are resolved. A suggestion on how the user should determine the radius, and also the grid size (64^3 or other), would be beneficial.

      (2) On the adaptive density normalization, the manuscript states: "This method is more flexible and removes more noise than the fixed-threshold approaches commonly used in prior studies." What do noise and signal mean here? There is a separate body of AI-based work developed for reducing noise, such as DeepEMhancer and EMReady, to name a few. Is there any metric to support this claim?

      (3) The manuscript states: "To assess dataset redundancy, we analyzed structural similarity between entries based on InterPro (IPR) domain annotations." Is this a new approach introduced here, or an established practice? How does it compare with sequence-based similarity measures, or structure-based similarity such as Foldseek?

      (4) The statement "underscoring the dataset's superior quality and informativeness" is strong. Is it possible to provide more concrete, quantitative evidence to support this, ideally beyond the U-Net training metrics?

      (5) Are there cases where multiple PDB IDs exist for one cryo-EM density map? If so, how is a specific atomic model chosen?

    2. Abstract: Cryogenic electron microscopy (cryoEM) has revolutionized structural biology by enabling atomic-resolution visualization of biomacromolecules. To automate atomic model building from cryoEM maps, artificial intelligence (AI) methods have emerged as powerful tools. Although high-quality, task-specific datasets play a critical role in AI-based modeling, assembling such resources often requires considerable effort and domain expertise. We present CryoDataBot, an automated pipeline that addresses this gap. It streamlines data retrieval, preprocessing, and labeling, with fine-grained quality control and flexible customization, enabling efficient generation of robust datasets. CryoDataBot’s effectiveness is demonstrated through improved training efficiency in U-Net models and rapid, effective retraining of CryoREAD, a widely used RNA modeling tool. By simplifying the workflow and offering customizable quality control, CryoDataBot enables researchers to easily tailor dataset construction to the specific objectives of their models, while ensuring high data quality and reducing manual workload. This flexibility supports a wide range of applications in AI-driven structural biology.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf127), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Dong Si

      This paper discusses CryoDataBot, which creates cryoEM datasets for training, with the ability to filter based on redundancy, map-model fitness (MMF) and other user-defined parameters. Here are some comments:

      • The data labeling just has helix, sheet, coil, and RNA. The labeling should also consider DNA and other structures.

      • The introduction of a Volume Overlap Fraction (VOF) score to validate map-model fitness (MMF) is a novel method to assess global alignment. However, VOF relies on summing and binarizing 2D projections which may have limitations. It is not clear how sensitive the VOF score is to the binarization process or how it handles complex, non-globular shapes. The paper would be strengthened if the authors could provide more justification for this specific metric over other global 3D correlation scores. An analysis of specific examples of map-model pairs that were discarded by the VOF score but not by the Q-score would be informative.
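      For concreteness, a toy version of a projection-based overlap score (not necessarily the paper's exact VOF definition; all names here are hypothetical) shows where binarization sensitivity enters:

```python
import numpy as np

def vof(map_grid, model_grid, threshold=0.0):
    """Toy projection-based overlap: binarize 2D projections of the map and
    a model-derived grid along each axis, then average the fraction of the
    model footprint covered by the map footprint."""
    fracs = []
    for axis in range(3):
        map_fp = map_grid.sum(axis=axis) > threshold    # binarized projection
        mod_fp = model_grid.sum(axis=axis) > threshold
        fracs.append((map_fp & mod_fp).sum() / max(mod_fp.sum(), 1))
    return float(np.mean(fracs))
```

      Sweeping `threshold` in a sketch like this would directly expose how sensitive the score is to the binarization step, and elongated or non-globular shapes can look very different along the three projection axes.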

      • The authors acknowledge the trade-off between higher precision and lower recall that results from overly stringent filtering. While increased precision clearly benefits tasks like model refinement, the reduced recall could significantly hinder de novo modeling, which depends on capturing the entirety of a structure, even with lower confidence. This point could be elaborated on. Is this an area for future work, e.g. developing pre-configured filtering settings for various downstream tasks, like a precision-vs-recall bias setting? This might increase utility based on application.

      • The retraining of CryoREAD is a practical validation of the pipeline's utility for RNA modeling; however, the experimental dataset used is exclusively from ribosomes. Ribosomes were selected because they contain both protein and RNA and are abundant in the EMDB, but they may not represent the full diversity of RNA structures. The authors rightly note that training set composition affects performance. It would be helpful to further discuss the potential shortcomings of an exclusively ribosome-based training set and the possible impact on the retrained CryoREAD model's use with other classes of RNA.

      • The authors should consider benchmarking against other SOTA protein/RNA/DNA modeling tools. Right now it is only benchmarked against their own CryoREAD, which is just an RNA/DNA modeling tool.

      • I tried installing CryoDataBot, and it looks like it requires Python 3.8 or higher, but this isn't specified anywhere in the paper or on the site.

      • Many references and citations are off and wrong.

    3. This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf127), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Ashwin Dhakal

      The authors introduce CryoDataBot, a GUI-driven pipeline for automatically curating cryo-EM map/model pairs into machine-learning-ready datasets. The study is timely and addresses a real bottleneck in AI-driven atomic model building. The manuscript is generally well written and includes benchmarking experiments (U-Net and CryoREAD retraining). Nevertheless, several conceptual and presentation issues should be resolved before the work is suitable for publication:

      1 All quantitative tests focus on ribosome maps in the 3-4 Å range. Because ribosomes are unusually large and RNA rich, it is unclear whether the curation criteria (especially Q score ≥ 0.4 and VOF ≥ 0.82) generalise to smaller or lower resolution particles. Please include at least one additional macromolecule class (e.g. membrane proteins or spliceosomes) or justify why the current benchmark is sufficient.

      2 The manuscript adopts fixed thresholds (Q score 0.4; 70 % similarity; VOF 0.82) yet does not show how sensitive downstream model performance is to these values. A short ablation (e.g. sweep the Q score from 0.3-0.6) would help readers reuse the tool sensibly.

      3 Table 1 claims CryoDataBot "addresses omissions" of Cryo2StructData, but no quantitative head to head benchmarking is provided (e.g. train the same U Net on Cryo2StructData). Please add such a comparison or temper the claim.

      4 For voxel wise classification, F1 scores are affected by severe class imbalance (Nothing ≫ Helix/Sheet/Coil/RNA). Report per class support (number of positive voxels) and consider complementary instance level or backbone trace metrics.
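      The requested per-class report is straightforward to compute alongside F1; a minimal sketch (function and field names are hypothetical):

```python
import numpy as np

def per_class_report(y_true, y_pred, classes):
    """Per-class precision/recall/F1 plus support (positive-voxel count),
    so the imbalance between 'Nothing' and the structural classes is
    visible next to each score."""
    rows = {}
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        rows[c] = {"precision": float(prec), "recall": float(rec),
                   "f1": float(f1), "support": int(np.sum(y_true == c))}
    return rows
```

      Reporting support alongside each F1 makes it clear when a high macro score is carried by the dominant background class.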

      5 In Fig. 4 the authors show that poor recall/precision partly stems from erroneous deposited models. Quantify how often this occurs across the 18 map test set and discuss implications for automated QC inside CryoDataBot.

      6 The authors note improved precision but slightly reduced recall in CryoDataBot-trained models. This is explained, but strategies to mitigate this tradeoff are not discussed. Could ensemble learning, soft labeling, or multi-resolution data alleviate the recall drop?

    1. AbstractBackground Technological advances in sequencing and computation have allowed deep exploration of the molecular basis of diseases. Biological networks have proven to be a useful framework for interrogating omics data and modeling regulatory gene and protein interactions. Large collaborative projects, such as The Cancer Genome Atlas (TCGA), have provided a rich resource for building and validating new computational methods resulting in a plethora of open-source software for downloading, pre-processing, and analyzing those data. However, for an end-to-end analysis of regulatory networks a coherent and reusable workflow is essential to integrate all relevant packages into a robust pipeline.Findings We developed tcga-data-nf, a Nextflow workflow that allows users to reproducibly infer regulatory networks from the thousands of samples in TCGA using a single command. The workflow can be divided into three main steps: multi-omics data, such as RNA-seq and methylation, are downloaded, preprocessed, and lastly used to infer regulatory network models with the netZoo software tools. The workflow is powered by the NetworkDataCompanion R package, a standalone collection of functions for managing, mapping, and filtering TCGA data. Here we show how the pipeline can be used to study the differences between colon cancer subtypes that could be explained by epigenetic mechanisms. Lastly, we provide pre-generated networks for the 10 most common cancer types that can be readily accessed.Conclusions tcga-data-nf is a complete yet flexible and extensible framework that enables the reproducible inference and analysis of cancer regulatory networks, bridging a gap in the current universe of software tools.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf126), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Jérôme Salignon

      This manuscript presents tcga-data-nf, a Nextflow-based pipeline for downloading, preprocessing, and analyzing TCGA multi-omic data, with a focus on gene regulatory network (GRN) inference. The workflow integrates established bioinformatics tools (PANDA, DRAGON, and LIONESS) and adheres to best practices for reproducibility through containerization (Docker, Conda, and Nextflow profiles). The authors demonstrate the utility of their pipeline by applying it to colorectal cancer subtypes, identifying potential regulatory interactions in TGF-β signaling. The manuscript is well-written and well-structured and provides sufficient methodological details, as well as Jupyter notebooks, for reproducibility. However, there are some areas that require clarification and improvement for acceptance in GigaScience, particularly regarding the scope of the tool, the quality of the inferred regulatory networks, the case study figure, benchmarking, statistical validation, and parameters.

      Major comments:

      • While the pipeline is well designed and executed, the overall impact of the tool feels somewhat limited, especially for a journal like GigaScience, due to its rather specific application of building GRNs from TCGA, the relatively small number of parameters, the support of only two omics types, and the lack of novel algorithms. To increase the impact of this tool I would recommend adding functionalities, such as:

      o Supporting additional tools. A great strength of the pipeline is the integration with the Network Zoo (NetZoo) ecosystem. However, only three tools are included from NetZoo. Including additional tools would likely increase the scope of users interested in using the pipeline. In particular, an important weakness of the current pipeline is that it is not possible to conduct differential analysis between different networks, which prevents users from identifying the most significant differences between two networks of interest (e.g., CMS2 vs CMS4). The NetZoo contains different tools to conduct such analyses, such as Alpaca [1] or Crane [2], thus this may be implemented to make the pipeline more useful to a broader user base.

      o Adding parameters. A strength of the pipeline is the ability to customize it using various parameters. However, as such the pipeline does not offer many parameters. It would be beneficial to make the pipeline a bit more customizable. For example, novel parameters could be: adding options for excluding selected samples, using different batch correction methods, different methods to map CpGs to genes, additional normalization methods, and additional quality controls (e.g., PCA for methylation samples, md5sum checks). These are just examples and do not need to be all implemented but adding some extra parameters would help make the pipeline more appealing and customizable to various users.

      • The quality of the inferred regulatory networks is hard to judge. There are no direct comparisons with any other tools.

      o For instance, it is mentioned in the text that GRAND networks were derived using a fixed set of parameters, but it could be helpful to show a direct comparison between GRNs built from your tools with those from GRAND. This could reveal how the ability to customize GRNs using the pipeline's parameters helps in getting better biological insights.

      o Alternatively, or in addition, one could compare how networks built by your method fare in comparison to networks built from other methods, like RegEnrich [3] or NetSeekR [4], in terms of biological insights, accuracy, scalability, speed, functionalities and/or memory usage.

      o Another angle for judging the regulatory networks would be to check, in a case study, whether the predicted gene interactions between disease and control networks are enriched in disease and gene-gene interaction databases, such as DisGeNet [5].
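      The suggested enrichment check reduces to a one-sided hypergeometric test; a stdlib-only sketch (function and argument names are hypothetical):

```python
from math import comb

def hypergeom_enrichment_p(overlap, set_a, set_b, universe):
    """P(X >= overlap) when |set_b| genes are drawn without replacement
    from a universe containing |set_a| disease-annotated genes: a
    one-sided hypergeometric enrichment p-value."""
    p = 0.0
    for k in range(overlap, min(set_a, set_b) + 1):
        p += comb(set_a, k) * comb(universe - set_a, set_b - k) \
             / comb(universe, set_b)
    return p
```

      Here `overlap` would be the number of genes in the predicted disease-specific edges that also appear in, e.g., a DisGeNet gene set, out of `universe` annotated genes.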

      • Figure 2 needs re-work:

      o Panels A and C: the text is too small. "tf" should be written "TF", and "oi" should be given a more descriptive name. These panels might be moved to the supplements.

      o Panel D is confusing. Without significance it is hard to understand what the point of this panel is. I can see that certain TFs are cited in the main text, but without information about significance these may seem like cherry-picking. The legend states: "Annotation of all TFs in cluster D (columns) to the Reactome parent term. 'Immune system' and 'Cellular responses to stimuli' are more consistently involved in cluster D, in comparison to cluster A." However, this is a key result which should be shown in a main figure, not in Figure S6. I would also recommend using a -log scale when displaying the p-values to highlight the most significant entries.

      o Panel E is quite confusing. First, the color coding is unclear: what do the blue, purple, and red colors represent? Second, what do the edges' widths represent? I would recommend using different shapes for the methylation and expression nodes to reduce the number of colors, and adding a color legend. I would also consider merging the two graphs and representing the difference in edge values in color, so the reader can directly see the key differences.

      • Benchmarking analysis could be included to show the runtime and memory requirements for each pipeline step. It would also be beneficial to analyze a larger dataset than colon cancer to assess scalability.

      • Statistical analysis: If computationally feasible, permutation testing could be implemented to quantify the robustness of inferred regulatory interactions. Also, in the method section, it should be clarified that FDR correction was applied for pathway enrichment analysis.
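      A minimal form of the suggested permutation test, for a single co-expression edge (names hypothetical; a real implementation would permute at the network level and correct for multiple testing):

```python
import numpy as np

def edge_permutation_p(x, y, n_perm=1000, seed=0):
    """Empirical p-value for one co-expression edge: compare |corr(x, y)|
    against a null distribution built by permuting one expression vector."""
    rng = np.random.default_rng(seed)
    obs = abs(np.corrcoef(x, y)[0, 1])
    null = np.array([abs(np.corrcoef(rng.permutation(x), y)[0, 1])
                     for _ in range(n_perm)])
    # add-one correction keeps the p-value strictly positive
    return (1 + np.sum(null >= obs)) / (n_perm + 1)
```

      Edges whose observed weight rarely arises under permutation can then be flagged as robust.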

      Minor comments:

      • I am not sure why duplicate samples are discarded in the pipeline. Why not sum counts for RNA-seq and average beta values? I would expect that to yield more robust results.
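      The suggested aggregation is essentially a one-liner per omics layer; a pandas sketch (the `case_id` column and function name are hypothetical):

```python
import pandas as pd

def collapse_duplicates(rna_counts, beta_values):
    """Collapse duplicate aliquots of the same case instead of discarding
    them: sum raw RNA-seq counts, average methylation beta values.
    Both inputs are DataFrames with a 'case_id' column marking aliquots."""
    rna = rna_counts.groupby("case_id").sum()
    beta = beta_values.groupby("case_id").mean()
    return rna, beta
```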

      • It is a bit unclear in what context the NetworkDataCompanion tool could be used outside the workflow. It is also unclear how it helps with quality controls. Please clarify these aspects.

      • The manuscript is well-written, but words are sometimes missing or misspelled; it needs a careful re-read.

      • The expression '"same-same"' is unclear to me.

      • Regarding the sentence "Some of "same-same" genes (STAT5A, CREB3L1)…": I am not sure in which table or figure I can find this result.

      • Text is too small in the Directed Acyclic Graph, especially in Figure S4. Also, I would recommend adding the Directed Acyclic Graphs from Figure S1-S4 to the online documentation.

      • Regarding the code, I was puzzled to see a copyConfigFiles process. Also, there are files in bin/r/local_assets; these should be located in assets. And the container for the singularity and docker profiles is likely the same; this should be clarified in the code.

      • It is recommended to remove the "defaults" channel from the list of channels declared in the containers/conda_envs/analysis.yml file. Please see information about that here https://www.anaconda.com/blog/is-conda-free and here https://www.theregister.com/2024/08/08/anaconda_puts_the_squeeze_on/.

      Additional comments (which do not need to be addressed):

      • Future work may consider enabling the use of the pipeline to build GRNs from other data sources than TCGA (i.e., nf-netzoo). Recount3 data is already being parsed for GTEx and TCGA samples, so it might be relatively easy to adapt the pipeline so that it can be used on any arbitrary recount3 dataset. Similarly, it could be useful if one could specify a dataset on the recountmethylation database [6] to build GRNs. While these unimodal datasets could not be used with the DRAGON method they would still benefit from all other features of the pipeline.

      • Using an nf-core template would enable better structuring of the code and increase the visibility of the tool. Also, using multiple containers is usually easier to maintain and update than a single large container, especially when a single tool needs to be updated or when modifying part of the pipeline. Another comment is that the code contains many comments which do not explain the code but read more like quick drafts, which makes the code harder for others to read.

      References

      1. Padi, M., and Quackenbush, J. (2018). Detecting phenotype-driven transitions in regulatory network structure. npj Syst Biol Appl 4, 1-12. https://doi.org/10.1038/s41540-018-0052-5.

      2. Lim, J.T., Chen, C., Grant, A.D., and Padi, M. (2021). Generating Ensembles of Gene Regulatory Networks to Assess Robustness of Disease Modules. Front. Genet. 11. https://doi.org/10.3389/fgene.2020.603264.

      3. Tao, W., Radstake, T.R.D.J., and Pandit, A. (2022). RegEnrich gene regulator enrichment analysis reveals a key role of the ETS transcription factor family in interferon signaling. Commun Biol 5, 1-12. https://doi.org/10.1038/s42003-021-02991-5.

      4. Srivastava, H., Ferrell, D., and Popescu, G.V. (2022). NetSeekR: a network analysis pipeline for RNA-Seq time series data. BMC Bioinformatics 23, 54. https://doi.org/10.1186/s12859-021-04554-1.

      5. Hu, Y., Guo, X., Yun, Y., Lu, L., Huang, X., and Jia, S. (2025). DisGeNet: a disease-centric interaction database among diseases and various associated genes. Database 2025, baae122. https://doi.org/10.1093/database/baae122.

      6. Maden, S.K., Walsh, B., Ellrott, K., Hansen, K.D., Thompson, R.F., and Nellore, A. (2023). recountmethylation enables flexible analysis of public blood DNA methylation array data. Bioinformatics Advances 3, vbad020. https://doi.org/10.1093/bioadv/vbad020.

    2. This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf126), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Xi Chen

      Fanfani et al. present tcga-data-nf, a Nextflow pipeline that streamlines the download, preprocessing, and network inference of TCGA bulk data (gene expression and DNA methylation). Alongside this pipeline, they introduce NetworkDataCompanion (NDC), an R package designed to unify tasks such as sample filtering, identifier mapping, and normalization. By leveraging modern workflow tools—Nextflow, Docker, and conda—they aim to provide a platform that is both reproducible and transparent. The authors illustrate the pipeline's utility with a colon cancer subtype example, showing how multi-omics networks (inferred via PANDA, DRAGON, and LIONESS) may help pinpoint epigenetic factors underlying more aggressive tumor phenotypes. Overall, this work addresses a clear need for standardized approaches in large-scale cancer bioinformatics. While tcga-data-nf promises a valuable resource, the following issues should be addressed more thoroughly before publication:

      1. While PANDA, DRAGON, and LIONESS form a cohesive system, they were all developed by the same research group. To strengthen confidence, please include head-to-head comparisons with other GRN inference methods (e.g., ARACNe, GENIE3, Inferelator). A small benchmark dataset with known ground-truth (or partial experimental validation) would be especially valuable.

      2. Although the manuscript identifies intriguing TFs and pathways, it lacks confirmation through orthogonal data or experiments. If available, consider including ChIP-seq or CRISPR-based evidence to reinforce at least a subset of inferred regulatory interactions. Even an in silico overlap with known TF-binding sites or curated gene sets would help validate the predictions.

      3. PANDA and DRAGON emphasize correlation/partial correlation, so they may overlook nonlinear or combinatorial regulation. If feasible, please provide any preliminary steps taken to capture nonlinearities or discuss approaches that could be integrated into the pipeline.

      4. LIONESS reconstructs a network for each sample in a leave-one-out manner, which can be demanding for large cohorts. The paper does not mention runtime or memory requirements. Adding a Methods subsection with approximate CPU/memory benchmarks (e.g., "On an HPC cluster with X cores, building LIONESS networks for 500 samples took Y hours") is recommended to guide prospective users.

      5. Currently, the pipeline only covers promoter methylation and standard gene expression, yet TCGA and related projects include other data types (e.g., miRNA, proteomics, histone modifications). If possible, offer a brief example or instructions on adding new omics layers, even conceptually.

      6. Recent methods often target single-cell RNA-seq, but tcga-data-nf is geared toward bulk datasets. Please clarify limitations and potential extensions for single-cell or multi-region tumor data. This would help readers understand whether (and how) the pipeline could be adapted to newer high-resolution profiles.

      Minor points:

      1. Provide clear guidance on cutoffs for low-expressed genes, outlier samples, and methylation missing-value imputation.

      2. Consider expanding the supplement with a "quick-start" guide, offering step-by-step usage examples.

      3. Ensure stable version tagging in your GitHub repository so that readers can reproduce the exact pipeline described in the manuscript.
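      For reference, the LIONESS leave-one-out construction can be sketched with plain Pearson co-expression standing in for PANDA (the function name is hypothetical); runtime grows linearly with cohort size, which is why the requested benchmarks matter:

```python
import numpy as np

def lioness_networks(expr):
    """Sample-specific networks via the LIONESS construction:
    net_s = N * (net_all - net_loo_s) + net_loo_s,
    where net_all uses all N samples and net_loo_s leaves sample s out.
    expr: genes x samples matrix; Pearson correlation stands in for
    any aggregate network-inference method."""
    n = expr.shape[1]
    net_all = np.corrcoef(expr)
    nets = []
    for s in range(n):
        loo = np.delete(expr, s, axis=1)      # drop sample s
        net_loo = np.corrcoef(loo)
        nets.append(n * (net_all - net_loo) + net_loo)
    return nets
```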

    1. AbstractBackground Single-cell RNA-seq suffers from unwanted technical variation between cells, caused by its complex experiments and shallow sequencing depths. Many conventional normalization methods try to remove this variation by calculating the relative gene expression per cell. However, their choice of the Maximum Likelihood estimator is not ideal for this application.Results We present GTestimate, a new normalization method based on the Good-Turing estimator, which improves upon conventional normalization methods by accounting for unobserved genes. To validate GTestimate we developed a novel cell targeted PCR-amplification approach (cta-seq), which enables ultra-deep sequencing of single cells. Based on this data we show that the Good-Turing estimator improves relative gene expression estimation and cell-cell distance estimation. Finally, we use GTestimate’s compatibility with Seurat workflows to explore three common example data-sets and show how it can improve downstream results.Conclusion By choosing a more suitable estimator for the relative gene expression per cell, we were able to improve scRNA-seq normalization, with potentially large implications for downstream results. GTestimate is available as an easy-to-use R-package and compatible with a variety of workflows, which should enable widespread adoption.

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf084), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Amichai Painsky

      This paper introduces a Good-Turing (GT) estimation scheme for relative gene expression estimation and cell-cell distance estimation. The proposed method, namely GTestimate, claims to improve upon conventional normalization methods by accounting for unobserved genes. The idea behind this contribution is fairly straightforward: since the relative gene expression is of large alphabet, a GT estimator is expected to perform better than a naive ML approach. However, I am not convinced that the authors applied it correctly. First, the proposed GT estimator (as it appears in (GT) in the text) assigns a zero estimate to unobserved genes (Cg = 0). This contradicts the entire essence of using a GT estimator. Second, it makes no sense to use this expression for every Cg > 0. In fact, any reasonable GT-based estimator applies GT for relatively small Cg, and an ML estimator for large Cg. See [1] for a thorough discussion. The choice of a threshold between "small" and "large" Cg's is the subject of many studies (for example [2], [1]), but it makes no sense to use the above expression for any Cg. Finally, notice that if N_{Cg} > 0 for some g but N_{Cg+1} = 0, the proposed estimator is not defined. There exist several smoothing solutions for such cases (for example [3]), but they need to be properly discussed. To conclude, I am not sure what the effect of these issues is on the experiments in the paper, which makes it difficult to assess the results.
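      A sketch of the estimator this review prescribes (GT-adjusted counts for small Cg with smoothed frequencies of frequencies, ML for large Cg, and the unseen mass set aside rather than zeroed; all function names, the default threshold, and the log-log smoother are illustrative choices, not the paper's):

```python
import numpy as np
from collections import Counter

def gt_relative_expression(counts, k_threshold=5):
    """Good-Turing relative frequencies: adjusted counts
    r* = (r + 1) * N_{r+1} / N_r for small r (r <= k_threshold),
    raw ML counts for larger r. The frequencies of frequencies N_r
    are smoothed by a log-log linear fit (Gale & Sampson style), so
    N_{r+1} = 0 never leaves r* undefined. The unseen mass
    P0 = N_1 / N is set aside for unobserved genes instead of
    assigning them probability zero."""
    counts = np.asarray(counts)
    N = counts.sum()
    fof = Counter(int(c) for c in counts if c > 0)          # N_r
    rs = np.array(sorted(fof), dtype=float)
    ns = np.array([fof[int(r)] for r in rs], dtype=float)
    slope, intercept = np.polyfit(np.log(rs), np.log(ns), 1)
    smooth = lambda r: np.exp(intercept + slope * np.log(r))
    def adjusted(r):
        if r == 0 or r > k_threshold:
            return float(r)                                 # ML regime
        return (r + 1) * smooth(r + 1) / smooth(r)          # GT regime
    adj = np.array([adjusted(int(c)) for c in counts])
    seen_mass = 1.0 - fof.get(1, 0) / N                     # 1 - P0
    return adj / adj.sum() * seen_mass
```

      Whether mass for unobserved genes is spread uniformly or handled otherwise is a separate design choice; the point is that the seen probabilities should sum to 1 - N_1/N, not 1.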

      REFERENCES

      [1] A. Painsky, "Convergence guarantees for the Good-Turing estimator," Journal of Machine Learning Research, vol. 23, no. 279, pp. 1-37, 2022.

      [2] E. Drukh and Y. Mansour, "Concentration bounds for unigram language models," Journal of Machine Learning Research, vol. 6, no. 8, 2005.

      [3] W. A. Gale and G. Sampson, "Good-Turing frequency estimation without tears," Journal of Quantitative Linguistics, vol. 2, no. 3, pp. 217-237, 1995.

    2. This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giaf084), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Gregory Schwartz

      In this manuscript, Fahrenberger et al. propose a new scRNA-seq normalization method to more accurately report UMI counts of individual cells. They specifically use a Good-Turing estimator, compared with a more commonly used Maximum Likelihood estimator, to adjust raw UMI counts. Using their own cta-seq, a cell targeted PCR-amplification strategy, as ground truth, they compare their estimator with a traditional size-corrected estimator. Furthermore, they illustrate downstream changes using their method, including changes to clustering results and spatial transcriptomic readouts. The manuscript was a clear read and presents an interesting alternative solution to an often overlooked, but important, problem. However, there are some aspects of the manuscript that need to be addressed. Major missing content includes comparisons with more widely used normalization methods throughout the manuscript, and better ground-truth data sets in the downstream analysis. Specific comments are as follows:

      l. 34: To my knowledge, most groups do not use a single division by total UMI count as the only normalization. Seurat has NormalizeData, but also heavily promotes scTransform, a completely different method. Many use log transform (as I believe was done here), some use quantile transform, others use regression techniques etc. It was odd to see these standard normalizations missing in comparisons. The authors should use such standard procedures to demonstrate the superiority of GT.

      l. 42: Is there a justification for the successor function being applied within the frequency ((cg + 1) / total) instead of outside ((cg / total) + 1) as is expected with the Good-Turing estimation?

      Furthermore, there is typically a smoothing function for erratic N_cg values, which I would expect with single-cell data. In the methods there is a brief mention of linear smoothing, but that would imply that the GT equation is misleading and oversimplified. The actual equation should be included in the main text to avoid confusion.

      l. 58: Compared to an average of 16,965 reads per cell, what is the equivalent for the ultra-deep sequencing (not 23 million reads, as that is not a 7.4-fold increase)?

      I am not entirely convinced on the use of cta-seq as a ground-truth for the cells, especially in comparison with ML. The authors should show that cta-seq has similar UMI and gene count distributions to more popular scRNA-seq technologies (e.g. 10x Chromium) or the application may be specific to cta-seq only.

      l. 110: Instead of using unknown classification data sets, there are existing cell-sorted data sets with ground truths (many even on the 10x website). The authors should use these data sets to compare downstream analysis.

      l. 125: The spatial transcriptomic results were very subjective, with no statistical hypothesis tests. The entire manuscript is missing any sort of statistics when comparing methods, which is a major flaw and should be rectified. Here specifically, the color scale stops at 3, but does this carry over to the relative differential expression? The claim is that it is constant, but if the values are all greater than 3 then they must be quite variable, so it is surprising to see such a constant value of 0. Maybe the complete color scale should be shown on all figures to clarify this.

      From my understanding of the manuscript, the 18 cells for analysis and comparison were chosen based on a typical Seurat analysis. This technique introduces a range of biases into the comparison and makes the argument a bit circular.

      For a bias example, the top 2000 most variable genes were used, suggesting that entire classes of genes may be ignored even when highly or lowly expressed, such as housekeeping genes.

      There also appear to be many steps that were not entirely justified outside of a "typical analysis", for example excluding a cluster from the analysis (just because it was not that large?), selecting only 18 cells (why 6 from each cluster?), and removing cells with fewer than 1000 expressed genes or over 8% mitochondrial reads (this may be an issue, as it can remove specific cell types or proliferating cells; this should be a bivariate removal with justification). All of these filterings reduce the generalizability of GT.

      Supplementary Figures in the text hyperlink to the main figures, which is confusing. More importantly, the captions of the Supplementary Figures read "Figure" rather than "Supplementary Figure".

    1. What if your biggest competitive asset is not how fast AI helps you work, but how well you question what it produces?

      The idea is that AI isn't all-knowing; rather, we should doubt it and apply our own judgment, since it was made by humans after all.

    2. Resilience has become something of a corporate buzzword, but genuine resilience can help organizations adapt to AI.

      We need to resist AI in a certain way, because if we do not, it will eventually be our downfall.

    3. But in the rush to adopt AI, some organizations are overlooking the real impact it can have on workers and company culture.

      AI is impacting all of us immensely, both visibly and invisibly, from taking jobs from citizens to creating new jobs for others.

  2. inst-fs-iad-prod.inscloudgate.net
    1. The maxim "less contact, less learning" succinctly summarizes the arguments supporting students' exposure to quality language models and instruction.42 To learn a language well, one must have sustained interactions with educated native speakers of English, as well as good language instruction. Students can only learn the new language in the style to which they are exposed. If an English-language learner lives and talks daily with English speakers in a boarding school in London, she will learn a very different kind of English and sound very different than if she had been immersed in a public school in Atlanta, Sydney, or Toronto. Likewise, someone hoping to improve their Spanish-speaking skills will sound very different after an extended study-abroad stay in Madrid, Mexico City, Santo Domingo, or Buenos Aires.

      “Less contact, less learning.” The key to learning a language lies not in mere classroom hours or memorized vocabulary, but in sustained interaction with high-quality language input. In other words, language proficiency is shaped within authentic contexts, not through isolated grammar drills. The example illustrates how different English or Spanish learning environments cultivate entirely distinct linguistic styles and pronunciation traits, revealing the social and contextual nature of language acquisition. From an educational perspective, this passage reminds teachers that language instruction cannot rely solely on textbooks or exams. Instead, educators should create rich communicative situations that allow students to truly “immerse” themselves in the language and culture. Simultaneously, it reflects the structural inequality faced by immigrant students in language learning—if they lack sustained interaction with native speakers, they are effectively deprived of the conditions necessary for language development.

    2. Clearly, if we are to expect newcomer students to learn English, as they and we would like them to, our schools need to do a better job of developing educational contexts that will make it happen. Our focus at the beginning of the study was very student-centered; we considered the resources the students brought with them, the engagement they brought to the task, as well as the educational contexts they encountered. But while these factors certainly contribute to language acquisition, the schools also play a fundamental role in whether students learn English. Our findings parallel those of Gary Orfield, Guadalupe Valdes, Laurie Olsen, and others who have insightfully described the intense physical and linguistic segregation that many newcomer immigrant students encounter.54 While there have been some attempts to address the needs of students coming in at the elementary level, there has been a lamentable and disconcerting absence of efforts to meet the needs of English-language learners arriving at the secondary school level.55 This gap absolutely needs to be addressed if we wish to harness the energies of all of our newcomer students.

      Immigrant students require at least seven to ten years of high-quality learning environments to truly master “academic English,” yet current education policies demand they pass standardized tests within three years. This unrealistic expectation not only creates psychological pressure but also systematically produces “losers.” It reveals how U.S. education policies prioritize “measurable outcomes” over fairness and growth within the long-term learning process. This “time violence” exemplifies how the education system sacrifices marginalized groups under the logic of efficiency. When annotating this passage, one might reflect on whether educational assessment should shift toward “developmental support” rather than “elimination-based screening.”

    3. Today, immigration is once again a momentous social force, compelling Americans to face the challenge and opportunity of integrating and harnessing the energy of the greatest number of immigrants in the nation's history. By 2005 there were well over 35 million immigrants in the United States, some 12.4 percent of the U.S. population.

      American society has long harbored cultural anxieties and identity insecurities regarding immigration. The author notes that Americans' concerns over whether immigrants “are willing to learn English” are not a new phenomenon, but rather a recurring “political discourse” that resurfaces during periods of economic and social upheaval. At its core, this anxiety stems from fears about national identity and cultural purity. Learning English here is treated as a symbol measuring “loyalty” and “degree of Americanization,” rather than a matter of linguistic ability. This reflects how language is politically employed as an “assimilation tool,” maintaining the stability of social power structures by creating distinctions between “good immigrants” and “bad immigrants.” When annotating this passage, consider: Is learning English truly an educational goal, or an institutionalized social expectation?

  3. inst-fs-iad-prod.inscloudgate.net
    1. My parents tried to talk to my teacher about it, but it was kind of hard. They don’t really speak much English and my teacher wasn’t much of a help either. She cancelled a couple meetings with them and, you know, they were taking time off work to go, so they felt bad, like she wasn’t respecting their time. When they finally met she really scared them with stories about teachers being attacked by students and that she didn’t feel safe there. They ended up taking me out of school a couple weeks later.

      Parents struggled to communicate with teachers due to limited English proficiency. Hoping to understand the situation through face-to-face interaction, they were further marginalized by the teacher's negligence and fear-mongering narrative. The teacher's repeated cancellations not only reflect a disregard for immigrant families' time and effort but also reveal the system's implicit exclusion of non-native English-speaking parents. More alarmingly, when this teacher used the story of “students attacking teachers” to intimidate parents, she effectively transformed the educational space into a realm of distrust and fear, misleading parents into believing their children were unsafe at school. Ultimately, the student's forced withdrawal from school reveals how structural discrimination, through the accumulation of everyday interactions, quietly deprives immigrant families of educational opportunities. This narrative prompts reflection: true inclusive education occurs not only within the classroom but hinges on whether teachers are willing to listen to every family with respect and equality.

    2. What would be most beneficial for the successful transitions of undocumented immigrant students are school structures and cultures that facilitate positive interactions between students, teachers, and staff, allowing those at all levels to develop school-based social capital and build relationships of trust so critical to their success. By investing in a baseline of support for all students, schools could develop support structures necessary to facilitate more targeted outreach to undocumented students. This is not only a social justice issue, but an economic imperative for the nation.

      Institutional support and social capital play a pivotal role in the educational transition of undocumented immigrant students. The author argues that relying solely on individual teachers' compassion or students' personal efforts is insufficient; true change stems from systemic adjustments to school structures and cultures. When schools foster an atmosphere that encourages interaction, trust, and inclusion, the connections formed among students, teachers, and administrators create a “school-based social capital” that prevents undocumented students from remaining isolated. Notably, the author elevates this issue to the levels of social justice and economics, arguing that supporting undocumented students is not only a moral obligation but also vital to the nation's future development. This framing transcends narrow humanitarian perspectives on immigrant education, instead proposing a broader vision for structural reform. It reminds us that educational equity and societal prosperity are interdependent.

    3. Together with six siblings and her two parents, she came to the U.S. when she was just nine years old. Flor’s formative years were difficult and shaped in her a sense of ambivalence about the future. She realized from an early age that her lack of papers—papeles—would keep her from the good jobs she dreamed of as a child. She also felt like an outsider at school, internalizing a belief that no one was looking out for her—that she was on her own.

      Flor realized at a young age that “lack of papers” was not merely a legal issue but a form of enduring social exclusion, fostering a sense of “ambivalence” about her future. This internalized feeling of ‘invisibility’ led her to develop a survival strategy of “isolating herself” in school—believing she must face everything alone. This narrative reveals how immigrant status shapes one's self-perception and social positioning at a psychological level, while also exposing the profound impact of institutional exclusion (such as immigration restrictions) on educational opportunities and life aspirations. Flor's story is not an isolated case, but rather a microcosm of the struggles faced by countless undocumented students navigating the American education system.

    1. We see this counter-narrative as a crucial element in the development of a systematic analysis of the racism, classism, and linguicism that permeate much of urban education as well as in the development of culturally relevant curricula

      Racism, classism, and linguicism are pervasive in urban education, and schools' overemphasis on “monolingual literacy standards” perpetuates these inequalities. By demonstrating how families and communities serve as children's “invisible classrooms,” the author calls on teachers to redefine their roles—not merely as knowledge transmitters, but as cultural bridge-builders. By acknowledging and leveraging students' home literacy experiences—such as religious practices, games, and bilingual storytelling—teachers can make education truly inclusive and socially just.

    2. We came to understand that there is a distinction between places as the actual locations while spaces are constructed by human actors who are, in turn, shaped by those spaces in fluid and reciprocal processes.

      This passage reveals the theoretical significance of the author's adoption of the “spatial turn”—she distinguishes between ‘place’ and “space.” Place refers to physical existence, while space is a product of social and cultural actions. In other words, literacy spaces are not naturally occurring; they are co-created by family members through daily interactions, language, objects, and emotions. For instance, Benny's bedroom or Miguel's library experience are not merely “places,” but learning “spaces” imbued with meaning through their engagement. This reminds educators that literacy development occurs not only in classrooms but also within children's daily lives. Those seemingly ordinary corners—the dining table, the church pew, the computer desk—are all vital educational settings.

    3. We planned to investigate both the places outside of school, in their homes and communities, where the two children and their families accessed literacy resources and the formal and informal literacy interactions that they constructed there. In this way, we hoped to problematize the common privileging of school-centered literacy and education, challenge the discriminatory

      The author explicitly states that her research does not aim to replicate the conventional narrative of “resource scarcity in impoverished families,” but rather to construct a counter-narrative revealing how low-income Latino families proactively create literacy opportunities. The key term here is “agency”—meaning families and children are not passive recipients but active knowledge constructors. This perspective overturns the previous school-centered, standardized literacy view rooted in white middle-class norms. It also prompts us to rethink the true meaning of “educational equity”: equity does not mean having every child learn in the same way, but ensuring that every culture's learning methods are seen and respected.

    1. However, Arturo is failing as a reader in both English and Spanish. Ms. Stewart, Arturo’s English teacher, views him as a disengaged reader, not making progress, and not having the English vocabulary to engage with the chapter books that they are reading. Arturo is placed in the group with the lowest reading level. The stories they read are not complex, and the work in the group is mostly about vocabulary buildup. Ms. Stewart blames Arturo’s slow progress on his Spanish. Similarly, Ms. Medina, Arturo’s Spanish teacher, believes that he does not have sufficient Spanish-language vocabulary to make sense of the Spanish-language chapter books. For Ms. Medina, raised and educated in Colombia through university, Arturo’s Spanish is simply deficient.

      A shift in educators' perspectives can profoundly impact students' reading abilities. Initially, teachers evaluated Arturo's English and Spanish skills separately, concluding he “failed in both languages.” However, when educators began creating “cross-language spaces” in the classroom—allowing students to freely switch between English and Spanish for performances and discussions—Arturo demonstrated rich critical thinking and cultural insight. This transformation underscores the pivotal role of teacher attitudes in language education—students' “proficiency” is often not lacking, but obscured by narrow assessment methods. The author uses this case to remind us: educational equity lies not merely in offering bilingual programs, but in whether teachers can genuinely understand, respect, and enter students' linguistic worlds.

    2. I start with Paco, the 3-year-old bilingual child whose mother is a U.S.-born Latina woman and whose father is a U.S.-born white man. The mother grew up in a bilingual home, the father in a monolingual one, but he studied Spanish in high school. The family is comfortable in a translanguaging space, where their use of English and Spanish is unbounded, dynamic, and fluid and adapts to meet the communicative expectations of the many different people who enter the home.

      Paco's example vividly demonstrates the naturalness of multilingual practices in early childhood language development. While reading Jorge el Curioso, he freely mixed English and Spanish, using gestures and sounds to express the story—a behavior encouraged and praised in the home environment rather than corrected. This illustrates that language learning itself is multimodal, emotionally charged, and physically engaged, rather than a rigid accumulation of grammar rules. When annotating this passage, note the author's implicit critique: formal schooling often stifles such free expression, transforming children from “language creators” into “language conformists.” Paco's multilingual reading practice at home reminds us that authentic language education should center on comprehension and expression, not solely on linguistic correctness.

    3. In this article, I argue that the act of reading does not depend on the language of the written text or even on the concept of a named language such as English or Spanish. Rather, the act of reading is about readers assembling all their meaning-making resources and acting on them to read themselves.

      The process of reading does not depend on the “named language” used in the text (such as English or Spanish), but rather on how readers utilize their entire linguistic repertoire to comprehend the text. This perspective challenges the assumption of “language compartmentalization” in traditional language education, proposing a more fluid and authentic approach to understanding. For Hispanic bilingual students, this cross-linguistic perspective holds profound significance, as it acknowledges their natural switching between two cultures and languages as a strength rather than a flaw. It also prompts reflection on the drawbacks of an educational system overly fixated on “linguistic purity”—where schools often view language mixing as “distraction,” when in fact it embodies the very essence of bilingual thinking and creativity.

    1. But told to whom? Who is the reader I’m addressing when I am writing in English?

      My question: I wonder how writers from countries at war can tell their true stories when they write in English, which is not their first language. Do they lose part of their real voice? Or maybe writing in English helps them reach more people and fight back against silence. Can writing in another language be a kind of power, or does it lose the originality of the story?

      This question makes me think about how translation and writing can change how stories are heard and understood.

    2. To translate a text is to enter into the most intimate relationship with it possible. It is the translator’s body, almost more so than the translator’s mind, that is the vessel of transfer.

      Why it’s beautiful to me: This line feels beautiful because it turns the act of translation into something alive and human. Mounzer describes translation not as a mechanical task but as a relationship of empathy and feeling, almost like giving life to someone else’s experience inside your own body. As a reader, I find that image powerful because it shows that language connects people emotionally, and not just intellectually.

    3. When you say the word catastrophe, no one need ever ask which one it is you mean
      1. A place in the article where you have a question - try to make the question relevant to things we've been talking about in class, or relevant to your own life and interests.

      One of the most significant interests I have is colonial studies. To paraphrase a famous quote from Malcolm X, I find it incredibly interesting to examine the wound left by the knife of colonialism, and how it still affects the global south, in spite of the fact that many people refuse to admit that there is a wound. Through this interest I have learned a decent amount of history about many countries, like Botswana, Egypt, and Chile; but what's funny to me (as someone who is Arab) is that I have a huge gap in knowledge when it comes to the history (in particular post-Ottoman history) of the Arab world, especially the Levant. So the entire time I was reading this article I was searching my brain for particular conflicts in the area (unfortunately there is a nearly infinite number of those) that she could be referencing, but I couldn't put a finger on it.

      All this to say, I am very interested to know which conflicts she has personally experienced in the region.

    4. There is a violence in undoing someone’s words and reconstituting them in a vocabulary foreign to them, a vocabulary of your own choosing
      1. A sentence, expression, or paragraph that you felt told a very important and deep truth - what makes this truth important or special to you?

      I think this particular quote really hits at something I consider to be very true and does so in a very literarily rich way. Everyone in our class is bilingual, but I'm not sure how many people in our class have dual identities like I do. English and Arabic are not just languages to me; they represent two very different parts of my identity and my life, and so her description of the sometimes visceral nature of translation rings true for me. I experience it every day: I would say about 50% of the Arabic I understand, I don't understand through the Arabic language itself; it has to be filtered through and translated into English in my head for me to properly understand it. As for when I am speaking, I would say 80% of the Arabic I speak does not come from words or feelings that naturally come to me from the Arabic language; they come to me in English and I have to translate them. In the process, I feel like the words lose their ability to express my emotions, and this stripping of their true meaning is what this quote captures very well.

    5. They were light in English, yes, but also cumbersome and huge. Giant styrofoam shapes
      1. A sentence, expression, or paragraph that you found beautiful - why is it personally beautiful to you?

      I found this particular quote beautiful because, for whatever reason, styrofoam is one of those things that's very tactilely memorable to me. It's one of those things whose feeling I can instantly imagine once it's called to my attention, and the way the author uses it here is really beautiful in my opinion. It's such a great way to convey this unique feeling she is describing, where something is both light but still a burden, awkward to move with.

    1. The girls rejected mainstream spaces where they often felt marginalized and isolated, such as the ‘Main Street,’ a popular place to sit during lunch, recess, and after school. ‘Main Street’ was a ‘big hallway’ with tall ceilings and many windows located near the main school entrance. It reflected the racial, ethnic, and class diversity of Maple High. It was packed with many groups of students who often sat together based on race, class, and/or gender.

      They perceive the “Main Street” corridor in the main building as representing the school's social hierarchy and aesthetic power center—a sphere to which they do not belong. This rejection is not merely an avoidance of campus social structures but a symbolic critique of society: they refuse to conform to mainstream definitions of ‘attractiveness’ or “popularity,” instead choosing self-defined communities. By actively withdrawing from mainstream spaces, they forge new meaning and security within the “non-mainstream.” This behavior reveals how adolescents express social identity and cultural resistance through seemingly simple “spatial choices” in everyday campus life.

    2. The girls also co-invented a pan-Asian fused language in which Japanese functioned as an Esperanto, an international language. It was their version of ‘language crossing’ (rampton, 1995), using a language that did not ‘belong’ to them. Early in my fieldwork, I was surprised to hear the students use some Japanese words among themselves. While there were no Japanese students or teachers at Maple High, the school offered Japanese as a general language course, and many of the girls took it. Those who had fairly high Japanese skills through taking classes and/or actively watching Japanese dramas, movies, and anime took an active role in using Japanese words such as ‘nani’ (what?), ‘genki?’ (how are you?), and ‘onegai’ (please) with their friends. As the only proximal native Japanese speaker, they happily used a mix of English and Japanese when communicating with me and asked me to teach them Japanese. I often saw the girls carry binders, notebooks, and post-it notes with Japanese words (e.g. their names in Japanese) on them. One day after school, Mino and her basement friends spent time together at a nearby mall writing words and drawing pictures on Meli’s arms, hands, and legs. Mino later showed me a picture she drew on Meli’s arm: a cute rabbit face, which she called an ‘Asian face,’ with the Japanese word ‘kawaii’ written above it

      These girls have created a hybrid language blending elements of Japanese, English, Tagalog, and even Korean to express intimacy and identity among themselves. This linguistic practice demonstrates that they are not passively absorbing mainstream English culture, but actively constructing a multi-layered “pan-Asian cultural identity.” Simultaneously, it reveals the power dynamics underlying language—their choice of Japanese partly stems from Japanese culture's elevated status in global trends. This “cultural borrowing” serves as both a means of self-expression and a reflection of global cultural inequalities. This complexity lies at the heart of the tension inherent in cultural hybridity.

    3. Since the main goal of this study was to capture the experiences of Asian American girls, I did not include most of the other Basement Group students in my research. There may be gender, ethnic, and/or racial differences that are not reflected in this study. As an exception, I talked with Savannah and Meli, two Salvadoran immigrant girls who were close friends with the Asian American girls and part of the core members of the Basement Community. Their perspectives helped deepen my understanding of the experiences of the main participants

      The author focuses mainly on Asian American girls but includes insights from two Salvadoran immigrant girls to broaden the perspective. This shows an effort to include diverse voices and recognize that gender and ethnicity can shape school experiences in different ways.

    1. I give it most of the credit for the fact that ours isthe wealthiest, most technologically advanced, and most socially just soci-ety in human history, not to mention the fact that we have with ease be-come a military superpower .... The rest of the world is quite rightlyimpressed with us, and it is thus no accident that the United States ofAmerica has become the biggest single exporter of public law in the his-tory of humankind.

      I can't help but think that parts of this attitude expressed by Calabresi is debatable not just in light of the condition of the US in the present day, but even when he made these comments in 1998. Many would certainly disagree that the US is/was the "most socially just society in human history" nor was it the case that the US "with ease [became] a military superpower."


    1. Large circuits slow down proving time

      This is true, but not the concern in this case. Splitting wires does slow down the proving time, but the split is something we have to do.

    2. Purpose and Problem Solved: the Finalizer bridges the gap between symbolic execution and concrete circuit generation.

       Problem 1: Symbolic → Concrete Conversion. During execution, the Synthesizer works with symbolic pointers (e.g., StackPt, MemoryPt), while the backend prover needs concrete numerical wire connections. Solution: the Finalizer converts all symbolic references into actual wire indices and constraint equations.

       Problem 2: Circuit Optimization. Raw placement data from execution can be inefficient (redundant wires, unused connections), large circuits slow down proving time, and the EVM uses 256-bit values while Circom's finite field is 254-bit (field overflow risk). Solution: PlacementRefactor optimizes wire sizes, removes unnecessary connections, and splits 256-bit values into two 128-bit limbs for field compatibility.

       Problem 3: Backend Integration. The frontend and backend use different data structures, and the backend needs a standardized JSON format for circuit loading. Solution: the Permutation class generates JSON files that match the backend's expected schema.

       Problem 4: Witness Data Management. The circuit needs both structure (permutation) and concrete values (witness), and the witness data must align with circuit wire indices. Solution: the Finalizer generates permutation.json (structure) and placement-specific witness files.

      I think this introduction can be moved to the "Execution Flow" section.
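The 256-bit to two 128-bit limb split mentioned under Problem 2 can be sketched as follows. This is a minimal illustration of the arithmetic only, not the project's actual PlacementRefactor code; the function names are made up for the example.

```python
# Split a 256-bit EVM word into two 128-bit limbs so each limb fits
# comfortably inside a 254-bit prime field such as Circom's.

MASK_128 = (1 << 128) - 1

def split_limbs(value: int) -> tuple[int, int]:
    """Return (hi, lo) with value = hi * 2**128 + lo, each limb < 2**128."""
    if not 0 <= value < (1 << 256):
        raise ValueError("value must fit in 256 bits")
    return value >> 128, value & MASK_128

def join_limbs(hi: int, lo: int) -> int:
    """Inverse of split_limbs: recombine the two limbs into one word."""
    return (hi << 128) | lo

# Round-trip check on a word near the top of the 256-bit range.
word = (2**256 - 1) - 12345
hi, lo = split_limbs(word)
assert join_limbs(hi, lo) == word
```

Since each limb is below 2**128, it is far smaller than the 254-bit field modulus, so no wire value can overflow the field.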

    1. Be sure not to put this off. Theabove is what has to be communicated

      This is a very straightforward but very accurate and effective letter. He makes all the best arguments, pointing out that the English are either openly selling drugs out of their enormous greed, or are too unwilling or inefficient to control what their own traders are doing. It's funny how clearly it is just a kindly worded diss. He's pretty much saying that the English can either stop being evil, or China will not give them awesome stuff.

    2. Renowned for his competence in administering fiscal matters and public works, and his skill at governance.

      Likely also a philosopher / political philosopher. China had a very unique focus on the learning and philosophy of governance. I should consult my reading from PS10, but as I recall there was a rich history of political philosophy and development. Confucius himself was a statesman. Even the Dao which advocates for a solitary life gives governing advice. That is all to say that the Chinese had a rich history of political philosophy as well as great respect for good statesmen. There was a very different, service oriented, attitude which contrasts with the European binary of government by tyrant or republic.


    1. Though the President is by far the stronger of the two offices, the President and Prime Minister to some degree share executive power.

      The president is stronger than the PM, but they share executive power to some degree.

    2. There are a variety of different theories for why this might be so, but a dominant one is the idea that when the president does not enjoy the support of a majority of the legislature

      The reasons for troubled democracy in Latin America may include that when the president does NOT enjoy the support of the majority of the legislature, constitutional breakdown can follow.

    3. monarchical,

      A monarchy is a government in which a single person (the monarch) rules, but according to fixed and established laws.

      Principle: Honor — the motivation of nobles to serve the king and maintain hierarchy.

      The monarch’s power is limited by tradition, law, or institutions (like courts or parliaments).

      Example: France under Louis XIV, or Britain under a constitutional monarchy

    4. The constitutional struggles between the king and parliament in England in the seventeenth century gave rise to the related, but distinct, idea of a functional separation of powers, which is the core of the modern doctrine.

      Constitutional struggle between king & parliament in England in the 17th century gave rise to the idea of a functional separation of powers

    1. This... this line is chilling. Palantir, Curtis Yarvin, etc. It also ignores that Jim Crow wasn't passed by majority public opinion. It was enacted into law by a small group of elected officials elected by the majority of the white voting public. Most people don't vote, and in many areas the population that wasn't White was greater than or equal to the White population, but still didn't have a say. The system is not actually the majority opinion. Additionally, information is still filtered by someone, with all their biases, and the biases they built into the computer.


    1. Next, randomly combine one word from each list and spend time brainstorming around the mini-story they suggest.

      I really like how this strategy pushes people to think outside the box. But I do have concerns about how efficient it is, because it depends on “randomly combining” ideas.

    2. place them in a table, with the original words at the top of each column and the alternatives you have brainstormed arranged in columns below them.

      Nothing wrong with this one, but I found it more complicated than the other techniques in this article.

    3. If you’re trying to generate new product ideas, select images that are broadly evocative of your product category. Be sure to include some random or irrelevant images in your selections as well, because sometimes those types of stimuli can lead to the most creative solutions.

      This is perfect for visual learners and can definitely get ideas going. Usually when I'm trying to brainstorm I just see what comes up in my head, but using images could help me get where I want faster. I'll try this out for sure.

    1. Communities are made up of individuals and each individual comes from a unique background, maybe a different country, perhaps speaks and feels comfortable in different languages, has their own level of education, family status, and more.

      L- I love how this shows us how different but alike we all are.

    2. It was a lot of physical labor but it was a collaborative day which saw everyone work together to achieve success.

      S- I found this surprising because when most people encounter difficulty they tend to give up

    1. An academic coach/advisor uses GenAI to draft a tailored study plan for a student struggling in STEM courses. Then, the coach reviews and edits the plan to ensure fit. They also two strategies appropriate for the student, which GenAI missed, and have worked well for other STEM students.

      I think the last sentence is missing words, or was edited and no longer makes sense

    1. Reviewer #1 (Public review):

      This is a re-review following an author revision. I will go point-by-point in response to my original critiques and the authors' responses. I appreciate the authors taking the time to thoughtfully respond to the reviewer critiques.

      Query 1. Based on the authors' description of their contribution to the algorithm design, it sounds like a hyperparameter search wrapped around existing software tools. I think that the use of their own language to describe these modules is confusing to potential users as well as unintentionally hides the contributions of the original LigBuilder developers. The authors should just explain the protocol plainly using language that refers specifically to the established software tools. Whether they use LigBuilder or something else, at the end of the day the description is a protocol for a specific use of an existing software rather than the creation of a new toolkit.

      Query 2. I see. Correct me if I am mistaken, but it seems as though the authors are proposing using the Authenticator to identify the best distributions of compounds based on an in silico oracle (in this case, Vina score), and train to discriminate them. This is similar to training QSAR models to predict docking scores, such as in the manuscript I shared during the first round of review. In principle, one could perform this in successive rounds to create molecules that are increasingly composed of features that yield higher docking scores. This is an established idea that the authors demonstrate in a narrow context, but it also raises concern that one is just enriching for compounds with e.g., an abundance of hydrogen bond donors and acceptors. Regarding points (4) and (5), it is unclear to me how the authors perform train/test splits on unlabeled data with supervised machine learning approaches in this setting. This seems akin to a Y-scramble sanity check. Finally, regarding the discussion on the use of experimental data or FEP calculations for the determination of HABs and LABs, I appreciate the authors' point; however, the concern here is that in the absence of any true oracle the models will just learn to identify and/or generate compounds that exploit limitations of docking scores. Again, please correct me if I am mistaken. It is unclear to me how this advances previous literature in CADD outside of the specific context of incorporating some ideas into a GPCR-Gprotein framework.

      Query 3. The authors mention that the hyperparameters for the ML models are just the package defaults in the absence of specification by the user. It would be helpful to know specifically what the hyperparameters were for the benchmarks in this study; however, I think a deeper concern is still that these models are almost certainly far overparameterized given the limited training data used for the models. It is unclear why the authors did not just build a random forest classifier to discriminate their HABs and LABs using ligand- or protein-ligand interaction fingerprints or related ideas.

      Query 4. It is good, and expected, that increasing the fraction of the training set size in a random split validation all the way to 100% would allow the model to perfectly discriminate HABs and LABs. This does not demonstrate that the model has significant enrichment in prospective screening, particularly compared to simpler methods. The concern remains that these models are overparameterized and insufficiently validated. The authors did not perform any scaffold splits or other out-of-distribution analysis.

      Query 5. The authors contend that Gcoupler uniquely enables training models when data is scarce and ultra-large screening libraries are unavailable. Today, it is rather straightforward to dock a minimum of thousands of compounds. Using tools such as QuickVina2-GPU (https://pubs.acs.org/doi/10.1021/acs.jcim.2c01504), it is possible to quite readily dock millions in a day with a single GPU and obtain the AutoDock Vina score. GPU-accelerated Vina has likely been combined with cavity detection tools multiple times, including here (https://arxiv.org/abs/2506.20043). There are multiple cavity detection tools, including the ones the authors use in their protocol.

      Query 6. The authors contend that the simulations are converged, but they elected not to demonstrate stability of the predicted MM/GBSA binding energies with block averaging across the trajectory. This could have been done with the existing trajectories without additional simulation.

    2. Author response:

      The following is the authors’ response to the original reviews

      Public Reviews:

      Reviewer #1 (Public review):

      Summary

      Query: In this manuscript, the authors introduce Gcoupler, a Python-based computational pipeline designed to identify endogenous intracellular metabolites that function as allosteric modulators at the G protein-coupled receptor (GPCR) - Gα protein interface. Gcoupler is comprised of four modules:

      I. Synthesizer - identifies protein cavities and generates synthetic ligands using LigBuilder3

      II. Authenticator - classifies ligands into high-affinity binders (HABs) and low-affinity binders (LABs) based on AutoDock Vina binding energies

      III. Generator - trains graph neural network (GNN) models (GCM, GCN, AFP, GAT) to predict binding affinity using synthetic ligands

      IV. BioRanker - prioritizes ligands based on statistical and bioactivity data

      The authors apply Gcoupler to study the Ste2p-Gpa1p interface in yeast, identifying sterols such as zymosterol (ZST) and lanosterol (LST) as modulators of GPCR signaling. Our review will focus on the computational aspects of the work. Overall, we found the Gcoupler approach interesting and potentially valuable, but we have several concerns with the methods and validation that need to be addressed prior to publication/dissemination.

      We express our gratitude to Reviewer #1 for their concise summary and commendation of our work. We sincerely apologize for the lack of sufficient detail in summarizing the underlying methods employed in Gcoupler, as well as its subsequent experimental validations using yeast, human cell lines, and primary rat cardiomyocyte-based assays.

      We wish to state that substantial improvements have been made in the revised manuscript, every section has been elaborated upon to enhance clarity. Please refer to the point-by-point response below and the revised manuscript.

      Query: (1) The exact algorithmic advancement of the Synthesizer beyond being some type of application wrapper around LigBuilder is unclear. Is the grow-link approach mentioned in the methods already a component of LigBuilder, or is it custom? If it is custom, what does it do? Is the API for custom optimization routines new with the Synthesizer, or is this a component of LigBuilder? Is the genetic algorithm novel or already an existing software implementation? Is the cavity detection tool a component of LigBuilder or novel in some way? Is the fragment library utilized in the Synthesizer the default fragment library in LigBuilder, or has it been customized? Are there rules that dictate how molecule growth can occur? The scientific contribution of the Synthesizer is unclear. If there has not been any new methodological development, then it may be more appropriate to just refer to this part of the algorithm as an application layer for LigBuilder.

      We appreciate Reviewer #1's constructive suggestion. We wish to emphasize that

      (1) The LigBuilder software comprises various modules designed for distinct functions. The Synthesizer in Gcoupler strategically utilizes two of these modules: "CAVITY" for binding site detection and "BUILD" for de novo ligand design.

      (2) While both modules are integral to LigBuilder, the Synthesizer plays a crucial role in enabling their targeted, automated, and context-aware application for GPCR drug discovery.

      (3) The CAVITY module is a structure-based protein binding site detection program, which the Synthesizer employs for identifying ligand binding sites on the protein surface.

      (4) The Synthesizer also leverages the BUILD module for constructing molecules tailored to the target protein, implementing a fragment-based design strategy using its integrated fragment library.

      (5) The GROW and LINK methods represent two independent approaches encompassed within the aforementioned BUILD module.

      Author response image 1.

      Schematic representation of the key strategy used in the Synthesizer module of Gcoupler.

      Our manuscript details the "grow-link" hybrid approach, which was implemented using a genetic algorithm through the following stages:

      (1) Initial population generation based on a seed structure via the GROW method.

      (2) Selection of "parent" molecules from the current population for inclusion in the mating pool using the LINK method.

      (3) Transfer of "elite" molecules from the current population to the new population.

      (4) Population expansion through structural manipulations (mutation, deletion, and crossover) applied to molecules within the mating pool.

      Please note, the outcome of this process is not fixed, as it is highly dependent on the target cavity topology and the constraint parameters employed for population evaluation. Synthesizer customizes generational cycles and optimization parameters based on cavity-specific constraints, with the objective of either generating a specified number of compounds or comprehensively exploring chemical diversity against a given cavity topology.
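As a rough illustration of the four stages above, here is a toy generational loop in Python. Every name in it (the fragment list, the fitness function, the operators) is a hypothetical stand-in for illustration only, not LigBuilder's or Gcoupler's actual code:

```python
import random

# Hypothetical stand-ins: a "molecule" is a list of fragment ids and the
# fitness function is a toy score, not a real cavity-constraint evaluation.
FRAGMENTS = list(range(20))

def grow(seed, size=5):
    # Stage 1 analogue of GROW: extend a seed fragment into a full candidate.
    return seed + [random.choice(FRAGMENTS) for _ in range(size - len(seed))]

def fitness(mol):
    # Toy objective standing in for cavity-specific constraint evaluation.
    return sum(mol)

def crossover(a, b):
    # Combine two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(mol, rate=0.2):
    # Randomly swap fragments with a fixed per-position probability.
    return [random.choice(FRAGMENTS) if random.random() < rate else f for f in mol]

def evolve(seed, pop_size=30, generations=10, elite=4):
    pop = [grow([seed]) for _ in range(pop_size)]   # stage 1: initial population
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        new_pop = pop[:elite]                       # stage 3: carry elites over
        mating_pool = pop[:pop_size // 2]           # stage 2: select parents
        while len(new_pop) < pop_size:              # stage 4: crossover + mutation
            a, b = random.sample(mating_pool, 2)
            new_pop.append(mutate(crossover(a, b)))
        pop = new_pop
    return max(pop, key=fitness)
```

In the real pipeline, the fitness term would encode the cavity topology and druggability constraints described above, which is what makes the outcome cavity-dependent rather than fixed.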

      While these components are integral to LigBuilder, Synthesizer's innovation lies

      (1) in its programmatic integration and dynamic adjustment of these modules.

      (2) Synthesizer distinguishes itself not by reinventing these algorithms, but by their automated coordination, fine-tuning, and integration within a cavity-specific framework.

      (3) It dynamically modifies generation parameters according to cavity topology and druggability constraints, a capability not inherently supported by LigBuilder.

      (4) This renders Synthesizer particularly valuable in practical scenarios where manual optimization is either inefficient or impractical.

      In summary, Synthesizer offers researchers a streamlined interface, abstracting the technical complexities of LigBuilder and thereby enabling more accessible and reproducible ligand generation pipelines, especially for individuals with limited experience in structural or cheminformatics tools.

      Query: (2) The use of AutoDock Vina binding energy scores to classify ligands into HABs and LABs is problematic. AutoDock Vina's energy function is primarily tuned for pose prediction and displays highly system-dependent affinity ranking capabilities. Moreover, the HAB/LAB thresholds of -7 kcal/mol or -8 kcal/mol lack justification. Were these arbitrarily selected cutoffs, or was benchmarking performed to identify appropriate cutoffs? It seems like these thresholds should be determined by calibrating the docking scores with experimental binding data (e.g., known binders with measured affinities) or through re-scoring molecules with a rigorous alchemical free energy approach.

      We again express our gratitude to Reviewer #1 for these inquiries. We sincerely apologize for the lack of sufficient detail in the original version of the manuscript. In the revised manuscript, we have ensured the inclusion of a detailed rationale for every threshold utilized to prioritize high-affinity binders. Please refer to the comprehensive explanation below, as well as the revised manuscript, for further details.

      We would like to clarify that:

      (1) The Authenticator module is not solely reliant on absolute binding energy values for classification. Instead, it calculates binding energies for all generated compounds and applies a statistical decision-making layer to define HAB and LAB classes.

      (2) Rather than using fixed thresholds, the module employs distribution-based methods, such as the Empirical Cumulative Distribution Function (ECDF), to assess the overall energy landscape of the compound set. We then applied multiple statistical tests to evaluate the HAB and LAB distributions and determine an optimal, data-specific cutoff that balances class sizes and minimizes overlap.

      (3) This adaptive approach avoids rigid thresholds and instead ensures context-sensitive classification, with safeguards in place to maintain adequate representation of both classes for downstream model training. In this way, the framework prioritizes robust statistical reasoning over arbitrary energy cutoffs and reduces the risks associated with direct reliance on Vina scores alone.

      (4) To assess the necessity and effectiveness of the Authenticator module, we conducted a benchmarking analysis where we deliberately omitted the HAB and LAB class labels, treating the compound pool as a heterogeneous, unlabeled dataset. We then performed random train-test splits using the Synthesizer-generated compounds and trained independent models.

      (5) The results from this approach demonstrated notably poorer model performance, indicating that arbitrary or unstructured data partitioning does not effectively capture the underlying affinity patterns. These experiments highlight the importance of using the statistical framework within the Authenticator module to establish meaningful, data-driven thresholds for distinguishing High- and Low-Affinity Binders. The cutoff values are thus not arbitrary but emerge from a systematic benchmarking and validation process tailored to each dataset.

      Please note: While calibrating docking scores with experimental binding affinities or using rigorous methods like alchemical free energy calculations can improve precision, these approaches are often computationally intensive and reliant on the availability of high-quality experimental data, a major limitation in many real-world screening scenarios.

      In summary, the primary goal of Gcoupler is to enable fast, scalable, and broadly accessible screening, particularly for cases where experimental data is sparse or unavailable. Incorporating such resource-heavy methods would not only significantly increase computational overhead but also undermine the framework’s intended usability and efficiency for large-scale applications. Instead, our workflow relies on statistically robust, data-driven classification methods that balance speed, generalizability, and practical feasibility.
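A minimal illustration of the distribution-based cutoff idea, assuming a small list of hypothetical Vina-style energies. This is a sketch of the concept only; the actual Authenticator also applies statistical significance tests that are omitted here:

```python
from statistics import mean

def ecdf(values):
    """Empirical CDF: (value, fraction of values <= value) pairs."""
    xs = sorted(values)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

def pick_cutoff(energies, min_frac=0.2):
    """Scan candidate cutoffs along the ECDF, keep those leaving at least
    min_frac of compounds in each class, and pick the one maximising the
    separation of class means (more negative score = stronger binding)."""
    best, best_sep = None, float("-inf")
    for cut, frac in ecdf(energies):
        if frac < min_frac or frac > 1 - min_frac:
            continue  # would leave one class too small for training
        hab = [e for e in energies if e <= cut]  # candidate high-affinity binders
        lab = [e for e in energies if e > cut]   # candidate low-affinity binders
        sep = mean(lab) - mean(hab)
        if sep > best_sep:
            best, best_sep = cut, sep
    return best
```

On a bimodal toy set such as [-9.1, -8.7, -8.4, -7.9, -6.2, -6.0, -5.8, -5.5, -5.1, -4.9], this lands the cutoff at the gap between the two clusters (-7.9) rather than at any fixed threshold, which is the behaviour the adaptive classification aims for.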

      Query: (3) Neither the Results nor Methods sections provide information on how the GNNs were trained in this study. Details such as node features, edge attributes, standardization, pooling, activation functions, layers, dropout, etc., should all be described in detail. The training protocol should also be described, including loss functions, independent monitoring and early stopping criteria, learning rate adjustments, etc.

      We again thank Reviewer #1 for this suggestion. We would like to mention that in the revised manuscript, we have added all the requested details. Please refer to the points below for more information.

      (1) The Generator module of Gcoupler is designed as a flexible and automated framework that leverages multiple Graph Neural Network architectures, including Graph Convolutional Model (GCM), Graph Convolutional Network (GCN), Attentive FP, and Graph Attention Network (GAT), to build classification models based on the synthetic ligand datasets produced earlier in the pipeline.

      (2) By default, Generator tests all four models using standard hyperparameters provided by the DeepChem framework (https://deepchem.io/), offering a baseline performance comparison across architectures. This includes pre-defined choices for node features, edge attributes, message-passing layers, pooling strategies, activation functions, and dropout values, ensuring reproducibility and consistency. All models are trained with binary cross-entropy loss and support default settings for early stopping, learning rate, and batch standardization where applicable.

      (3) In addition, Generator supports model refinement through hyperparameter tuning and k-fold cross-validation (default: 3 folds). Users can either customize the hyperparameter grid or rely on Generator’s recommended parameter ranges to optimize model performance. This allows for robust model selection and stability assessment of tuned parameters.

      (4) Finally, the trained models can be used to predict binding probabilities for user-supplied compounds, making it a comprehensive and user-adaptive tool for ligand screening.

      Based on Reviewer #1's suggestion, we have now added a detailed description of the Generator module of Gcoupler and provided relevant citations for the DeepChem workflow.

      Query: (4) GNN model training seems to occur on at most 500 molecules per training run? This is unclear from the manuscript. That is a very small number of training samples if true. Please clarify. How was upsampling performed? What were the HAB/LAB class distributions? In addition, it seems as though only synthetically generated molecules are used for training, and the task is to discriminate synthetic molecules based on their docking scores. Synthetic ligands generated by LigBuilder may occupy distinct chemical space, making classification trivial, particularly in the setting of a random split k-folds validation approach. In the absence of a leave-class-out validation, it is unclear if the model learns generalizable features or exploits clear chemical differences. Historically, it was inappropriate to evaluate ligand-based QSAR models on synthetic decoys such as the DUD-E sets - synthetic ligands can be much more easily distinguished by heavily parameterized ligand-based machine learning models than by physically constrained single-point docking score functions.

      We thank reviewer #1 for these detailed technical queries. We would like to clarify that:

      (1) The recommended minimum for the training set is 500 molecules, but users can add as many synthesized compounds as needed to thoroughly explore the chemical space related to the target cavity.

      (2) Our systematic evaluation demonstrated that expanding the training set size consistently enhanced model performance, especially when compared to AutoDock docking scores. This observation underscores the framework's scalability and its ability to improve predictive accuracy with more training compounds.

      (3) The Authenticator module initially categorizes all synthesized molecules into HAB and LAB classes. These labeled molecules are then utilized for training the Generator module. To tackle class imbalance, the class with fewer data points undergoes upsampling. This process aims to achieve an approximate 1:1 ratio between the two classes, thereby ensuring balanced learning during GNN model training.

      (4) The Authenticator module's affinity scores are the primary determinant of the HAB/LAB class distribution, with a higher cutoff for HABs ensuring statistically significant class separation. This distribution is also indirectly shaped by the target cavity's topology and druggability, as the Synthesizer tends to produce more potent candidates for cavities with favorable binding characteristics.

      (5) While it's true that synthetic ligands may occupy distinct chemical space, our benchmarking exploration for different sites on the same receptor still showed inter-cavity specificity along with intra-cavity diversity of the synthesized molecules.

      (6) The utility of random k-fold validation shouldn't be dismissed outright; it provides a reasonable estimate of performance under practical settings where class boundaries are often unknown. Nonetheless, we agree that complementary validation strategies like leave-class-out could further strengthen the robustness assessment.

      (7) We agree that using synthetic decoys like those from the DUD-E dataset can introduce bias in ligand-based QSAR model evaluations if not handled carefully. In our workflow, the inclusion of DUD-E compounds is entirely optional and only considered as a fallback, specifically in scenarios where the number of low-affinity binders (LABs) synthesized by the Synthesizer module is insufficient to proceed with model training.

      (8) The primary approach relies on classifying generated compounds based on their derived affinity scores via the Authenticator module. However, in rare cases where this results in a heavily imbalanced dataset, DUD-E compounds are introduced not as part of the core benchmarking, but solely to maintain minimal class balance for initial model training. Even then, care is taken to interpret results with this limitation in mind. Ultimately, our framework is designed to prioritize data-driven generation of both HABs and LABs, minimizing reliance on synthetic decoys wherever possible.

      Author response image 2.

      Scatter plots depicting the segregation of High/Low-Affinity Metabolites (HAM/LAM) (indicated in green and red) identified using the Gcoupler workflow with 100% of the training data. Notably, models trained on smaller training data sizes (25%, 50%, and 75% of HAB/LAB) severely failed to segregate HAM and LAM (along the Y-axis). The X-axis represents the binding affinity calculated by IC4-specific docking with AutoDock.

      Based on Reviewer #1's suggestion, we have now added all these technical details in the revised version of the manuscript.
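The 1:1 upsampling described in point (3) is simple to sketch. The following is a generic Python illustration, not the actual Gcoupler implementation:

```python
import random

def upsample_balance(hab, lab, seed=0):
    """Resample the minority class with replacement until the HAB and LAB
    lists are the same size (~1:1 ratio). Assumes both lists are non-empty."""
    rng = random.Random(seed)
    hab, lab = list(hab), list(lab)  # work on copies, leave inputs intact
    small, large = (hab, lab) if len(hab) < len(lab) else (lab, hab)
    extra = [rng.choice(small) for _ in range(len(large) - len(small))]
    small.extend(extra)
    return hab, lab
```

Balancing the classes this way prevents the GNN from trivially favouring the majority label during training, at the cost of repeating some minority-class molecules.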

      Query: (5) Training QSAR models on docking scores to accelerate virtual screening is not in itself novel (see here for a nice recent example: https://www.nature.com/articles/s43588-025-00777-x), but can be highly useful to focus structure-based analysis on the most promising areas of ligand chemical space; however, we are perplexed by the motivation here. If only a few hundred or a few thousand molecules are being sampled, why not just use AutoDock Vina? The models are trained to try to discriminate molecules by AutoDock Vina score rather than experimental affinity, so it seems like we would ideally just run Vina? Perhaps we are misunderstanding the scale of the screening that was done here. Please clarify the manuscript methods to help justify the approach.

      We acknowledge the effectiveness of training QSAR models on docking scores for prioritizing chemical space, as demonstrated by the referenced study (https://www.nature.com/articles/s43588-025-00777-x) on machine-learning-guided docking screen frameworks.

      We would like to mention that:

      (1) Such protocols often rely on extensive pre-docked datasets across numerous protein targets, or utilize a highly skewed input distribution, training on as little as 1-10% of ligand-protein complexes and testing on the remainder in iterative cycles.

      (2) While powerful for ultra-large libraries, this approach can introduce bias towards the limited training set and incur significant overhead in data curation, pre-computation, and infrastructure.

      (3) In contrast, Gcoupler prioritizes flexibility and accessibility, especially when experimental data is scarce and large pre-docked libraries are unavailable. Instead of depending on fixed docking scores from external pipelines, Gcoupler integrates target-specific cavity detection, de novo compound generation, and model training into a self-contained, end-to-end framework. Its QSAR models are trained directly on contextually relevant compounds synthesized for a given binding site, employing a statistical classification strategy that avoids arbitrary thresholds or precomputed biases.

      (4) Furthermore, Gcoupler is open-source, lightweight, and user-friendly, making it easily deployable without the need for extensive infrastructure or prior docking expertise. While not a complete replacement for full-scale docking in all use cases, Gcoupler aims to provide a streamlined and interpretable screening framework that supports both focused chemical design and broader chemical space exploration, without the computational burden associated with deep learning docking workflows.

      (5) Practically, even with computational resources, manually running AutoDock Vina on millions of compounds presents challenges such as format conversion, binding site annotation, grid parameter tuning, and execution logistics, all typically requiring advanced structural bioinformatics expertise.

      (6) Gcoupler's Authenticator module, however, streamlines this process. Users only need to input a list of SMILES and a receptor PDB structure, and the module automatically handles compound preparation, cavity mapping, parameter optimization, and high-throughput scoring. This automation reduces time and effort while democratizing access to structure-based screening workflows for users without specialized expertise.

      Ultimately, Gcoupler's motivation is to make large-scale, structure-informed virtual screening both efficient and accessible. The model serves as a surrogate to filter and prioritize compounds before deeper docking or experimental validation, thereby accelerating targeted drug discovery.

      Query: (6) The brevity of the MD simulations raises some concerns that the results may be over-interpreted. RMSD plots do not reliably compare the affinity behavior in this context because of the short timescales coupled with the dramatic topological differences between the ligands being compared; CoQ6 is long and highly flexible compared to ZST and LST. Convergence metrics, such as block averaging and time-dependent MM/GBSA energies, should be included over much longer timescales. For CoQ6, the authors may need to run multiple simulations of several microseconds, identify the longest-lived metastable states of CoQ6, and perform MM/GBSA energies for each state weighted by each state's probability.

      We appreciate Reviewer #1's suggestion regarding simulation length, as it is indeed crucial for interpreting molecular dynamics (MD) outcomes. We would like to mention that:

      (1) Our simulation strategy varied based on the analysis objective, ranging from short (~5 ns) runs for preliminary or receptor-only evaluations to intermediate (~100 ns) and extended (~550 ns) runs for receptor-ligand complex validation and stability assessment.

      (2) Specifically, we conducted three independent 100 ns MD simulations for each receptor-metabolite complex in distinct cavities of interest. This allowed us to assess the reproducibility and persistence of binding interactions. To further support these observations, a longer 550 ns simulation was performed for the IC4 cavity, which reinforced the 100 ns findings by demonstrating sustained interaction stability over extended timescales.

      (3) While we acknowledge that even longer simulations (e.g., in the microsecond range) could provide deeper insights into metastable state transitions, especially for highly flexible molecules like CoQ6, our current design balances computational feasibility with the goal of screening multiple cavities and ligands.

      (4) In our current workflow, MM/GBSA binding free energies were calculated by extracting 1000 representative snapshots from the final 10 ns of each MD trajectory. These configurations were used to compute time-averaged binding energies, incorporating contributions from van der Waals, electrostatic, polar, and non-polar solvation terms. This approach offers a more reliable estimate of ligand binding affinity compared to single-point molecular docking, as it accounts for conformational flexibility and dynamic interactions within the binding cavity.
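To make the averaging step above concrete, the following sketch shows how per-snapshot binding energies combine into an ensemble-averaged MM/GBSA estimate. All function names and energy values are illustrative placeholders, not outputs of our actual pipeline:

```python
# Sketch: ensemble-averaged MM/GBSA binding free energy from MD snapshots.
# Each snapshot contributes DeltaG_i = E_complex - E_receptor - E_ligand,
# where each E is the sum of van der Waals, electrostatic, polar (GB),
# and non-polar (SA) solvation terms. All numbers are illustrative.

from statistics import mean

def snapshot_dg(terms):
    """Per-snapshot binding energy (kcal/mol): component sum for the
    complex minus receptor minus ligand."""
    return sum(terms["complex"]) - sum(terms["receptor"]) - sum(terms["ligand"])

def ensemble_dg(snapshots):
    """Time-averaged binding free energy over the snapshot ensemble."""
    return mean(snapshot_dg(s) for s in snapshots)

# Two illustrative snapshots with (vdW, elec, polar, non-polar) terms:
snapshots = [
    {"complex": (-60.0, -25.0, 40.0, -5.0),
     "receptor": (-30.0, -10.0, 25.0, -2.0),
     "ligand":   (-5.0, -3.0, 8.0, -1.0)},
    {"complex": (-58.0, -27.0, 41.0, -5.0),
     "receptor": (-30.0, -11.0, 25.0, -2.0),
     "ligand":   (-5.0, -3.0, 8.0, -1.0)},
]

print(round(ensemble_dg(snapshots), 2))  # mean of -32.0 and -30.0 -> -31.0
```

In the actual workflow, the ensemble comprises 1000 snapshots drawn from the final 10 ns of each trajectory rather than the two shown here.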

      (5) Although we did not explicitly perform state-specific MM/GBSA calculations weighted by metastable state probabilities, our use of ensemble-averaged energy estimates from a thermally equilibrated segment of the trajectory captures many of the same benefits. We acknowledge, however, that a more rigorous decomposition based on metastable state analysis could offer finer resolution of binding behavior, particularly for highly flexible ligands like CoQ6, and we consider this a valuable direction for future refinement of the framework.

      Reviewer #2 (Public review):

      Summary:

      Query: Mohanty et al. present a new deep learning method to identify intracellular allosteric modulators of GPCRs. This is an interesting field for e.g. the design of novel small molecule inhibitors of GPCR signalling. A key limitation, as mentioned by the authors, is the limited availability of data. The method presented, Gcoupler, aims to overcome these limitations, as shown by experimental validation of sterols in the inhibition of Ste2p, which have been shown to be relevant molecules in human and rat cardiac hypertrophy models. They have made their code available for download and installation, which can easily be followed to set up the software on a local machine.

      Strengths:

      Clear GitHub repository

      Extensive data on yeast systems

      We sincerely thank Reviewer #2 for their thorough review, summary, and appreciation of our work. We highly value their comments and suggestions.

      Weaknesses:

      Query: No assay to directly determine the affinity of the compounds to the protein of interest.

      We thank Reviewer #2 for raising these insightful questions. During the experimental design phase, we carefully accounted for validating the impact of metabolites in the rescue response by pheromone.

      We would like to mention that we performed an array of methods to validate our hypothesis and observed similar rescue effects. These assays include:

      a. Cell viability assay (FDA/PI Fluorometry-based)

      b. Cell growth assay

      c. FUN1<sup>TM</sup>-based microscopy assessment

      d. Shmoo formation assays

      e. Mating assays

      f. Site-directed mutagenesis-based loss of function

      g. Transgenic reporter-based assay

      h. MAPK signaling assessment using Western blot.

      i. And via computational techniques.

      Concerning the in vitro interaction studies of Ste2p and metabolites, we made significant efforts to purify Ste2p by incorporating a His tag at the N-terminal. Despite dedicated attempts over the past year, we were unsuccessful in purifying the protein, primarily due to our limited expertise in protein purification for this specific system. As a result, we opted for genetic-based interventions (e.g., point mutants), which provide a more physiological and comprehensive approach to demonstrating the interaction between Ste2p and the metabolites.

      Author response image 3.

      (a) Affinity purification of Ste2p from Saccharomyces cerevisiae. Western blot analysis using anti-His antibody showing the distribution of Ste2p in various fractions during the affinity purification process. The fractions include pellet, supernatant, wash buffer, and sequential elution fractions (1–4). Wild-type and ste2Δ strains served as positive and negative controls, respectively. (b) Optimization of Ste2p extraction protocol. Ponceau staining (left) and Western blot analysis using anti-His antibody (right) showing Ste2p extraction efficiency. The conditions tested include lysis buffers containing different concentrations of CHAPS detergent (0.5%, 1%) and glycerol (10%, 20%).

      Furthermore, in addition to the clarification above, we have added the following statement in the discussion section to tone down our claims: “A critical limitation of our study is the absence of direct binding assays to validate the interaction between the metabolites and Ste2p. While our results from genetic interventions, molecular dynamics simulations, and docking studies strongly suggest that the metabolites interact with the Ste2p-Gpa1 interface, these findings remain indirect. Direct binding confirmation through techniques such as surface plasmon resonance, isothermal titration calorimetry, or co-crystallization would provide definitive evidence of this interaction. Addressing this limitation in future work would significantly strengthen our conclusions and provide deeper insights into the precise molecular mechanisms underlying the observed phenotypic effects.”

      We request Reviewer #2 to kindly refer to the assays conducted on the point mutants created in this study, as these experiments offer robust evidence supporting our claims.

      Query: In conclusion, the authors present an interesting new method to identify allosteric inhibitors of GPCRs, which can easily be employed by research labs. While they have made efforts to characterize the compounds in yeast cells to confirm their findings, it would be beneficial if the authors showed their compounds are active in a simple binding assay.

      We express our gratitude and sincere appreciation for the time and effort dedicated by Reviewer #2 in reviewing our manuscript. We are confident that our clarifications address the reviewer's concerns.

      Reviewer #3 (Public review):

      Summary:

      Query: In this paper, the authors introduce the Gcoupler software, an open-source deep learning-based platform for structure-guided discovery of ligands targeting GPCR interfaces. Overall, this manuscript represents a field-advancing contribution at the intersection of AI-based ligand discovery and GPCR signaling regulation.

      Strengths:

      The paper presents a comprehensive and well-structured workflow combining cavity identification, de novo ligand generation, statistical validation, and graph neural network-based classification. Notably, the authors use Gcoupler to identify endogenous intracellular sterols as allosteric modulators of the GPCR-Gα interface in yeast, with experimental validations extending to mammalian systems. The ability to systematically explore intracellular metabolite modulation of GPCR signaling represents a novel and impactful contribution. This study significantly advances the field of GPCR biology and computational ligand discovery.

      We thank and appreciate Reviewer #3 for investing time and effort in reviewing our manuscript and for appreciating our work.

      Recommendations for the authors:

      Reviewing Editor Comments:

      We encourage the authors to address the points raised during revision to elevate the assessment from "incomplete" to "solid" or ideally "convincing." In particular, we ask the authors to improve the justification for their methodological choices and to provide greater detail and clarity regarding each computational layer of the pipeline.

      We are grateful for the editors' suggestions. We have incorporated significant revisions into the manuscript, providing comprehensive technical details to prevent any misunderstandings. Furthermore, we meticulously explained every aspect of the computational workflow.

      Reviewer #2 (Recommendations for the authors):

      Query: Would it be possible to make the package itself pip installable?

      Yes, it already existed on the TestPyPI repository, and we have now migrated it to the main PyPI index. Please access it here: https://pypi.org/project/gcoupler/

      Query: I am confused by the binding free energies reported in Supplementary Figure 8. Is the total DG reported that of the protein-ligand complex? If that is the case, the affinities of the ligands would be extremely high. They are also very far off from the reported -7 kcal/mol active/inactive cut-off.

      We thank Reviewer #2 for this query. We would like to mention that we have provided a detailed explanation in the point-by-point response to Reviewer #2's original comment. Briefly, to clarify, the -7 kcal/mol active/inactive cutoff mentioned in the manuscript refers specifically to the docking-based binding free energies (ΔG) calculated using AutoDock or AutoDock Vina, which are used for compound classification or validation against the Gcoupler framework.

      In contrast, the binding free energies reported in Supplementary Figure 8 are obtained through the MM-GBSA method, which provides a more detailed and physics-based estimate of binding affinity by incorporating solvation and enthalpic contributions. It is well-documented in the literature that MM-GBSA tends to systematically underestimate absolute binding free energies when compared to experimental values (10.2174/1568026616666161117112604; Table 1).

      Author response image 4.

      Scatter plot comparing the predicted binding affinity calculated by Docking and MM/GBSA methods, against experimental ΔG (10.1007/s10822-023-00499-0)

      Our use of MM-GBSA is not to match experimental ΔG directly, but rather to assess relative binding preferences among ligands. Despite its limitations in predicting absolute affinities, MM-GBSA is known to perform better than docking for ranking compounds by their binding potential. In this context, a more negative MM-GBSA energy value still reliably indicates stronger predicted binding, even if the absolute values are substantially larger in magnitude than typical experimental or docking-derived cutoffs.

      Thus, the two energy values, docking-based and MM-GBSA, serve different purposes in our workflow. Docking scores are used for classification and thresholding, while MM-GBSA energies provide post hoc validation and a higher-resolution comparison of binding strength across compounds.
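A minimal sketch of this two-scale workflow, assuming hypothetical compound names and energies: docking ΔG is thresholded at -7 kcal/mol for active/inactive classification, while MM/GBSA energies are used only for relative ranking (more negative, stronger predicted binding):

```python
# Sketch of the two roles the energy scales play (illustrative values only).
# Docking-based DeltaG is thresholded for classification; MM/GBSA energies
# are compared *relatively* to rank compounds, never against the cutoff.

DOCKING_CUTOFF = -7.0  # kcal/mol classification threshold from the manuscript

def classify(docking_dg):
    """Active/inactive label from the docking-based binding energy."""
    return "active" if docking_dg <= DOCKING_CUTOFF else "inactive"

def rank_by_mmgbsa(compounds):
    """Rank compounds: more negative MM/GBSA energy ranks first."""
    return sorted(compounds, key=lambda c: c["mmgbsa"])

compounds = [
    {"name": "lig_A", "docking": -8.2, "mmgbsa": -45.1},
    {"name": "lig_B", "docking": -6.1, "mmgbsa": -20.3},
    {"name": "lig_C", "docking": -7.5, "mmgbsa": -52.7},
]

labels = {c["name"]: classify(c["docking"]) for c in compounds}
ranking = [c["name"] for c in rank_by_mmgbsa(compounds)]
print(labels)   # lig_A and lig_C pass the docking cutoff
print(ranking)  # lig_C binds most strongly by MM/GBSA
```

Note that an MM/GBSA value of -45 kcal/mol is never compared against the -7 kcal/mol docking cutoff; the two numbers live on different scales.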

      Query: To corroborate their findings, can the authors include direct binding affinity assays for yeast and human Ste2p? This will help in establishing whether the observed phenotypic effects are indeed driven by binding of the metabolites.

      We thank Reviewer #2 for raising these insightful questions. During the experimental design phase, we carefully accounted for validating the impact of metabolites in the rescue response by pheromone.

      We would like to mention that we performed an array of methods to validate our hypothesis and observed similar rescue effects. These assays include:

      a. Cell viability assay (FDA/PI Fluorometry-based)

      b. Cell growth assay

      c. FUN1<sup>TM</sup>-based microscopy assessment

      d. Shmoo formation assays

      e. Mating assays

      f. Site-directed mutagenesis-based loss of function

      g. Transgenic reporter-based assay

      h. MAPK signaling assessment using Western blot.

      i. And via computational techniques.

      Concerning the in vitro interaction studies of Ste2p and metabolites, we made significant efforts to purify Ste2p by incorporating a His tag at the N-terminal. Despite dedicated attempts over the past year, we were unsuccessful in purifying the protein, primarily due to our limited expertise in protein purification for this specific system. As a result, we opted for genetic-based interventions (e.g., point mutants), which provide a more physiological and comprehensive approach to demonstrating the interaction between Ste2p and the metabolites.

      Furthermore, in addition to the clarification above, we have added the following statement in the discussion section to tone down our claims: “A critical limitation of our study is the absence of direct binding assays to validate the interaction between the metabolites and Ste2p. While our results from genetic interventions, molecular dynamics simulations, and docking studies strongly suggest that the metabolites interact with the Ste2p-Gpa1 interface, these findings remain indirect. Direct binding confirmation through techniques such as surface plasmon resonance, isothermal titration calorimetry, or co-crystallization would provide definitive evidence of this interaction. Addressing this limitation in future work would significantly strengthen our conclusions and provide deeper insights into the precise molecular mechanisms underlying the observed phenotypic effects.”

      We request Reviewer #2 to kindly refer to the assays conducted on the point mutants created in this study, as these experiments offer robust evidence supporting our claims.

      Query: Did the authors perform expression assays to make sure the mutant proteins were similarly expressed to wt?

      We thank reviewer #2 for this comment. We would like to mention that:

      (1) In our mutants (S75A, T155D, L289K)-based assays, all mutants were generated using integration at the same chromosomal TRP1 locus under the GAL1 promoter and share the same C-terminal CYC1 terminator sequence used for the reconstituted wild-type (rtWT) construct, thus reducing the likelihood of strain-specific expression differences.

      (2) Furthermore, all strains were grown under identical conditions using the same media, temperature, and shaking parameters. Each construct underwent the same GAL1 induction protocol in YPGR medium for identical durations, ensuring uniform transcriptional activation across all strains and minimizing culture-dependent variability in protein expression.

      (3) Importantly, both the rtWT and two of the mutants (T155D, L289K) retained α-factor-induced cell death (PI and FUN1-based fluorometry and microscopy; Figure 4c-d) and MAPK activation (western blot; Figure 4e), demonstrating that the mutant proteins are expressed at levels sufficient to support signalling.

      Reviewer #3 (Recommendations for the authors):

      My comments that would enhance the impact of this method are:

      (1) While the authors have compared the accuracy and efficiency of Gcoupler to AutoDock Vina, one of the main points of Gcoupler is the neural network module. It would be beneficial to have it evaluated against other available deep learning ligand generative modules, such as the following: 10.1186/s13321-024-00829-w, 10.1039/D1SC04444C.

      Thank you for the observation. To clarify, our benchmarking of Gcoupler’s accuracy and efficiency was performed against AutoDock, not AutoDock Vina. This choice was intentional, as AutoDock is one of the most widely used classical techniques in computer-aided drug design (CADD) for obtaining high-resolution predictions of ligand binding energy, binding poses, and detailed atomic-level interactions with receptor residues. In contrast, AutoDock Vina is primarily optimized for large-scale virtual screening, offering faster results but typically with lower resolution and limited configurational detail.

      Since Gcoupler is designed to balance accuracy with computational efficiency in structure-based screening, AutoDock served as a more appropriate reference point for evaluating its predictions.

      We agree that benchmarking against other deep learning-based ligand generative tools is important for contextualizing Gcoupler’s capabilities. However, it's worth noting that only a few existing methods focus specifically on cavity- or pocket-driven de novo drug design using generative AI, and among them, most are either partially closed-source or limited in functionality.

      While PocketCrafter (10.1186/s13321-024-00829-w) offers a structure-based generative framework, it differs from Gcoupler in several key respects. PocketCrafter requires proprietary preprocessing tools, such as the MOE QuickPrep module, to prepare protein pocket structures, limiting its accessibility and reproducibility. In addition, PocketCrafter’s pipeline stops at the generation of cavity-linked compounds and does not support any further learning from the generated data.

      Similarly, DeepLigBuilder (10.1039/D1SC04444C) provides de novo ligand generation using deep learning, but the source code is not publicly available, preventing direct benchmarking or customization. Like PocketCrafter, it also lacks integrated learning modules, which limits its utility for screening large, user-defined libraries or compounds of interest.

      Additionally, tools like AutoDesigner from Schrödinger, while powerful, are not publicly accessible and hence fall outside the scope of open benchmarking.

      Author response table 1.

      Comparison of de novo drug design tools. SBDD refers to Structure-Based Drug Design, and LBDD refers to Ligand-Based Drug Design.

      In contrast, Gcoupler is a fully open-source, end-to-end platform that integrates both Ligand-Based and Structure-Based Drug Design. It spans from cavity detection and molecule generation to automated model training using GNNs, allowing users to evaluate and prioritize candidate ligands across large chemical spaces without the need for commercial software or advanced coding expertise.

      (2) In Figure 2, the authors mention that IC4 and IC5 potential binding sites are on the direct G protein coupling interface ("This led to the identification of 17 potential surface cavities on Ste2p, with two intracellular regions, IC4 and IC5, accounting for over 95% of the Ste2p-Gpa1p interface (Figure 2a-b, Supplementary Figure 4j-n)..."). Later, however, in Figure 4, when discussing which residues affect the binding of the metabolites the most, the authors didn't perform MD simulations of mutant STE2 and just Gpa1p (without metabolites present). It would be beneficial to compare the binding of G protein with and without metabolites present, as these interface mutations might be affecting the binding of G protein by itself.

      Thank you for this insightful suggestion. While we did not perform in silico MD simulations of the mutant Ste2-Gpa1 complex in the absence of metabolites, we conducted experimental validation to functionally assess the impact of interface mutations. Specifically, we generated site-directed mutants (S75A, L289K, T155D) and expressed them in a ste2Δ background to isolate their effects.

      As shown in the Supplementary Figure, these mutants failed to rescue cells from α-factor-induced programmed cell death (PCD) upon metabolite pre-treatment. This was confirmed through fluorometry-based viability assays, FUN1<sup>TM</sup> staining, and p-Fus3 signaling analysis, which collectively monitor MAPK pathway activation (Figure 4c–e).

      Importantly, the induction of PCD in response to α-factor in these mutants demonstrates that G protein coupling is still functionally intact, indicating that the mutations do not interfere with Gpa1 binding itself. However, the absence of rescue by metabolites strongly suggests that the mutated residues play a direct role in metabolite binding at the Ste2p–Gpa1p interface, thus modulating downstream signaling.

      While further MD simulations could provide structural insight into the isolated mutant receptor–G protein interaction, our experimental data supports the functional relevance of metabolite binding at the identified interface.

      Query: (3) While the experiments performed by the authors do support the hypothesis that metabolites regulate GPCR signaling, there are no direct biophysical measurements (e.g., dissociation constants are estimated only in silico).

      We thank Reviewer #3 for raising these insightful comments. We would like to mention that we performed an array of methods to validate our hypothesis and observed similar rescue effects. These assays include:

      a. Cell viability assay (FDA/PI Fluorometry-based)

      b. Cell growth assay

      c. FUN1<sup>TM</sup>-based microscopy assessment

      d. Shmoo formation assays

      e. Mating assays

      f. Site-directed mutagenesis-based loss of function

      g. Transgenic reporter-based assay

      h. MAPK signaling assessment using Western blot.

      i. And via computational techniques.

      Concerning the direct biophysical measurements of Ste2p and metabolites, we made significant efforts to purify Ste2p by incorporating a His tag at the N-terminal, with the goal of performing Microscale Thermophoresis (MST) and Isothermal Titration Calorimetry (ITC) measurements. Despite dedicated attempts over the past year, we were unsuccessful in purifying the protein, primarily due to our limited expertise in protein purification for this specific system. As a result, we opted for genetic-based interventions (e.g., point mutants), which provide a more physiological and comprehensive approach to demonstrating the interaction between Ste2p and the metabolites.

      Furthermore, in addition to the clarification above, we have added the following statement in the discussion section to tone down our claims: “A critical limitation of our study is the absence of direct binding assays to validate the interaction between the metabolites and Ste2p. While our results from genetic interventions, molecular dynamics simulations, and docking studies strongly suggest that the metabolites interact with the Ste2p-Gpa1 interface, these findings remain indirect. Direct binding confirmation through techniques such as surface plasmon resonance, isothermal titration calorimetry, or co-crystallization would provide definitive evidence of this interaction. Addressing this limitation in future work would significantly strengthen our conclusions and provide deeper insights into the precise molecular mechanisms underlying the observed phenotypic effects.”

      (4) The authors do not discuss the effects of the metabolites at their physiological concentrations. Overall, this manuscript represents a field-advancing contribution at the intersection of AI-based ligand discovery and GPCR signaling regulation.

      We thank reviewer #3 for this comment and for recognising the value of our work. Although direct quantification of intracellular free metabolite levels is challenging, several lines of evidence support the physiological relevance of our test concentrations.

      - Genetic validation supports endogenous relevance: Our genetic screen of 53 metabolic knockout mutants showed that deletions in the biosynthetic pathways for these metabolites consistently disrupted α-factor-induced cell death: the vast majority of strains (94.4%) resisted it, and notably, a subset even displayed accelerated growth in the presence of α-factor. This suggests that endogenous levels of these metabolites normally provide some degree of protection, supporting their physiological role in GPCR regulation.

      - Metabolomics confirms in vivo accumulation: Our untargeted metabolomics analysis revealed that α-factor-treated survivors consistently showed enrichment of CoQ6 and zymosterol compared to sensitive cells. This demonstrates that these metabolites naturally accumulate to protective levels during stress responses, validating their biological relevance.

    1. especially if your team is predominantly introverts. Give each participant a sheet of paper and ask them to generate three ideas in five minutes. Pass all papers to the right. Ask each participant to build on his colleague’s ideas, improving them or using them as inspiration to generate another three ideas. Continue passing papers to the right until they reach their original participant.

      This is very useful to me because, as an introvert, I do not always come up with ideas very quickly. However, once I have something to work from, the ideas do not stop flowing. I have never done this before, but I would love to try this method.

    1. "right to privacy."

      The difference between a general "right to privacy" and what the court argues is outlined in the 4th amendment is very interesting to me. What privacy does the 4th amendment protect and from who does it protect citizens? The government can't invade our privacy, but can other citizens? What does this mean about private investigators?


  4. social-media-ethics-automation.github.io
    1. Elon Musk [@elonmusk]. Trashing accounts that you hate will cause our algorithm to show you more of those accounts, as it is keying off of your interactions. Basically saying if you love trashing *that* account, then you will probably also love trashing *this* account. Not actually wrong lol. January 2023. URL: https://twitter.com/elonmusk/status/1615194151737520128 (visited on 2023-12-07).

      This is a very interesting algorithm choice by Elon Musk, as I find it strange that he made it so that interacting with accounts you dislike will cause you to see more of them. The basic concept of it makes any normal person assume that this would deter people from his app "X", but it actually makes sense when you think about how much drama, controversy, and hate is prevalent within that app. I think he is using this strategy to basically "rage bait" people into engaging more with the app by causing them to try to win internet battles, etc.

    2. Elon Musk [@elonmusk]. Trashing accounts that you hate will cause our algorithm to show you more of those accounts, as it is keying off of your interactions. Basically saying if you love trashing *that* account, then you will probably also love trashing *this* account. Not actually wrong lol. January 2023. URL: https://twitter.com/elonmusk/status/1615194151737520128 (visited on 2023-12-07).

      This source is criticizing how Elon Musk is trying to control X by trashing specific accounts that he hates, but the X algorithm just recommends more of the things he hates, thus creating a hate-filled trash fest of recommendations that he helped create.

    3. Petter Törnberg. How digital media drive affective polarization through partisan sorting. Proceedings of the National Academy of Sciences, 119(42):e2207159119, October 2022. URL: https://www.pnas.org/doi/10.1073/pnas.2207159119 (visited on 2023-12-07), doi:10.1073/pnas.2207159119.

      It struck me how relevant this paper is to the chapter’s point that recommendation algorithms don’t just serve content but shape what we see and how we interpret it. The study shows how digital media can drive affective polarization via “partisan sorting” — which nicely connects to the chapter’s warning that algorithms can deepen divisions by reinforcing “you vs them” dynamics.

      BBC. YouTube aids flat earth conspiracy theorists, research suggests. BBC, February 2019. URL: https://www.bbc.com/news/technology-47279253 (visited on 2023-12-07).

      YouTube plays a big part in spreading the idea of the flat Earth to people online. Of course, YouTube is full of information but also misinformation, and the YouTube algorithm makes it all too easy to funnel users down a conspiracy theory rabbit hole. After interviewing people at flat earth conventions, researchers found that many of them got the idea from YouTube videos. They propose that the only way to fight misinformation on YouTube is to make accurate, informative videos, which I would argue is happening a lot on YouTube today.

    5. Lauren Feiner. DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising. CNBC, June 2022. URL: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html (visited on 2023-12-07).

      Many people think that algorithms are just "automatically running numbers", without emotions or biases. However, in this news story, Facebook was investigated by the US Department of Justice for its advertising recommendation system being suspected of discrimination, and later reached a settlement. The real irony is that no one was sitting there manually excluding certain groups; it was just the platform's "optimization logic" that automated, scaled, and executed this bias at a low cost. In other words, discrimination doesn't require a bad person; it only needs an "efficiency-first, click-through rate-king" advertising system. On the surface, it seems to be helping businesses find "the most likely people to see the ads", but in reality, it has set up invisible thresholds in the real world, preventing certain groups from ever seeing the same opportunities. As a result, "technological neutrality" has become a nice-sounding but empty slogan.

    6. Zack Whittaker. Facebook won't let you opt out of its phone number 'look up' setting. TechCrunch, March 2019. URL: https://techcrunch.com/2019/03/03/facebook-phone-number-look-up/ (visited on 2023-12-07).

      This article shows a possible violation of privacy rights by Facebook. Facebook enables "allowing others to look up users' profiles by phone number" by default in order to recommend friends. However, these recommended friends might be an ex or an abuser, as this chapter notes earlier. Also, users could not turn this setting off. Based on this article, that has since changed, but users still worry about whether their phone numbers can really be hidden completely.

      Sometimes though, individuals are still blamed for systemic problems. For example, Elon Musk, who has the power to change Twitter's recommendation algorithm, blames the users for the results: Fig. 11.4 A tweet [k5] from current Twitter owner Elon Musk blaming users for how the recommendation algorithm interprets their behavior

      This tweet by Elon is interesting because, while this could just be me, it feels like Elon is in favor of this "you get more accounts that you hate" mechanic on the site. That makes sense, since hate and malice are what keep people on sites longer, but it's still funny how the person with the most power in this situation is actively blaming the user for outcomes completely within his control.

    2. Individuals still have responsibility with how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.

      We always emphasize how social media algorithms get people addicted to scrolling through information streams, wasting our time. However, we should reflect on who is responsible for managing our own time. The answer is clear: ourselves. We should likewise take responsibility for the consequences of what we post on social media, even though the algorithm delivered the information. Therefore, I believe it is important both to improve the algorithms and to regulate our own behavior online, so as to create a friendlier and more moral virtual society on these platforms.

    3. when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of intent or bias of the individual judges.

      The most heart-wrenching aspect of this statement lies in the fact that it reveals the problem does not stem from a single "bad judge", but rather the entire system itself is inherently biased. In other words, sometimes, without anyone intentionally discriminating, the rules themselves will automatically enforce discrimination, and even make people believe that this is a "normal" or "neutral" procedure.

    1. Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today)

      My experience matches this explanation of what algorithms can do exactly. With Snapchat specifically, I have watched their recommendation algorithm grow significantly over the years. Their saved gallery section now has an "x year(s) ago today" feature, which is awesome; I like the recommendation algorithm Snapchat has going. Instagram now has something similar, but that idea came from Snapchat's algorithm.

    2. recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users.

      This recommendation algorithm may seem helpful, but in my opinion it is part of, and facilitates, the evil side of social media. The evil side I have found is its addictiveness: social media companies design their apps to be addictive, because the more time users spend in an app, the more ads the companies can sell, so they want to keep people's attention as long as possible. The recommendation algorithm aids in this by recommending the posts it thinks will best keep users on the app and hold their attention.
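      As a purely hypothetical sketch of the "series of steps or rules" idea from this passage, a toy scorer that trades off predicted engagement against recency might look like this in Python (all names and numbers are invented for illustration, not any platform's actual algorithm):

```python
# Hypothetical toy recommendation rule: score each post by predicted
# engagement, discounted by how old the post is.
def score_post(predicted_engagement, hours_old, half_life_hours=24):
    """Newer posts and higher predicted engagement score higher."""
    recency_discount = 0.5 ** (hours_old / half_life_hours)
    return predicted_engagement * recency_discount

def recommend(posts, top_n=2):
    """posts: list of (title, predicted_engagement, hours_old) tuples."""
    ranked = sorted(posts, key=lambda p: score_post(p[1], p[2]), reverse=True)
    return [title for title, _, _ in ranked[:top_n]]

posts = [
    ("fresh but dull", 0.2, 1),
    ("old but viral", 0.9, 48),
    ("new and engaging", 0.8, 2),
]
print(recommend(posts))  # the newer engaging post outranks the stale viral one
```

      Even this two-line rule illustrates the annotation's point: whatever the scorer optimizes for (here, engagement) is what users will be shown more of.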

    3. I'll say I've had both good and bad experiences with the recommendations pushed by social media sites. I love shopping online, so they sometimes recommend nice clothing and jewelry brands to me, which is pretty delightful. On the negative side, they sometimes show me advertisements for egg donation, which carries short-term harm and long-term risk, and which I am not interested in at all.

    4. What experiences do you have of social media sites making particularly good recommendations for you? What experiences do you have of social media sites making particularly bad recommendations for you?

      I feel like algorithms are hit or miss when it comes to recommendations. One day the algorithm can help me find a channel that I really enjoy, and the next it can feel like there's nothing that interests me. More often than not it knows roughly what I want to watch, but I think its addictive nature is harmful. The amount of time I spend on YouTube lets the platform learn more about what I want to watch, and when, in order to recommend things to me.

    1. The following table provides a summary of the list methods shown above. The column labeled result gives an explanation as to what the return value is as it relates to the new value of the list. The word mutator means that the list is changed by the method but nothing is returned (actually None is returned). A hybrid method is one that not only changes the list but also returns a value as its result. Finally, if the result is simply a return, then the list is unchanged by the method. Be sure to experiment with these methods to gain a better understanding of what they do.

       | Method  | Parameters     | Result     | Description                                      |
       |---------|----------------|------------|--------------------------------------------------|
       | append  | item           | mutator    | Adds a new item to the end of a list             |
       | insert  | position, item | mutator    | Inserts a new item at the position given         |
       | pop     | none           | hybrid     | Removes and returns the last item                |
       | pop     | position       | hybrid     | Removes and returns the item at position         |
       | sort    | none           | mutator    | Modifies a list to be sorted                     |
       | reverse | none           | mutator    | Modifies a list to be in reverse order           |
       | index   | item           | return idx | Returns the position of first occurrence of item |
       | count   | item           | return ct  | Returns the number of occurrences of item        |
       | remove  | item           | mutator    | Removes the first occurrence of item             |

      Save for reference.
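      The distinction among mutator, hybrid, and plain-return methods in the table can be checked directly in Python:

```python
nums = [3, 1, 4, 1, 5]

# Mutator: changes the list, returns None.
result = nums.append(9)
assert result is None and nums == [3, 1, 4, 1, 5, 9]

# Hybrid: changes the list AND returns a value.
last = nums.pop()          # removes and returns 9
assert last == 9 and nums == [3, 1, 4, 1, 5]

# Plain return: leaves the list unchanged.
assert nums.index(4) == 2  # position of first occurrence
assert nums.count(1) == 2  # number of occurrences

nums.sort()                # mutator: list is now sorted in place
assert nums == [1, 1, 3, 4, 5]
nums.remove(1)             # mutator: removes first occurrence only
assert nums == [1, 3, 4, 5]
```

      A common pitfall the table guards against is writing `nums = nums.sort()`, which replaces the list with the mutator's None return value.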

    1. Clear cookies from your browser.

      Question: I have done this before to free up space on my computer, but I still see popups related to my searches even after I clear my cache and cookies. Will cookies keep generating personalized ads based on what I searched even after being cleared?

    2. Cookies—small pieces of data with a unique ID placed on your device by websites—are online tracking tools that enable this to happen. Cookies can store your website-specific browsing behaviour and any site-specific customization

      Comment: I never really considered cookies to be the reason why I see relevant topics on other websites. For example, when I google something and then see it on my TikTok two minutes later, I always joke that "our phones can hear us," but no, it's actually our own searches, with cookies carrying that information across platforms.
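      As a minimal sketch of the mechanics this passage describes, Python's standard library can show what a site's Set-Cookie header and the returned cookie look like (the cookie name and value here are invented for illustration):

```python
from http.cookies import SimpleCookie

# What a server might put in its Set-Cookie response header.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"          # a unique ID for this browser
cookie["visitor_id"]["path"] = "/"
cookie["visitor_id"]["max-age"] = 86400  # persists for one day
header = cookie["visitor_id"].OutputString()
print("Set-Cookie:", header)

# On later requests the browser sends the cookie back, letting the
# site recognize the returning visitor and tie behavior to that ID.
incoming = SimpleCookie("visitor_id=abc123")
print(incoming["visitor_id"].value)
```

      Clearing cookies deletes the stored ID, but a site can simply issue a new one on the next visit, which is part of why targeted ads tend to reappear.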

    1. analogical

      Relating to, based on, or expressed through analogy — a comparison between two things that are similar in some respects but otherwise different, used to explain, clarify, or reason about a concept by drawing parallels.

    1. Democracies thrive when politicians believe they are better off playing by the rules of that game — even when they lose elections — because that’s the way to maximize their self-interest over time.

      But, what changed?

    1. The foes of Charles the Fat and his uncle and predecessor, Charles the Bald, that helped end the Carolingian dynasty were the Vikings, a seafaring people originally from Scandinavia.

      It’s crazy to see that the Vikings didn’t just raid; they reshaped Europe and helped bring an end to the powerful dynasty.

    1. Reviewer #1 (Public review):

      The authors have implemented several clarifications in the text and improved the connection between their findings and previous work. As stated in my initial review, I had no major criticisms of the previous version of the manuscript, and I continue to consider this a solid and well-written study. However, the revised manuscript still largely reiterates existing findings and does not offer novel conceptual or experimental advances. It supports previous conclusions suggesting a likely conserved sex determination locus in aculeate hymenopterans, but does so without functional validation (i.e., via experimental manipulation) of the candidate locus in O. biroi. I also wish to clarify that I did not intend to imply that functional assessments in the Pan et al. study were conducted in more than one focal species; my previous review explicitly states that the locus's functional role was validated in the Argentine ant.

    2. Author response:

      The following is the authors’ response to the original reviews

      Reviewer #1 (Public review):

      This study investigates the sex determination mechanism in the clonal ant Ooceraea biroi, focusing on a candidate complementary sex determination (CSD) locus-one of the key mechanisms supporting haplodiploid sex determination in hymenopteran insects. Using whole genome sequencing, the authors analyze diploid females and the rarely occurring diploid males of O. biroi, identifying a 46 kb candidate region that is consistently heterozygous in females and predominantly homozygous in diploid males. This region shows elevated genetic diversity, as expected under balancing selection. The study also reports the presence of an lncRNA near this heterozygous region, which, though only distantly related in sequence, resembles the ANTSR lncRNA involved in female development in the Argentine ant, Linepithema humile (Pan et al. 2024). Together, these findings suggest a potentially conserved sex determination mechanism across ant species. However, while the analyses are well conducted and the paper is clearly written, the insights are largely incremental. The central conclusion - that the sex determination locus is conserved in ants - was already proposed and experimentally supported by Pan et al. (2024), who included O. biroi among the studied species and validated the locus's functional role in the Argentine ant. The present study thus largely reiterates existing findings without providing novel conceptual or experimental advances.

      Although it is true that Pan et al., 2024 demonstrated (in Figure 4 of their paper) that the synteny of the region flanking ANTSR is conserved across aculeate Hymenoptera (including O. biroi), Reviewer 1’s claim that that paper provides experimental support for the hypothesis that the sex determination locus is conserved in ants is inaccurate. Pan et al., 2024 only performed experimental work in a single ant species (Linepithema humile) and merely compared reference genomes of multiple species to show synteny of the region, rather than functionally mapping or characterizing these regions.

      Other comments:

      The mapping is based on a very small sample size: 19 females and 16 diploid males, and these all derive from a single clonal line. This implies a rather high probability for false-positive inference. In combination with the fact that only 11 out of the 16 genotyped males are actually homozygous at the candidate locus, I think a more careful interpretation regarding the role of the mapped region in sex determination would be appropriate. The main argument supporting the role of the candidate region in sex determination is based on the putative homology with the lncRNA involved in sex determination in the Argentine ant, but this argument was made in a previous study (as mentioned above).

      Our main argument supporting the role of the candidate region in sex determination is not based on putative homology with the lncRNA in L. humile. Instead, our main argument comes from our genetic mapping (in Fig. 2), and the elevated nucleotide diversity within the identified region (Fig. 4). Additionally, we highlight that multiple genes within our mapped region are homologous to those in mapped sex determining regions in both L. humile and Vollenhovia emeryi, possibly including the lncRNA.

      In response to the Reviewer’s assertion that the mapping is based on a small sample size from a single clonal line, we want to highlight that we used all diploid males available to us. Although the primary shortcoming of a small sample size is to increase the probability of a false negative, small sample sizes can also produce false positives. We used two approaches to explore the statistical robustness of our conclusions. First, we generated a null distribution by randomly shuffling sex labels within colonies and calculating the probability of observing our CSD index values by chance (shown in Fig. 2). Second, we directly tested the association between homozygosity and sex using Fisher’s Exact Test (shown in Supplementary Fig. S2). In both cases, the association of the candidate locus with sex was statistically significant after multiple-testing correction using the Benjamini-Hochberg False Discovery Rate. These approaches are clearly described in the “CSD Index Mapping” section of the Methods.
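      For readers unfamiliar with the label-shuffling approach described here, the following is a toy sketch in Python (hypothetical data and a hypothetical helper function; the actual analysis shuffled sex labels within colonies across genome-wide CSD index values):

```python
import random

def permutation_p_value(values, labels, n_perm=10000, seed=0):
    """One-sided permutation test: how often does shuffling the labels
    produce a male-female difference at least as large as observed?"""
    rng = random.Random(seed)

    def group_diff(labs):
        male = [v for v, l in zip(values, labs) if l == "male"]
        female = [v for v, l in zip(values, labs) if l == "female"]
        return sum(male) / len(male) - sum(female) / len(female)

    observed = group_diff(labels)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if group_diff(shuffled) >= observed:
            hits += 1
    return hits / n_perm

# Toy homozygosity scores: 1 = homozygous, 0 = heterozygous at the locus.
values = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
labels = ["male"] * 4 + ["female"] * 6
p = permutation_p_value(values, labels)
print(p)  # small p-value: such a perfect association rarely arises by chance
```

      The null distribution is built entirely from the data themselves, which is why this approach remains valid at small sample sizes, the concern raised by the reviewer.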

      We also note that, because complementary sex determination loci are expected to evolve under balancing selection, our finding that the mapped region exhibits a peak of nucleotide diversity lends orthogonal support to the notion that the mapped locus is indeed a complementary sex determination locus.

      The fourth paragraph of the results and the sixth paragraph of the discussion are devoted to explaining the possible reasons why only 11/16 genotyped males are homozygous in the mapped region. The revised manuscript will include an additional sentence (in what will be lines 384-388) in this paragraph that includes the possible explanation that this locus is, in fact, a false positive, while also emphasizing that we find this possibility to be unlikely given our multiple lines of evidence.

      In response to Reviewer 1’s suggestion that we carefully interpret the role of the mapped region in sex determination, we highlight our careful wording choices, nearly always referring to the mapped locus as a “candidate sex determination locus” in the title and throughout the manuscript. For consistency, the revised manuscript version will change the second results subheading from “The O. biroi CSD locus is homologous to another ant sex determination locus but not to honeybee csd” to “O. biroi’s candidate CSD locus is homologous to another ant sex determination locus but not to honeybee csd,” and will add the word “candidate” in what will be line 320 at the beginning of the Discussion, and will change “putative” to “candidate” in what will be line 426 at the end of the Discussion.

      In the abstract, it is stated that CSD loci have been mapped in honeybees and two ant species, but we know little about their evolutionary history. But CSD candidate loci were also mapped in a wasp with multi-locus CSD (study cited in the introduction). This wasp is also parthenogenetic via central fusion automixis and produces diploid males. This is a very similar situation to the present study and should be referenced and discussed accordingly, particularly since the authors make the interesting suggestion that their ant also has multi-locus CSD and neither the wasp nor the ant has tra homologs in the CSD candidate regions. Also, is there any homology to the CSD candidate regions in the wasp species and the studied ant?

      In response to Reviewer 1’s suggestion that we reference the (Matthey-Doret et al. 2019) study in the context of diploid males being produced via losses of heterozygosity during asexual reproduction, the revised manuscript will include (in what will be lines 123-126) the highlighted portion of the following sentence: “Therefore, if O. biroi uses CSD, diploid males might result from losses of heterozygosity at sex determination loci (Fig. 1C), similar to what is thought to occur in other asexual Hymenoptera that produce diploid males (Rabeling and Kronauer 2012; Matthey-Doret et al. 2019).”

      We note, however, that in their 2019 study, Matthey-Doret et al. did not directly test the hypothesis that diploid males result from losses of heterozygosity at CSD loci during asexual reproduction, because the diploid males they used for their mapping study came from inbred crosses in a sexual population of that species.

      We address this further below, but we want to emphasize that we do not intend to argue that O. biroi has multiple CSD loci. Instead, we suggest that the presence of additional, undetected CSD loci is one possible explanation for the absence of diploid males from any clonal line other than clonal line A. In response to Reviewer 1’s suggestion that we reference the (Matthey-Doret et al. 2019) study in the context of multilocus CSD, the revised manuscript version will include the following additional sentence in the fifth paragraph of the discussion (in what will be lines 372-374): “Multi-locus CSD has been suggested to limit the extent of diploid male production in asexual species under some circumstances (Vorburger 2013; Matthey-Doret et al. 2019).”

      Regarding Reviewer 1’s question about homology between the putative CSD loci from the (Matthey-Doret et al. 2019) study and O. biroi, we note that there is no homology. The revised manuscript version will have an additional Supplementary Table (which will be the new Supplementary Table S3) that will report the results of this homology search. The revised manuscript will also include the following additional sentence in the Results, in what will be lines 172-174: “We found no homology between the genes within the O. biroi CSD index peak and any of the genes within the putative L. fabarum CSD loci (Supplementary Table S3).”

      The authors used different clonal lines of O. biroi to investigate whether heterozygosity at the mapped CSD locus is required for female development in all clonal lines of O. biroi (L187-196). However, given the described parthenogenesis mechanism in this species conserves heterozygosity, additional females that are heterozygous are not very informative here. Indeed, one would need diploid males in these other clonal lines as well (but such males have not yet been found) to make any inference regarding this locus in other lines.

      We agree that a full mapping study including diploid males from all clonal lines would be preferable, but as stated earlier in that same paragraph, we have only found diploid males from clonal line A. We stand behind our modest claim that “Females from all six clonal lines were heterozygous at the CSD index peak, consistent with its putative role as a CSD locus in all O. biroi.” In the revised manuscript version, this sentence (in what will be lines 199-201) will be changed slightly in response to a reviewer comment below: “All females from all six clonal lines (including 26 diploid females from clonal line B) were heterozygous at the CSD index peak, consistent with its putative role as a CSD locus in all O. biroi.”

      Reviewer #2 (Public review):

      The manuscript by Lacy et al. is well written, with a clear and compelling introduction that effectively conveys the significance of the study. The methods are appropriate and well-executed, and the results, both in the main text and supplementary materials, are presented in a clear and detailed manner. The authors interpret their findings with appropriate caution.

      This work makes a valuable contribution to our understanding of the evolution of complementary sex determination (CSD) in ants. In particular, it provides important evidence for the ancient origin of a non-coding locus implicated in sex determination, and shows that, remarkably, this sex locus is conserved even in an ant species with a non-canonical reproductive system that typically does not produce males. I found this to be an excellent and well-rounded study, carefully analyzed and well contextualized.

      That said, I do have a few minor comments, primarily concerning the discussion of the potential 'ghost' CSD locus. While the authors acknowledge (line 367) that they currently have no data to distinguish among the alternative hypotheses, I found the evidence for an additional CSD locus presented in the results (lines 261-302) somewhat limited and at times a bit difficult to follow. I wonder whether further clarification or supporting evidence could already be extracted from the existing data. Specifically:

      We agree with Reviewer 2 that the evidence for a second CSD locus is limited. In fact, we do not intend to advocate for there being a second locus, but we suggest that a second CSD locus is one possible explanation for the absence of diploid males outside of clonal line A. In our initial version, we intentionally conveyed this ambiguity by titling this section “O. biroi may have one or multiple sex determination loci.” However, we now see that this leads to undue emphasis on the possibility of a second locus. In the revised manuscript, we will split this into two separate sections: “Diploid male production differs across O. biroi clonal lines” and “O. biroi lacks a tra-containing CSD locus.”

      (1) Line 268: I doubt the relevance of comparing the proportion of diploid males among all males between lines A and B to infer the presence of additional CSD loci. Since the mechanisms producing these two types of males differ, it might be more appropriate to compare the proportion of diploid males among all diploid offspring. This ratio has been used in previous studies on CSD in Hymenoptera to estimate the number of sex loci (see, for example, Cook 1993, de Boer et al. 2008, 2012, Ma et al. 2013, and Chen et al., 2021). The exact method might not be applicable to clonal raider ants, but I think comparing the percentage of diploid males among the total number of (diploid) offspring produced between the two lineages might be a better argument for a difference in CSD loci number.

      We want to re-emphasize here that we do not wish to advocate for there being two CSD loci in O. biroi. Rather, we want to explain that this is one possible explanation for the apparent absence of diploid males outside of clonal line A. We hope that the modifications to the manuscript described in the previous response help to clarify this.

      Reviewer 2 is correct that comparing the number of diploid males to diploid females does not apply to clonal raider ants. This is because males are vanishingly rare among the vast numbers of females produced. We do not count how many females are produced in laboratory stock colonies, and males are sampled opportunistically. Therefore, we cannot report exact numbers. However, we will add the highlighted portion of the following sentence (in what will be lines 268-270) to the revised manuscript: “Despite the fact that we maintain more colonies of clonal line B than of clonal line A in the lab, all the diploid males we detected came from clonal line A.”

      (2) If line B indeed carries an additional CSD locus, one would expect that some females could be homozygous at the ANTSR locus but still viable, being heterozygous only at the other locus. Do the authors detect any females in line B that are homozygous at the ANTSR locus? If so, this would support the existence of an additional, functionally independent CSD locus.

      We thank the reviewer for this suggestion, and again we emphasize that we do not want to argue in favor of multiple CSD loci. We just want to introduce it as one possible explanation for the absence of diploid males outside of clonal line A.

      The 26 sequenced diploid females from clonal line B are all heterozygous at the mapped locus, and the revised manuscript will clarify this in what will be lines 199-201. Previously, only six of those diploid females were included in Supplementary Table S2, and that will be modified accordingly.

      (3) Line 281: The description of the two tra-containing CSD loci as "conserved" between Vollenhovia and the honey bee may be misleading. It suggests shared ancestry, whereas the honey bee csd gene is known to have arisen via a relatively recent gene duplication from fem/tra (10.1038/nature07052). It would be more accurate to refer to this similarity as a case of convergent evolution rather than conservation.

      In the sentence that Reviewer 2 refers to, we are representing the assertion made in the (Miyakawa and Mikheyev 2015) paper in which, regarding their mapping of a candidate CSD locus that contains two linked tra homologs, they write in the abstract: “these data support the prediction that the same CSD mechanism has indeed been conserved for over 100 million years.” In that same paper, Miyakawa and Mikheyev write in the discussion section: “As ants and bees diverged more than 100 million years ago, sex determination in honey bees and V. emeryi is probably homologous and has been conserved for at least this long.”

      As noted by Reviewer 2, this appears to conflict with a previously advanced hypothesis: that because fem and csd were found in Apis mellifera, Apis cerana, and Apis dorsata, but only fem was found in Mellipona compressipes, Bombus terrestris, and Nasonia vitripennis, that the csd gene evolved after the honeybee (Apis) lineage diverged from other bees (Hasselmann et al. 2008). However, it remains possible that the csd gene evolved after ants and bees diverged from N. vitripennis, but before the divergence of ants and bees, and then was subsequently lost in B. terrestris and M. compressipes. This view was previously put forward based on bioinformatic identification of putative orthologs of csd and fem in bumblebees and in ants [(Schmieder et al. 2012), see also (Privman et al. 2013)]. However, subsequent work disagreed and argued that the duplications of tra found in ants and in bumblebees represented convergent evolution rather than homology (Koch et al. 2014). Distinguishing between these possibilities will be aided by additional sex determination locus mapping studies and functional dissection of the underlying molecular mechanisms in diverse Aculeata.

      Distinguishing between these competing hypotheses is beyond the scope of our paper, but the revised manuscript will include additional text to incorporate some of this nuance. We will include these modified lines below (in what will be lines 287-295), with the additions highlighted:

      “A second QTL region identified in V. emeryi (V.emeryiCsdQTL1) contains two closely linked tra homologs, similar to the closely linked honeybee tra homologs, csd and fem (Miyakawa and Mikheyev 2015). This, along with the discovery of duplicated tra homologs that undergo concerted evolution in bumblebees and ants (Schmieder et al. 2012; Privman et al. 2013) has led to the hypothesis that the function of tra homologs as CSD loci is conserved with the csd-containing region of honeybees (Schmieder et al. 2012; Miyakawa and Mikheyev 2015). However, other work has suggested that tra duplications occurred independently in honeybees, bumblebees, and ants (Hasselmann et al. 2008; Koch et al. 2014), and it remains to be demonstrated that either of these tra homologs acts as a primary CSD signal in V. emeryi.”

      (4) Finally, since the authors successfully identified multiple alleles of the first CSD locus using previously sequenced haploid males, I wonder whether they also observed comparable allelic diversity at the candidate second CSD locus. This would provide useful supporting evidence for its functional relevance.

      As is already addressed in the final paragraph of the results and in Supplementary Fig. S4, there is no peak of nucleotide diversity in any of the regions homologous to V.emeryiQTL1, which is the tra-containing candidate sex determination locus (Miyakawa and Mikheyev 2015). In the revised manuscript, the relevant lines will be 307-310. We want to restate that we do not propose that there is a second candidate CSD locus in O. biroi, but we simply raise the possibility that multi-locus CSD *might* explain the absence of diploid males from clonal lines other than clonal line A (as one of several alternative possibilities).

      Overall, these are relatively minor points in the context of a strong manuscript, but I believe addressing them would improve the clarity and robustness of the authors' conclusions.

      Reviewer #3 (Public review):

      Summary:

      The sex determination mechanism governed by the complementary sex determination (CSD) locus is one of the mechanisms that support the haplodiploid sex determination system evolved in hymenopteran insects. While many ant species are believed to possess a CSD locus, it has only been specifically identified in two species. The authors analyzed diploid females and the rarely occurring diploid males of the clonal ant Ooceraea biroi and identified a 46 kb CSD candidate region that is consistently heterozygous in females and predominantly homozygous in males. This region was found to be homologous to the CSD locus reported in distantly related ants. In the Argentine ant, Linepithema humile, the CSD locus overlaps with an lncRNA (ANTSR) that is essential for female development and is associated with the heterozygous region (Pan et al. 2024). Similarly, an lncRNA is encoded near the heterozygous region within the CSD candidate region of O. biroi. Although this lncRNA shares low sequence similarity with ANTSR, its potential functional involvement in sex determination is suggested. Based on these findings, the authors propose that the heterozygous region and the adjacent lncRNA in O. biroi may trigger female development via a mechanism similar to that of L. humile. They further suggest that the molecular mechanisms of sex determination involving the CSD locus in ants have been highly conserved for approximately 112 million years. This study is one of the few to identify a CSD candidate region in ants and is particularly noteworthy as the first to do so in a parthenogenetic species.

      Strengths:

      (1) The CSD candidate region was found to be homologous to the CSD locus reported in distantly related ant species, enhancing the significance of the findings.

      (2) Identifying the CSD candidate region in a parthenogenetic species like O. biroi is a notable achievement and adds novelty to the research.

      Weaknesses

      (1) Functional validation of the lncRNA's role is lacking, and further investigation through knockout or knockdown experiments is necessary to confirm its involvement in sex determination.

      See response below.

      (2) The claim that the lncRNA is essential for female development appears to reiterate findings already proposed by Pan et al. (2024), which may reduce the novelty of the study.

      We do not claim that the lncRNA is essential for female development in O. biroi, but simply mention the possibility that, as in L. humile, it is somehow involved in sex determination. We do not have any functional evidence for this, so this is purely based on its genomic position immediately adjacent to our mapped candidate region. We agree with the reviewer that the study by Pan et al. (2024) decreases the novelty of our findings. Another way of looking at this is that our study supports and bolsters previous findings by partially replicating the results in a different species.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      L307-308 should state homozygous for either allele in THE MAJORITY of diploid males.

      This will be fixed in the revised manuscript, in what will be line 321.

      Reviewer #3 (Recommendations for the authors):

      The association between heterozygosity in the CSD candidate region and female development in O. biroi, along with the high sequence homology of this region to CSD loci identified in two distantly related ant species, is not sufficient to fully address the evolution of the CSD locus and the mechanisms of sex determination.

      Given that functional genetic tools, such as genome editing, have already been established in O. biroi, I strongly recommend that the authors investigate the role of the lncRNA through knockout or knockdown experiments and assess its impact on the sex-specific splicing pattern of the downstream tra gene.

      Although knockout experiments of the lncRNA would be illuminating, the primary signal of complementary sex determination is heterozygosity. As is clearly stated in our manuscript and that of (Pan et al. 2024), it does not appear to be heterozygosity within the lncRNA that induces female development, but rather heterozygosity in non-transcribed regions linked to the lncRNA. Therefore, future mechanistic studies of sex determination in O. biroi, L. humile, and other ants should explore how homozygosity or heterozygosity of this region impacts the sex determination cascade, rather than focusing (exclusively) on the lncRNA.

With this in mind, we developed three sets of guide RNAs that cut only one allele within the mapped CSD locus, with the goal of producing deletions within the highly variable region of the mapped locus. This would lead to functional hemizygosity or homozygosity within this region, depending on how the cuts were repaired. We also developed several sets of PCR primers to assess the heterozygosity of the resultant animals. After injecting 1,162 eggs over several weeks and genotyping the hundreds of resultant animals with PCR, we confirmed that we could induce hemizygosity or homozygosity within this region, at least in ~1/20 of the injected embryos. Although it is possible to assess the sex-specificity of the splice isoform of tra as a proxy for sex determination phenotypes (as done by Pan et al. (2024)), the ideal experiment would assess male phenotypic development at the pupal stage. Therefore, over several more weeks, we injected hundreds more eggs with these reagents and reared the injected embryos to the pupal stage. However, substantial mortality was observed, with only 12 injected eggs developing to the pupal stage. All of these were female, and none of them had been successfully mutated.

      In conclusion, we agree with the reviewer that functional experiments would be useful, and we made extensive attempts to conduct such experiments. However, these experiments turned out to be extremely challenging with the currently available protocols. Ultimately, we therefore decided to abandon these attempts.  

      We opted not to include these experiments in the paper itself because we cannot meaningfully interpret their results. However, we are pleased that, in this response letter, we can include a brief description for readers interested in attempting similar experiments.

      Since O. biroi reproduces parthenogenetically and most offspring develop into females, observing a shift from female- to male-specific splicing of tra upon early embryonic knockout of the lncRNA would provide much stronger evidence that this lncRNA is essential for female development. Without such functional validation, the authors' claim (lines 36-38) seems to reiterate findings already proposed by Pan et al. (2024) and, as such, lacks sufficient novelty.

      We have responded to the issue of “lack of novelty” above. But again, the actual CSD locus in both O. biroi and L. humile appears to be distinct from (but genetically linked to) the lncRNA, and we have no experimental evidence that the putative lncRNA in O. biroi is involved in sex determination at all. Because of this, and given the experimental challenges described above, we do not currently intend to pursue functional studies of the lncRNA.

      References

      Hasselmann M, Gempe T, Schiøtt M, Nunes-Silva CG, Otte M, Beye M. 2008. Evidence for the evolutionary nascence of a novel sex determination pathway in honeybees. Nature 454:519–522.

      Koch V, Nissen I, Schmitt BD, Beye M. 2014. Independent Evolutionary Origin of fem Paralogous Genes and Complementary Sex Determination in Hymenopteran Insects. PLOS ONE 9:e91883.

      Matthey-Doret C, van der Kooi CJ, Jeffries DL, Bast J, Dennis AB, Vorburger C, Schwander T. 2019. Mapping of multiple complementary sex determination loci in a parasitoid wasp. Genome Biology and Evolution 11:2954–2962.

      Miyakawa MO, Mikheyev AS. 2015. QTL mapping of sex determination loci supports an ancient pathway in ants and honey bees. PLOS Genetics 11:e1005656.

      Pan Q, Darras H, Keller L. 2024. LncRNA gene ANTSR coordinates complementary sex determination in the Argentine ant. Science Advances 10:eadp1532.

      Privman E, Wurm Y, Keller L. 2013. Duplication and concerted evolution in a master sex determiner under balancing selection. Proceedings of the Royal Society B: Biological Sciences 280:20122968.

      Rabeling C, Kronauer DJC. 2012. Thelytokous parthenogenesis in eusocial Hymenoptera. Annual Review of Entomology 58:273–292.

      Schmieder S, Colinet D, Poirié M. 2012. Tracing back the nascence of a new sex-determination pathway to the ancestor of bees and ants. Nature Communications 3:1–7.

      Vorburger C. 2013. Thelytoky and Sex Determination in the Hymenoptera: Mutual Constraints. Sexual Development 8:50–58.

    1. better understand how one instance of poor time management can trigger a cascading situation with disastrous results, imagine that a student has an assignment due in a business class. She knows that she should be working on it, but she isn’t quite in the mood. Instead she convinces herself that she should think a little more about what she needs to complete the assignment and decides to do so while looking at social media or maybe playing a couple more rounds of a game on her phone. In a little while, she suddenly realizes that she has become distracted and the evening has slipped away. She has little time left to work on her assignment. She stays up later than usual trying to complete the assignment but cannot finish it. Exhausted, she decides that she will work on it in the morning during the hour she had planned to study for her math quiz. She knows there will not be enough time in the morning to do a good job on the assignment, so she decides that she will put together what she has and hope she will at least receive a passing grade.

This is going over a girl with poor time management.

    2. A very unfortunate but all-too-common situation in higher education is the danger students face from poor time management. Many college administrators that work directly with students are aware that a single mishap or a case of poor time management can set into motion a series of events that can seriously jeopardize a student’s success. In some of the more extreme instances, the student may even fail to graduate because of it.

Talking about poor time management skills.

    1. Reviewer #2 (Public review):

      This is a nice paper focused on the response of microglia to different clinical stages of prion infection in acute brain slices. The key here is the use of time-lapse imaging, which captures the dynamics of microglial surveillance, including morphology, migration, and intracellular neuron-microglial contacts. The authors use a myeloid GFP-labeled transgenic mouse to track microglia in SSLOW-infected brain slices, quantifying differences in motility and microglial-neuron interactions via live fluorescence imaging. Interesting findings include the elaborate patterns of motility among microglia, the distinct types and duration of intracellular contacts, the potential role of calcium signaling in facilitating hypermobility, and the fact that this motion-promoting status is intrinsic to microglia, persisting even after the cells have been isolated from infected brains. Although largely a descriptive paper, there are mechanistic insights, including the role of calcium in supporting movement of microglia, where bursts of signaling are identified even within the time-lapse format, and inhibition studies that implicate the purinergic receptor and calcium transient regulator P2Y6 in migratory capacity.

      Strengths:

      (1) The focus on microglia activation and activity in the context of prion disease is interesting.

      (2) Two different prions produce largely the same response.

      (3) Use of time-lapse provides insight into the dynamics of microglia, distinguishing between types of contact - mobility vs motility - and providing insight into the duration/transience and reversibility of extensive somatic contacts that include brief and focused connections in addition to soma envelopment.

      (4) Imaging window selection (3 hours) guided by prior publications documenting preserved morphology, activity, and gene expression regulation up to 4 hours.

      (5) The distinction between high mobility and low mobility microglia is interesting, especially given that hyper mobility seems to be an innate property of the cells.

      (6) The live-imaging approach is validated by fixed tissue confocal imaging.

      (7) The variance in duration of neuron/microglia contacts is interesting, although there is no insight into what might dictate which status of interaction predominates.

      (8) The reversibility of the enveloping action, that is not apparently a commitment to engulfment, is interesting, as is the fact that only neurons are selected for this activity.

      (9) The calcium studies use the fluorescent dye calbryte-590 to pick up neuronal and microglial bursts - prolonged bursts are detected in enveloped neurons and in the hyper-mobile microglia - the microglial lead is followed up using MRS-2578 P2Y6 inhibitor that blunts the mobility of the microglia.

      Weaknesses:

      (1) The number of individual cells tracked has been provided, but not the number of individual mice. The sex of the mice is not provided.

      (2) The statistical approach is not clear; was each cell treated as a single observation?

      (3) The potential for heterogeneity among animals has not been addressed.

      (4) Validation of prion accumulation at each clinical stage of the disease is not provided.

      (5) How were the numerous captures of cells handled to derive morphological quantitative values? Based on the videos, there is a lot of movement and shape-shifting.

(6) While it is recognized that there are limits to what can be measured simultaneously with live imaging, the authors appear to have fixed tissues from each time point too - it would be very interesting to know if the extent of prion accumulation influences the microglial surveillance - i.e., do the enveloped ones have greater pathology?

    1. The rations upon the ship were very bad... Every ten persons received three pounds of butter a week, four cans of beer and two cans of water a day, two platters full of peas every noon, meat four dinners in the week and fish three, and these we were obliged to prepare with our own butter. Also we must every noon save up enough so that we might get our supper from it. The worst of all was that both the meat and the fish were salted to such an extent and had become so rancid that we could hardly eat half of them. And had I not by the advice of good friends in England provided myself with various kinds of refreshment, it might perhaps have gone very badly for me.

I felt another annotation of this was important because, while this author describes deplorable conditions and struggles such as inadequate food and rough seas, I can't help but think of the deplorable conditions discussed in the previous chapters aboard the slave ships. Those people were subjected to things far worse than this, with little to no food at all, cramped conditions, rampant sickness, and in some instances they did not have the ability to fully stand, let alone lay down. It is wild to think about the privilege of this man in comparison, despite his self-proclaimed inconveniences and experiences. -cherryb history 211

    2. But praised be the fatherly hand of the divine mercy which lifts us up again so many times and holds us back that we fall not entirely into the abyss of the evil one.

This highlights the relationship with religion the author has, with him also citing Psalm 107 earlier in the text. He claims that it is God that has allowed the boat to stay together and arrive in one piece, noting that if it did not do so, it was the devil's or the "evil one's" doing. So, not so much luck or the work of the captain and crew, but divine intervention. -cherryb history 211

    3. The rations upon the ship were very bad... Every ten persons received three pounds of butter a week, four cans of beer and two cans of water a day, two platters full of peas every noon, meat four dinners in the week and fish three, and these we were obliged to prepare with our own butter. Also we must every noon save up enough so that we might get our supper from it

This shows that the rough seas, storms, and sickness were not the only things faced; the rations were horrible too. He describes the salted fish and meats as "rancid smelling". I also think, contextually, it is interesting to see that beer was considered an essential part of rations. There is a mention of fresh water, but it's fascinating to me that the amount of beer given in rations exceeds the allocation of water twofold. What were the beliefs surrounding alcohol at this time? Was it believed to have supplemental health effects? - cherryb history 211

    1. eLife Assessment

This study introduces a valuable new metric, phenological lag, to help partition the drivers of observed versus expected shifts in spring phenology under climate warming. The conceptual framework is clearly presented and supported by an extensive dataset, and the revisions have improved the manuscript, though some concerns, particularly regarding uncertainty quantification, spatial analysis, and modeling assumptions, remain only partially addressed. The strength of evidence is generally solid, but further analysis would help to validate the study's conclusions.

    1. As for the second big movement, the Great Awakening, the man who set it going at first acted much like a student of nature, too. In 1723, the year Franklin ran away, twenty-year-old Jonathan Edwards sent an essay to the Royal Society, a scientific organization in England. In it, Edwards recorded his observations on the way spiders spun out lines of web and went “sailing in the air . . . from one tree to another.” (In fact, they used their webs the same way Franklin used his kite to be pulled across the pond.) In college Edwards read the writings of John Locke and Isaac Newton. But instead of a career in science, he became a minister. Unlike Franklin, he was “by nature very unfit for secular business.”

Edwards studied spiders, similar to Franklin's kite experiment; this shows both had scientific curiosity. But Edwards chose religion over science.

    2. In England the Enlightenment philosopher John Locke had written about how governments came to exist. Kings and queens claimed that their authority came from God. Locke doubted that. Why did kings have any “divine right” to rule? He suggested that the first human governments had been formed centuries ago, when people in a state of nature joined together to protect themselves. If kings ruled, it was not because God blessed them with power but because people had created that form of government.

Reflects their deist beliefs.

    3. But their God was not the sort who divided the Red Sea to help Moses escape the Egyptians or the God who sent his son Jesus to earth to walk on water and be raised from the dead. God governed the world through natural laws, deists argued. Call him the “Supreme Architect” or “Nature’s God”—he had no need for miracles. Deists believed that human reason was the key to uncovering nature’s laws. The famous British scientist Isaac Newton had made huge advances in human knowledge, using mathematics to plot the path of the planets through the heavens and discovering the force of gravity. Like the deists, young Franklin adopted the methods of Socrates, the ancient Greek philosopher who was forever asking questions.

Method of Socrates: asking questions to find truth instead of accepting what you're told.

    1. Here is a summary of the article and a step-by-step process for disagreeing constructively based on its findings.

      Summary: How to Disagree Constructively

      Disagreements can be highly beneficial, leading to better decisions and preventing errors. However, they often escalate into damaging conflicts. The common advice—to be empathetic and adopt open body language—often fails because there is an "intention-behavior gap." Your counterpart cannot read your mind; they only know what your words and actions communicate.

      The problem is that our words often fail to convey our good intentions. For example, intending to be curious, we might ask, "How can you believe that?" which sounds judgmental.

      Research by Julia Minson, Hanne Collins, and Michael Yeomans shows that the key to constructive disagreement is translating positive mental states (like curiosity and respect) into observable, verbal behaviors.


      A 5-Step Procedure for Constructive Disagreement

      This process focuses on using specific language to make your positive intentions clear to your counterpart, lowering the temperature and fostering a productive conversation.

      Step 1: Explicitly Signal Your Desire to Learn

      Instead of just feeling curious, you must state your curiosity. This signals that you want to understand, not attack.

      • Why it works: It frames the disagreement as a mutual learning exercise rather than a battle.
      • Example Language:
        • "It seems we are seeing this differently. I am curious how you think about XYZ."
        • "I'd like to understand more about your perspective on this."

      Step 2: Acknowledge Their Perspective

      People in a conflict need to know they have been heard. The most effective way to do this is to restate the core of their argument to prove you were listening.

      • Why it works: It validates the other person and ensures you are arguing against their actual point, not a misunderstanding of it.
      • Example Language:
        • "So, if I'm understanding you correctly, your main concern is..."
        • "What I'm hearing you say is that..."
        • (If you don't understand): "Could you clarify what you mean by...?"

      Step 3: Find and State Common Ground

      No matter how significant the disagreement, you can usually find shared beliefs, goals, or values if you "zoom out."

      • Why it works: This reminds both parties that you are on the same general team, reinforcing the collaborative (not competitive) nature of the conversation.
      • Example Language:
        • "I agree with some of what you’re saying, especially..."
        • "I think we both want what's best for the project."
        • "We both agree that the current situation isn't working."

      Step 4: Hedge Your Claims

      Research shows that in factual disagreements, the average person is wrong at least 50% of the time. Acknowledge this possibility by showing humility instead of asserting absolute certainty.

      • Why it works: It leaves open the possibility that you could be wrong, which makes you appear more open-minded and less threatening.
      • Example Language:
        • "From my viewpoint..."
        • "The way I've been thinking about it is..."
        • "Sometimes it is the case that..."
        • "I might be missing something, but..."

      Step 5: Share Your Story (When Appropriate)

      Strong beliefs are often rooted in personal experiences. Sharing the story behind your belief can be more effective for building trust than relying solely on facts and data.

      • Why it works: It humanizes your position, explains the emotion behind your logic, and builds an interpersonal bridge.
      • Example Language:
        • "The reason I feel strongly about this is because I had an experience where..."
        • "My perspective on this was shaped when I..."

      Note for Leaders

      To foster this culture, leaders should model these five verbal behaviors and actively train employees in these specific conversational skills—not just tell them to "be curious" or "be respectful."

    1. fate

idk; at least some experience suggests that the historical process implies fatedness, in the very difficulty of steering that process. But overall I see what he is saying.


    1. Reviewer #1 (Public review):

      In the current article, Octavia Soegyono and colleagues study "The influence of nucleus accumbens shell D1 and D2 neurons on outcome-specific Pavlovian instrumental transfer", building on extensive findings from the same lab. While there is a consensus about the specific involvement of the Shell part of the Nucleus Accumbens (NAc) in specific stimulus-based actions in choice settings (and not in General Pavlovian instrumental transfer - gPIT, as opposed to the Core part of the NAc), mechanisms at the cellular and circuitry levels remain to be explored. In the present work, using sophisticated methods (rat Cre-transgenic lines from both sexes, optogenetics and the well-established behavioral paradigm outcome-specific PIT - sPIT), Octavia Soegyono and colleagues decipher the differential contribution of dopamine receptors D1 and D2 expressing-spiny projection neurons (SPNs).

      After validating the viral strategy and the specificity of the targeting (immunochemistry and electrophysiology), the authors demonstrate that while both NAc Shell D1- and D2-SPNs participate in mediating sPIT, NAc Shell D1-SPNs projections to the Ventral Pallidum (VP, previously demonstrated as crucial for sPIT), but not D2-SPNs, mediates sPIT. They also show that these effects were specific to stimulus-based actions, as value-based choices were left intact in all manipulations.

This is a well-designed study and the results are well supported by the experimental evidence. The paper is extremely pleasant to read and adds to the current literature.

    2. Reviewer #2 (Public review):

      Summary:

      This manuscript by Soegyono et a. describes a series of experiments designed to probe the involvement of dopamine D1 and D2 neurons within the nucleus accumbens shell in outcome-specific Pavlovian-instrumental transfer (osPIT), a well-controlled assay of cue-guided action selection based on congruent outcome associations. They used an optogenetic approach to phasically silence NAc shell D1 (D1-Cre mice) or D2 (A2a-Cre mice) neurons during a subset of osPIT trials. Both manipulations disrupted cue-guided action selection but had no effects on negative control measures/tasks (concomitant approach behavior, separate valued guided choice task), nor were any osPIT impairments found in reporter only control groups. Separate experiments revealed that selective inhibition of NAc shell D1 but not D2 inputs to ventral pallidum were required for osPIT expression, thereby advancing understanding of the basal ganglia circuitry underpinning this important aspect of decision making.

      Strengths:

      The combinatorial viral and optogenetic approaches used here were convincingly validated through anatomical tract-tracing and ex vivo electrophysiology. The behavioral assays are sophisticated and well-controlled to parse cue and value guided action selection. The inclusion of reporter only control groups is rigorous and rules out nonspecific effects of the light manipulation. The findings are novel and address a critical question in the literature. Prior work using less decisive methods had implicated NAc shell D1 neurons in osPIT but suggested that D2 neurons may not be involved. The optogenetic manipulations used in the current study provides a more direct test of their involvement and convincingly demonstrate that both populations play an important role. Prior work had also implicated NAc shell connections to ventral pallidum in osPIT, but the current study reveals the selective involvement of D1 but not D2 neurons in this circuit. The authors do a good job of discussing their findings, including their nuanced interpretation that NAc shell D2 neurons may contribute to osPIT through their local regulation of NAc shell microcircuitry.

      Weaknesses:

      The current study exclusively used an optogenetic approach to probe the function of D1 and D2 NAc shell neurons. Providing a complementary assessment with chemogenetics or other appropriate methods would strengthen conclusions, particularly the novel demonstration for D2 NAc shell involvement. Likewise, the null result of optically inhibiting D2 inputs to ventral pallidum leaves open the possibility that a more complete or sustained disruption of this pathway may have impaired osPIT.

      Conclusions:

      The research described here was successful in providing critical new insights into the contributions of NAc D1 and D2 neurons in cue-guided action selection. The authors' data interpretation and conclusions are well reasoned and appropriate. They also provide a thoughtful discussion of study limitations and implications for future research. This research is therefore likely to have a significant impact on the field.

      Comments on the previous version:

      I have reviewed the rebuttal and revised manuscript and have no remaining concerns.

    3. Author response:

      The following is the authors’ response to the previous reviews

      Reviewer#1 (Public Review):

In the current article, Octavia Soegyono and colleagues study "The influence of nucleus accumbens shell D1 and D2 neurons on outcome-specific Pavlovian instrumental transfer", building on extensive findings from the same lab. While there is a consensus about the specific involvement of the Shell part of the Nucleus Accumbens (NAc) in specific stimulus-based actions in choice settings (and not in General Pavlovian instrumental transfer - gPIT, as opposed to the Core part of the NAc), mechanisms at the cellular and circuitry levels remain to be explored. In the present work, using sophisticated methods (rat Cre-transgenic lines from both sexes, optogenetics and the well-established behavioral paradigm outcome-specific PIT - sPIT), Octavia Soegyono and colleagues decipher the differential contribution of dopamine receptors D1 and D2 expressing-spiny projection neurons (SPNs).

After validating the viral strategy and the specificity of the targeting (immunochemistry and electrophysiology), the authors demonstrate that while both NAc Shell D1- and D2-SPNs participate in mediating sPIT, NAc Shell D1-SPNs projections to the Ventral Pallidum (VP, previously demonstrated as crucial for sPIT), but not D2-SPNs, mediates sPIT. They also show that these effects were specific to stimulus-based actions, as value-based choices were left intact in all manipulations.

This is a well-designed study and the results are well supported by the experimental evidence. The paper is extremely pleasant to read and adds to the current literature.

      We thank the Reviewer for their positive assessment.

      Comments on revisions:  

      We thank the authors for their detailed responses and for addressing our comments and concerns.

      To further improve consistency and transparency, we kindly request that the authors provide, for Supplemental Figures S1-S4, panels E (raw data for lever presses during the PIT test), the individual data points together with the corresponding statistical analyses in the figure legends.

Panel E of Figures S1-S4 now includes the individual data points. The outcome-specific data have already been analysed, and we report these analyses in the main manuscript. These analyses are more informative than those requested by the Reviewer since they report the net effects of the stimuli on choice between actions while controlling for potential individual baseline instrumental performance. All data remain fully transparent and are publicly available on an online repository in accordance with eLife policies (see relevant section in Materials and Methods).

In addition, regarding Supplemental Figure S3, panel E, we note the absence of a PIT effect in the eYFP group under the ON condition, which appears to differ from the net response reported in the main Figure 5, panel B. Could the authors clarify this apparent discrepancy?

      We apologize for the error, which has now been corrected. 

      We also note a discrepancy between the authors' statement in their response ("40 rats excluded based on post-mortem analyses") and the number of excluded animals reported in the Materials and Methods section, which adds up to 47. We kindly ask the authors to clarify this point for consistency.

      We thank the Reviewer for identifying the error reported in our initial response. The total number of animals excluded was 47, as reported in the manuscript. 

      Finally, as a minor point, we suggest indicating the total number of animals used in the study in the Materials and Methods section.

      The total number of animals has been included in the Materials and Methods section.

      Reviewer #2 (Public Review):

      Summary:

This manuscript by Soegyono et a. describes a series of experiments designed to probe the involvement of dopamine D1 and D2 neurons within the nucleus accumbens shell in outcome-specific Pavlovian-instrumental transfer (osPIT), a well-controlled assay of cue-guided action selection based on congruent outcome associations. They used an optogenetic approach to phasically silence NAc shell D1 (D1-Cre mice) or D2 (A2a-Cre mice) neurons during a subset of osPIT trials. Both manipulations disrupted cue-guided action selection but had no effects on negative control measures/tasks (concomitant approach behavior, separate valued guided choice task), nor were any osPIT impairments found in reporter only control groups. Separate experiments revealed that selective inhibition of NAc shell D1 but not D2 inputs to ventral pallidum were required for osPIT expression, thereby advancing understanding of the basal ganglia circuitry underpinning this important aspect of decision making.

      Strengths:

The combinatorial viral and optogenetic approaches used here were convincingly validated through anatomical tract-tracing and ex vivo electrophysiology. The behavioral assays are sophisticated and well-controlled to parse cue and value guided action selection. The inclusion of reporter only control groups is rigorous and rules out nonspecific effects of the light manipulation. The findings are novel and address a critical question in the literature. Prior work using less decisive methods had implicated NAc shell D1 neurons in osPIT but suggested that D2 neurons may not be involved. The optogenetic manipulations used in the current study provides a more direct test of their involvement and convincingly demonstrate that both populations play an important role. Prior work had also implicated NAc shell connections to ventral pallidum in osPIT, but the current study reveals the selective involvement of D1 but not D2 neurons in this circuit. The authors do a good job of discussing their findings, including their nuanced interpretation that NAc shell D2 neurons may contribute to osPIT through their local regulation of NAc shell microcircuitry.

      We thank the Reviewer for their positive assessment.

      Weaknesses:

      The current study exclusively used an optogenetic approach to probe the function of D1 and D2 NAc shell neurons. Providing a complementary assessment with chemogenetics or other appropriate methods would strengthen conclusions, particularly the novel demonstration for D2 NAc shell involvement. Likewise, the null result of optically inhibiting D2 inputs to ventral pallidum leaves open the possibility that a more complete or sustained disruption of this pathway may have impaired osPIT.

We acknowledge the reviewer's valuable suggestion that demonstrating NAc-S D1- and D2-SPNs engagement in outcome-specific PIT through another technique would strengthen our optogenetic findings. Several approaches could provide this validation. Chemogenetic manipulation, as the reviewer suggested, represents one compelling option. Alternatively, immunohistochemical assessment of phosphorylated histone H3 at serine 10 (P-H3) offers another promising avenue, given its established utility in reporting striatal SPNs plasticity in the dorsal striatum (Matamales et al., 2020). We hope to complete such an assessment in future work since it would address the limitations of previous work that relied solely on ERK1/2 phosphorylation measures in NAc-S SPNs (Laurent et al., 2014). The manuscript was modified to report these future avenues of research (page 12).

      Regarding the null result from optical silencing of D2 terminals in the ventral pallidum, we agree with the reviewer's assessment. While we acknowledge this limitation in the current manuscript (page 13), we aim to address this gap in future studies to provide a more complete mechanistic understanding of the circuit.

      Conclusions:

      The research described here was successful in providing critical new insights into the contributions of NAc D1 and D2 neurons in cue-guided action selection. The authors' data interpretation and conclusions are well reasoned and appropriate. They also provide a thoughtful discussion of study limitations and implications for future research. This research is therefore likely to have a significant impact on the field.

      We thank the Reviewer for their positive assessment.

      Comments on revisions:

      I have reviewed the rebuttal and revised manuscript and have no remaining concerns.

      We are pleased to have addressed the Reviewer’s query.

      References

      Laurent, V., Bertran-Gonzalez, J., Chieng, B. C., & Balleine, B. W. (2014). δ-Opioid and Dopaminergic Processes in Accumbens Shell Modulate the Cholinergic Control of Predictive Learning and Choice. J Neurosci, 34(4), 1358-1369. https://doi.org/10.1523/JNEUROSCI.4592-13.2014

      Matamales, M., McGovern, A. E., Mi, J. D., Mazzone, S. B., Balleine, B. W., & Bertran-Gonzalez, J. (2020). Local D2- to D1-neuron transmodulation updates goal-directed learning in the striatum. Science, 367(6477), 549-555. https://doi.org/10.1126/science.aaz5751

    1. eLife Assessment

      This paper addresses the significant question of quantifying epistasis patterns, which affect the predictability of evolution, by reanalyzing a recently published combinatorial deep mutational scan experiment. The findings are useful, showing that epistasis is fluid, i.e. strongly background dependent, but that fitness effects of mutations are statistically predictable based on the background fitness. While the general approach appears solid, some claims remain incompletely supported by the analysis, as arbitrary cutoffs are used and the description of methods lacks specifics. This analysis should be of interest to the community working on fitness landscapes.

    2. Reviewer #1 (Public review):

      The paper reports some interesting patterns in epistasis in a recently published large fitness landscape dataset. The results may have implications for our understanding of fitness landscapes and protein evolution. However, this version of the paper remains fairly descriptive and has significant deficiencies in clarity and rigor.

      The authors have addressed some of my criticisms (e.g., I appreciate the additional analysis of synonymous mutations, and a more rigorous approach to calling fitness peaks), but many of the issues raised in my first round of review remain in the current version. Frankly, I am quite disappointed that the authors did not address my comments point by point, which is the norm. The remaining (and some new) issues are below.

      (1a) (Modified from first round) I previously suggested to dissect what appears to be three different patterns of epistasis: "strong" and "weak" global epistasis and what one could call "purely idiosyncratic", i.e., not dependent on background fitness. The authors attempted to address this, but I don't think what they have done is sufficient. They make a statement "The lethal mutations have a slope smaller than -0.7 and average slope of -0.98. The remaining mutations all have a slope greater than -0.56" (LL 274-276), but there is no evidence provided to support this claim. This is a strong and I think interesting statement (btw, how is "lethal" defined?) and warrants a dedicated figure. This statement suggests that the mixed patterns shown in Figure 5 can actually be meaningfully separated. Why don't the authors show this? Instead, they still claim "overall, global epistasis is not very strong on the folA landscape" (LL. 273-274). I maintain that this claim does not quite capture the observations.

      Later in the text there is a whole section called "Only a small fraction of mutations exhibit strong global epistasis", which also seems related to this issue. First, I don't follow the logic here. Why is this section separate from this initial discussion? Second, here the authors claim "only a small subset of mutations exhibits strong global epistasis (R^2 > 0.5)" and then "This sharp contrast suggests a binary behavior of mutations: they either exhibit strong global epistasis (R2 > 0.5), or not (R2 < 0.5)." But this R^2 threshold seems arbitrary, and I don't see any statistical support for this binary nature.

      (1b) (Verbatim from first round) Another rather remarkable feature of this plot is that the slopes of the strong global epistasis patterns seem to be very similar across mutations. Is this the case? Is there anything special about this slope? For example, does this slope simply reflect the fact that a given mutation becomes essentially lethal (i.e., produces the same minimal fitness) in a certain set of background genotypes?

      (1c) (Verbatim from first round) Finally, how consistent are these patterns with some null expectations? Specifically, would one expect the same distribution of global epistasis slopes on an uncorrelated landscape? Are the pivot points unusually clustered relative to an expectation on an uncorrelated landscape?

      (1d) (Verbatim from first round) The shapes of the DFE shown in Figure 7 are also quite interesting, particularly the bimodal nature of the DFE in high-fitness (HF) backgrounds. I think this bimodality must be a reflection of clustering of mutation-background combinations mentioned above. I think the authors ought to draw this connection explicitly. Do all HF backgrounds have a bimodal DFE? What mutations occupy the "moving" peak?

      (1e) (Modified from first round). I still don't understand why there are qualitative differences in the shape of the DFE between functional and non-functional backgrounds (Figure 8B,C). Why is the transition between the bimodal DFE in Figure 8B and the unimodal DFE in Figure 8C so abrupt? Perhaps the authors can plot the DFEs for all backgrounds on the same plot and just draw a line that separates functional and non-functional backgrounds so that the reader can better see whether DFE shape changes gradually or abruptly.

      (1f) (Modified from first round) I am now more convinced that synonymous mutations alter epistasis and behave differently than non-synonymous mutations, but I still have some questions. (i) I would have liked a side-by-side comparison of synonymous and non-synonymous mutations, both in terms of their effects on fitness and on epistasis. (ii) The authors claim (LL 278-286) that "synonymous substitutions tend to follow two recurring behaviors" but this is not shown. To demonstrate this, the authors ought to plot (for example) the distribution of slopes of regression lines. Is this distribution actually bimodal? (iii) Later in the same paragraph the authors say "synonymous changes do not exhibit very strong background fitness-dependence". I don't see how this follows from the previous discussion.

      (2) The authors claim to have improved statistical rigor of their analysis, but the Methods section is really thin and inadequate for understanding how the statistical analyses were done.

      (3) In general, I notice a regrettable lack of attention to detail in the text, which makes me worried about a similar problem in the actual data analysis. Here are a few examples. (i) Throughout the text, the authors now refer to functional and non-functional genotypes, but several figures and captions retained the old HF and LF designations. (ii) Figure 7 is called Figure 8. (iii) Figure 3B is not discussed, though it logically precedes Figure 3A and 3C. (iv) Many of my comments, especially minor, were not addressed at all.

    3. Reviewer #3 (Public review):

      Summary:

      The authors have studied a previously published large dataset on the fitness landscape of a 9 base-pair region of the folA gene. The objective of the paper is to understand various aspects of epistasis in this system, which the authors have achieved through detailed and computationally expensive exploration of the landscape. The authors describe epistasis in this system as "fluid", meaning that it depends sensitively on the genetic background, thereby reducing the predictability of evolution at the genetic level. However, the study also finds some robust patterns. The first is the existence of a "pivot point" for a majority of mutations, which is a fixed growth rate at which the effect of mutations switches from beneficial to deleterious (consistent with a previous study on the topic). The second is the observation that the distribution of fitness effects (DFE) of mutations is predicted quite well by the fitness of the genotype, especially for high-fitness genotypes. While the work does not offer a synthesis of the multitude of reported results, the information provided here raises interesting questions for future studies in this field.

      Strengths:

      A major strength of the study is its multifaceted approach, which has helped the authors tease out a number of interesting epistatic properties. The study makes a timely contribution by focusing on topical issues like global epistasis, the existence of pivot points, and the dependence of DFE on the background genotype and its fitness.

      The authors have classified pairwise epistasis into six types, and found that the type of epistasis changes depending on background mutations. Switches happen more frequently for mutations at functionally important sites. Interestingly, the authors find that even synonymous mutations can alter the epistatic interaction between mutations in other codons, and this effect is uncorrelated with the direct fitness effects of the synonymous mutations. Alongside the observations of "fluidity", the study reports limited instances of global epistasis (which predicts a simple linear relationship between the size of a mutational effect and the fitness of the genetic background in which it occurs). Overall, the work presents strong evidence for the genetic context-dependent nature of epistasis in this system.

      Weaknesses:

      Despite the wealth of information provided by the study, there are a few points of concern.

      The authors find that in non-functional genotypic backgrounds, most pairs of mutations display no epistasis. However, we do not know if this is simply because a significant epistatic signal is hard to detect, since all the fitness values involved in calculating epistasis are small (and therefore noise-prone). A control can be done by determining whether statistically significant differences exist among the fitness values themselves. In the absence of such information, it is hard to understand whether the classification of epistasis for non-functional backgrounds into discrete categories, such as in Fig 1C, is meaningful.

      The authors have looked for global epistasis (i.e. a negative dependence of mutational fitness effect on background fitness) in all 108 (9x12) mutations in the landscape. They report that the majority of the mutations (77/108 or about 71 per cent) display weak correlation between fitness effect and background fitness (R^2<0.2), and a relatively small proportion show particularly strong correlation (R^2>0.5). They therefore conclude that global epistasis in this system is 'binary', meaning that strong global epistasis is restricted to a few sites, whereas weak global epistasis occurs in the rest (Figure 5). Precise definitions of 'strong' and 'weak' are not given in the text, but the authors do mention that they are interested here primarily in detecting whether a correlation with background fitness exists or not. This again raises the question of the extent to which the low (and possibly noisy) fitness values of non-functional backgrounds can confound the results. For example, would the results be much the same if the analysis were repeated with only high-fitness backgrounds or only those sets of genotypes where the fitness differences between backgrounds and mutants were significant?

      Apart from this, I am also a bit conceptually perplexed by the term 'binary behavior', which suggests that the R^2 values should belong to two distinct classes; but, even assuming that the reported results are robust, Figure S12 shows that most values are 0.2 or less whereas higher values are more or less evenly distributed in the range 0.2-1.0, rather than showing an overall bimodal pattern. An especially confusing remark by the authors in this regard is the following: "This sharp contrast suggests a binary behavior of mutations: they either exhibit strong global epistasis (R^2 > 0.5), or not (R^2 < 0.5)."

      Conclusions: As large datasets on empirical fitness landscapes become increasingly available, more computational studies are needed to extract as much information from them as possible. The authors have made a timely effort in this direction. It is particularly instructive to learn from the work that higher-order epistasis is pervasive in the studied intragenic landscape, at least in functional genotypic backgrounds. Some of the analysis and interpretations in the paper require careful scrutiny, and the lack of a synthesis of the multitude of reported results leaves something to be desired. But the paper contains intriguing observations that can fuel further research into the factors shaping the topography of complex landscapes.

    4. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review): 

      This paper describes a number of patterns of epistasis in a large fitness landscape dataset recently published by Papkou et al. The paper is motivated by an important goal in the field of evolutionary biology to understand the statistical structure of epistasis in protein fitness landscapes, and it capitalizes on the unique opportunities presented by this new dataset to address this problem. 

      The paper reports some interesting previously unobserved patterns that may have implications for our understanding of fitness landscapes and protein evolution. In particular, Figure 5 is very intriguing. However, I have two major concerns detailed below. First, I found the paper rather descriptive (it makes little attempt to gain deeper insights into the origins of the observed patterns) and unfocused (it reports what appears to be a disjointed collection of various statistics without a clear narrative. Second, I have concerns with the statistical rigor of the work. 

      (1) I think Figures 5 and 7 are the main, most interesting, and novel results of the paper. However, I don't think that the statement "Only a small fraction of mutations exhibit global epistasis" accurately describes what we see in Figure 5. To me, the most striking feature of this figure is that the effects of most mutations at all sites appear to be a mixture of three patterns. The most interesting pattern noted by the authors is of course the "strong" global epistasis, i.e., when the effect of a mutation is highly negatively correlated with the fitness of the background genotype. The second pattern is a "weak" global epistasis, where the correlation with background fitness is much weaker or non-existent. The third pattern is the vertically spread-out cluster at low-fitness backgrounds, i.e., a mutation has a wide range of mostly positive effects that are clearly not correlated with fitness. What is very interesting to me is that all background genotypes fall into these three groups with respect to almost every mutation, but the proportions of the three groups are different for different mutations. In contrast to the authors' statement, it seems to me that almost all mutations display strong global epistasis in at least a subset of backgrounds. A clear example is C>A mutation at site 3. 

      (1a) I think the authors ought to try to dissect these patterns and investigate them separately rather than lumping them all together and declaring that global epistasis is rare. For example, I would like to know whether those backgrounds in which mutations exhibit strong global epistasis are the same for all mutations or whether they are mutation- or perhaps position-specific. Both answers could be potentially very interesting, either pointing to some specific site-site interactions or, alternatively, suggesting that the statistical patterns are conserved despite variation in the underlying interactions. 

      (1b) Another rather remarkable feature of this plot is that the slopes of the strong global epistasis patterns seem to be very similar across mutations. Is this the case? Is there anything special about this slope? For example, does this slope simply reflect the fact that a given mutation becomes essentially lethal (i.e., produces the same minimal fitness) in a certain set of background genotypes? 

      (1c) Finally, how consistent are these patterns with some null expectations? Specifically, would one expect the same distribution of global epistasis slopes on an uncorrelated landscape? Are the pivot points unusually clustered relative to an expectation on an uncorrelated landscape? 

      (1d) The shapes of the DFE shown in Figure 7 are also quite interesting, particularly the bimodal nature of the DFE in high-fitness (HF) backgrounds. I think this bimodality must be a reflection of the clustering of mutation-background combinations mentioned above. I think the authors ought to draw this connection explicitly. Do all HF backgrounds have a bimodal DFE? What mutations occupy the "moving" peak? 

      (1e) In several figures, the authors compare the patterns for HF and low-fitness (LF) genotypes. In some cases, there are some stark differences between these two groups, most notably in the shape of the DFE (Figure 7B, C). But there is no discussion about what could underlie these differences. Why are the statistics of epistasis different for HF and LF genotypes? Can the authors at least speculate about possible reasons? Why do HF and LF genotypes have qualitatively different DFEs? I actually don't quite understand why the transition between bimodal DFE in Figure 7B and unimodal DFE in Figure 7C is so abrupt. Is there something biologically special about the threshold that separates LF and HF genotypes? My understanding was that this was just a statistical cutoff. Perhaps the authors can plot the DFEs for all backgrounds on the same plot and just draw a line that separates HF and LF backgrounds so that the reader can better see whether the DFE shape changes gradually or abruptly.

      (1f) The analysis of the synonymous mutations is also interesting. However, I think a few additional analyses are necessary to clarify what is happening here. I would like to know the extent to which synonymous mutations are more often neutral compared to non-synonymous ones. Then, do synonymous pairs interact in the same way as non-synonymous pairs (i.e., plot Figure 1 for synonymous pairs)? Do synonymous or non-synonymous mutations that are neutral exhibit less epistasis than non-neutral ones? Finally, do non-synonymous mutations alter epistasis among other mutations more often than synonymous mutations do? What about synonymous-neutral versus synonymous-non-neutral? Basically, I'd like to understand the extent to which a mutation that is neutral in a given background is more or less likely to alter epistasis between other mutations than a non-neutral mutation in the same background. 

      (2) I have two related methodological concerns. First, in several analyses, the authors employ thresholds that appear to be arbitrary. And second, I did not see any account of measurement errors. For example, the authors chose the 0.05 threshold to distinguish between epistasis and no epistasis, but why this particular threshold was chosen is not justified. Another example: whether the product s12 × (s1 + s2) is greater or smaller than zero for any given mutation is uncertain due to measurement errors. Presumably, how to classify each pair of mutations should depend on the precision with which the fitness of mutants is measured. These thresholds could well be different across mutants. We know, for example, that low-fitness mutants typically have noisier fitness estimates than high-fitness mutants. I think the authors should use a statistically rigorous procedure to categorize mutations and their epistatic interactions. I think it is very important to address this issue. I got very concerned about it when I saw on LL 383-388 that synonymous stop codon mutations appear to modulate epistasis among other mutations. This seems very strange to me and makes me quite worried that this is a result of noise in LF genotypes. 

      Thank you for your review of the manuscript. In the revised version, we have addressed both major criticisms, as detailed below.

      When carefully examining the plots in Figure 5 independently, we indeed observe that the fitness effect of a mutation on different genetic backgrounds can be classified into three characteristic patterns. Our reasoning for these patterns is as follows:

      Strong correlation: Typically observed when the mutation is lethal across backgrounds. Linear regression of mutations exhibiting strong global epistasis shows slopes close to −1 and pivot points near −0.7 (Table S4). Since the reported fitness threshold is −0.508, these mutations push otherwise functional backgrounds into the non-functional range, consistent with lethal effects.

      Weak correlation: Observed when a mutation has no significant effect on fitness across backgrounds, consistent with neutrality.

      No correlation: Out of the 261,333 reported variants, 243,303 (93%) lie below the fitness threshold of −0.508, indicating that the low-fitness region is densely populated by nonfunctional variants. The “strong correlation” and “weak correlation” lines intersect in this zone. Most mutations in this region have little effect (neutral), but occasional abrupt fitness increases correspond to “resurrecting” mutations, the converse of lethal changes. For example, mutations such as X→G at locus 4 or X→A at locus 5 restore function, while the reverse changes (e.g. C→A at locus 3) are lethal.

      Thus, the “no-correlation” pattern is largely explained by mutations that reverse the effect of lethal changes, effectively resurrecting non-functional variants. In the revised manuscript, we highlight these nuances within the broader classification of fitness effect versus background fitness (pp. 10–13).
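
      To make this classification concrete, the following is a minimal sketch (not the code used in the manuscript) of how a mutation's pattern could be called from its fitness effects across backgrounds via linear regression; the slope and R² cutoffs mirror the values quoted above, and the synthetic lethal-like mutation is an illustrative assumption.

```python
import numpy as np

def classify_global_epistasis(bg_fitness, effect, slope_cut=-0.7, r2_cut=0.5):
    """Regress a mutation's fitness effect on background fitness.

    Returns the regression slope, R^2, the pivot point (background fitness
    at which the predicted effect crosses zero), and a rough pattern label.
    """
    bg_fitness = np.asarray(bg_fitness)
    effect = np.asarray(effect)
    slope, intercept = np.polyfit(bg_fitness, effect, 1)
    pred = slope * bg_fitness + intercept
    ss_res = np.sum((effect - pred) ** 2)
    ss_tot = np.sum((effect - effect.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
    pivot = -intercept / slope if slope != 0 else np.nan
    if r2 > r2_cut and slope < slope_cut:
        pattern = "strong"      # lethal-like: slope near -1
    elif r2 > r2_cut:
        pattern = "moderate"
    else:
        pattern = "weak/none"   # effect roughly independent of background
    return slope, r2, pivot, pattern

# Synthetic example: a lethal-like mutation drives every background to the
# same minimal fitness (-0.7), so its effect is ~ (f_min - f_bg), slope ~ -1.
rng = np.random.default_rng(0)
bg = rng.uniform(-0.5, 0.0, 200)
eff = (-0.7 - bg) + rng.normal(0, 0.02, 200)
slope, r2, pivot, pattern = classify_global_epistasis(bg, eff)
```

      On such data the fitted slope is close to −1 and the pivot point sits near the fitness floor, matching the signature attributed above to lethal mutations.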

      Additional analyses included in the revision:

      Synonymous vs. non-synonymous pairs: We repeated the Figure 1 analysis for synonymous–synonymous pairs. As expected, synonymous pairs exhibit lower overall frequencies of epistasis, consistent with their greater neutrality. However, the qualitative spectrum remains similar: positive and negative epistasis dominate, while sign epistasis is rare (Supplementary Figs. S6–S7, S9–S10).

      Fitness effect vs. epistasis change: We tested whether the mean fitness effect of a mutation correlates with the percent of cases in which it changes the nature of epistasis. No correlation was found (R² ≈ 0.11), and this analysis is now included in the revised manuscript.

      Epistasis-modulating ability: Non-synonymous mutations more frequently alter the interactions between other mutations than synonymous substitutions. Within synonymous substitutions, the subset with measurable fitness effects disproportionately contributes to epistasis modulation. Thus, the ability of synonymous substitutions to modulate epistasis arises primarily from the non-neutral subset.

      These analyses clarify the role of synonymous mutations in reshaping epistasis on the folA landscape.

      Revision of statistical treatment of epistasis:

      In our original submission, we used an arbitrary threshold of 0.05 to classify the presence or absence of epistasis, following Papkou et al., who based conclusions on a single experimental replicate. However, as the reviewer correctly noted, this does not adequately account for measurement variability across different genotypes.

      In the revised manuscript, we adopt a statistically rigorous framework that incorporates replicate-based error directly. Specifically, we now use the mean fitness across six independent replicates, together with the corresponding standard deviation, to classify fitness peaks and epistasis. This eliminates arbitrary thresholds and ensures that epistatic classifications reflect the precision of measurements for each genotype.
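
      As an illustration of this framework, the sketch below (hypothetical, not our actual pipeline) classifies pairwise epistasis directly from replicate fitness values, propagating replicate standard errors in quadrature instead of applying a fixed 0.05 cutoff; the z = 2 criterion and the toy replicate values are assumptions.

```python
import numpy as np

def epistasis_with_error(f00, f10, f01, f11, z=2.0):
    """Classify pairwise epistasis from replicate fitness measurements.

    Each argument is an array of replicate fitness values for one genotype
    (background, the two single mutants, the double mutant). Epistasis is
    eps = f11 - f10 - f01 + f00, and its standard error is propagated in
    quadrature from the four replicate standard errors. A pair is called
    epistatic only when |eps| exceeds z standard errors.
    """
    reps = [np.asarray(f, dtype=float) for f in (f00, f10, f01, f11)]
    means = [r.mean() for r in reps]
    sems = [r.std(ddof=1) / np.sqrt(len(r)) for r in reps]
    eps = means[3] - means[1] - means[2] + means[0]
    se = np.sqrt(sum(s ** 2 for s in sems))
    if abs(eps) <= z * se:
        cat = "none"
    elif eps > 0:
        cat = "positive"
    else:
        cat = "negative"
    return eps, cat

# Toy data: six replicates per genotype; the double mutant is 0.1 worse
# than additivity predicts, well outside the replicate noise.
f00 = np.array([0.00, 0.01, -0.01, 0.00, 0.01, -0.01])
eps, cat = epistasis_with_error(f00, f00 - 0.1, f00 - 0.1, f00 - 0.3)
```

      Because the calling threshold scales with each genotype's own replicate scatter, noisier low-fitness genotypes automatically face a stricter bar, which is the behavior the fixed 0.05 cutoff lacked.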

      This revision led to both quantitative and qualitative changes:

      For high-fitness genotypes, the core patterns of higher-order (“fluid”) epistasis remain robust (Figures 2–3).

      For low-fitness genotypes, incorporating replicate-based error removed spurious fluidity effects, yielding a more accurate characterization of epistasis (Figures 2–3; Supplementary Figs. S6–S7, S9–S10).

      We describe these methodological changes in detail in the revised Methods section and provide updated code.

      Together, these revisions directly address the reviewer’s concerns. They improve the statistical rigor of our analysis, strengthen the robustness of our conclusions, and underscore the importance of accounting for measurement error in large-scale fitness landscape studies—a point we now emphasize in the manuscript.

      Reviewer #2 (Public review): 

      Significance: 

      This paper reanalyzes an experimental fitness landscape generated by Papkou et al., who assayed the fitness of all possible combinations of 4 nucleotide states at 9 sites in the E. coli DHFR gene, which confers antibiotic resistance. The 9 nucleotide sites make up 3 amino acid sites in the protein, of which one was shown to be the primary determinant of fitness by Papkou et al. This paper sought to assess whether pairwise epistatic interactions differ among genetic backgrounds at other sites and whether there are major patterns in any such differences. They use a "double mutant cycle" approach to quantify pairwise epistasis, where the epistatic interaction between two mutations is the difference between the measured fitness of the double-mutant and its predicted fitness in the absence of epistasis (which equals the sum of individual effects of each mutation observed in the single mutants relative to the reference genotype). The paper claims that epistasis is "fluid," because pairwise epistatic effects often differ depending on the genetic state at the other site. It also claims that this fluidity is "binary," because pairwise effects depend strongly on the state at nucleotide positions 5 and 6 but weakly on those at other sites. Finally, they compare the distribution of fitness effects (DFE) of single mutations for starting genotypes with similar fitness and find that despite the apparent "fluidity" of interactions this distribution is well-predicted by the fitness of the starting genotype. 

      The paper addresses an important question for genetics and evolution: how complex and unpredictable are the effects and interactions among mutations in a protein? Epistasis can make the phenotype hard to predict from the genotype and also affect the evolutionary navigability of a genotype landscape. Whether pairwise epistatic interactions depend on genetic background - that is, whether there are important high-order interactions -- is important because interactions of order greater than pairwise would make phenotypes especially idiosyncratic and difficult to predict from the genotype (or by extrapolating from experimentally measured phenotypes of genotypes randomly sampled from the huge space of possible genotypes). Another interesting question is the sparsity of such high-order interactions: if they exist but mostly depend on a small number of identifiable sequence sites in the background, then this would drastically reduce the complexity and idiosyncrasy relative to a landscape on which "fluidity" involves interactions among groups of all sites in the protein. A number of papers in the recent literature have addressed the topics of high-order epistasis and sparsity and have come to conflicting conclusions. This paper contributes to that body of literature with a case study of one published experimental dataset of high quality. The findings are therefore potentially significant if convincingly supported. 

      Validity: 

      In my judgment, the major conclusions of this paper are not well supported by the data. There are three major problems with the analysis. 

      (1) Lack of statistical tests. The authors conclude that pairwise interactions differ among backgrounds, but no statistical analysis is provided to establish that the observed differences are statistically significant, rather than being attributable to error and noise in the assay measurements. It has been established previously that the methods the authors use to estimate high-order interactions can result in inflated inferences of epistasis because of the propagation of measurement noise (see PMID 31527666 and 39261454). Error propagation can be extreme because first-order mutation effects are calculated as the difference between the measured phenotype of a single-mutant variant and the reference genotype; pairwise effects are then calculated as the difference between the measured phenotype of a double mutant and the sum of the differences described above for the single mutants. This paper claims fluidity when this latter difference itself differs when assessed in two different backgrounds. At each step of these calculations, measurement noise propagates. Because no statistical analysis is provided to evaluate whether these observed differences are greater than expected because of propagated error, the paper has not convincingly established or quantified "fluidity" in epistatic effects. 

      (2) Arbitrary cutoffs. Many of the analyses involve assigning pairwise interactions into discrete categories, based on the magnitude and direction of the difference between the predicted and observed phenotypes for a pairwise mutant. For example, the authors categorize a pairwise interaction as positive if the apparent deviation of phenotype from prediction is >0.05, negative if the deviation is <-0.05, and no interaction if the deviation is between these cutoffs. Fluidity is diagnosed when the category for a pairwise interaction differs among backgrounds. These cutoffs are essentially arbitrary, and the effects are assigned to categories without assessing statistical significance. For example, an interaction of 0.06 in one background and 0.04 in another would be classified as fluid, but it is very plausible that such a difference would arise due to error alone. The frequency of epistatic interactions in each category as claimed in the paper, as well as the extent of fluidity across backgrounds, could therefore be systematically overestimated or underestimated, affecting the major conclusions of the study. 

      (3) Global nonlinearities. The analyses do not consider the fact that apparent fluidity could be attributable to the fact that fitness measurements are bounded by a minimum (the fitness of cells carrying proteins in which DHFR is essentially nonfunctional) and a maximum (the fitness of cells in which some biological factor other than DHFR function is limiting for fitness). The data are clearly bounded; the original Papkou et al. paper states that 93% of genotypes are at the low-fitness limit at which deleterious effects no longer influence fitness. Because of this bounding, mutations that are strongly deleterious to DHFR function will therefore have an apparently smaller effect when introduced in combination with other deleterious mutations, leading to apparent epistatic interactions; moreover, these apparent interactions will have different magnitudes if they are introduced into backgrounds that themselves differ in DHFR function/fitness, leading to apparent "fluidity" of these interactions. This is a well-established issue in the literature (see PMIDs 30037990, 28100592, 39261454). It is therefore important to adjust for these global nonlinearities before assessing interactions, but the authors have not done this. 
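
      The reviewer's bounding argument can be illustrated with a minimal simulation (assumed values: a measurement plateau at −0.7, matching the fitness floor discussed in this correspondence, and a purely additive mutation with true effect −1.0):

```python
import numpy as np

def measured(f_true, f_min=-0.7):
    """Fitness assay with a lower detection bound (nonfunctional plateau)."""
    return np.maximum(f_true, f_min)

def apparent_effect(bg_true, mut_effect, f_min=-0.7):
    """Apparent (measured) effect of a mutation on a given background.

    The underlying genetics is purely additive; any background-dependence
    of the apparent effect is an artifact of the measurement bound.
    """
    return measured(bg_true + mut_effect, f_min) - measured(bg_true, f_min)

# A strongly deleterious mutation (true additive effect -1.0) appears
# progressively weaker as background fitness approaches the bound, and
# neutral on backgrounds already at the plateau -- apparent "global
# epistasis" with slope ~ -1 despite zero true interaction.
high = apparent_effect(0.0, -1.0)   # clipped to the bound
mid = apparent_effect(-0.4, -1.0)   # partially masked
low = apparent_effect(-0.9, -1.0)   # background already at plateau
```

      In this toy model the apparent effect declines linearly with background fitness and vanishes at the plateau, reproducing both the near −1 slopes and the neutral cluster of low-fitness backgrounds without any genuine epistasis.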

      This global nonlinearity could explain much of the fluidity claimed in this paper. It could explain the observation that epistasis does not seem to depend as much on genetic background for low-fitness backgrounds, and the latter is constant (Figure 2B and 2C): these patterns would arise simply because the effects of deleterious mutations are all epistatically masked in backgrounds that are already near the fitness minimum. It would also explain the observations in Figure 7. For background genotypes with relatively high fitness, there are two distinct peaks of fitness effects, which likely correspond to neutral mutations and deleterious mutations that bring fitness to the lower bound of measurement; as the fitness of the background declines, the deleterious mutations have a smaller effect, so the two peaks draw closer to each other, and in the lowest-fitness backgrounds, they collapse into a single unimodal distribution in which all mutations are approximately neutral (with the distribution reflecting only noise). Global nonlinearity could also explain the apparent "binary" nature of epistasis. Sites 4 and 5 change the second amino acid, and the Papkou paper shows that only 3 amino acid states (C, D, and E) are compatible with function; all others abolish function and yield lower-bound fitness, while mutations at other sites have much weaker effects. The apparent binary nature of epistasis in Figure 5 corresponds to these effects given the nonlinearity of the fitness assay. Most mutations are close to neutral irrespective of the fitness of the background into which they are introduced: these are the "non-epistatic" mutations in the binary scheme. For the mutations at sites 4 and 5 that abolish one of the beneficial mutations, however, these have a strong background-dependence: they are very deleterious when introduced into a high-fitness background but their impact shrinks as they are introduced into backgrounds with progressively lower fitness. 
The apparent "binary" nature of global epistasis is likely to be a simple artifact of bounding and the bimodal distribution of functional effects: neutral mutations are insensitive to background, while the magnitude of the fitness effect of deleterious mutations declines with background fitness because they are masked by the lower bound. The authors' statement is that "global epistasis often does not hold." This is not established. A more plausible conclusion is that global epistasis imposed by the phenotype limits affects all mutations, but it does so in a nonlinear fashion. 

      In conclusion, most of the major claims in the paper could be artifactual. Much of the claimed pairwise epistasis could be caused by measurement noise, the use of arbitrary cutoffs, and the lack of adjustment for global nonlinearity. Much of the fluidity or higher-order epistasis could be attributable to the same issues. And the apparently binary nature of global epistasis is also the expected result of this nonlinearity. 

      We thank the reviewer for raising this important concern. We fully agree that the use of arbitrary thresholds in the earlier version of the manuscript, together with the lack of an explicit treatment of measurement error, could compromise the rigor of our conclusions. To address this, we have undertaken a thorough re-analysis of the folA landscape.

      (1)  Incorporating measurement error and avoiding noise-driven artifacts

      In the original version, we followed Papkou et al. in using a single experimental replicate and applying fixed thresholds to classify epistasis. As the reviewer correctly notes, this approach allows noise to propagate from single-mutant measurements to double-mutant effects, and ultimately to higher-order epistasis.

      In the revised analysis, we now:

      Use the mean fitness across all six independent replicates for each genotype.

      Incorporate the corresponding standard deviation as a measure of experimental error.

      Classify epistatic interactions only when differences between a genotype and its neighbors exceed combined error margins, rather than using a fixed cutoff.

      This ensures that observed changes in epistasis are statistically distinguishable from noise. Details are provided in the revised Methods section and updated code.
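
      As a concrete sketch of this criterion (illustrative only; the function and variable names below are ours, not taken from the released analysis code), a pairwise interaction can be classified against propagated replicate error as follows:

```python
import math

def classify_epistasis(f_wt, f_a, f_b, f_ab, s_wt, s_a, s_b, s_ab):
    """Classify a pairwise interaction as 'positive', 'negative', or 'none'.

    f_* are replicate-mean fitnesses (wild type, single mutants, double
    mutant); s_* are the corresponding standard deviations across the six
    replicates. An interaction is called only when the deviation from the
    additive expectation exceeds the combined (propagated) error.
    """
    expected = f_a + f_b - f_wt                 # additive expectation
    epsilon = f_ab - expected                   # deviation from additivity
    # propagate the four replicate errors in quadrature
    sigma = math.sqrt(s_ab**2 + s_a**2 + s_b**2 + s_wt**2)
    if abs(epsilon) <= sigma:
        return "none"
    return "positive" if epsilon > 0 else "negative"
```

      For example, with f_wt = 1.0, f_a = 0.8, f_b = 0.9, and f_ab = 1.2 (replicate SDs of 0.01 each), the additive expectation is 0.7 and the deviation of +0.5 far exceeds the propagated error, so the pair is called positively epistatic.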

      (2) Replacing arbitrary thresholds with error-based criteria

      Previously, we used an arbitrary ±0.05 cutoff to define the presence/absence of epistasis. As the reviewer notes, this could misclassify interactions (e.g. labeling an effect as “fluid” when the difference lies within error). In the revised framework, these thresholds have been eliminated. Instead, interactions are classified based on whether their distributions overlap within replicate variance.

      This approach scales naturally with measurement precision, which differs between high-fitness and low-fitness genotypes, and removes the need for a universal cutoff.

      (3) Consequences of re-analysis

      Implementing this revised framework produced several important updates:

      High-fitness backgrounds: The qualitative picture of higher-order (“fluid”) epistasis remains robust. The patterns reported originally are preserved.

      Low-fitness backgrounds: Accounting for replicate variance revealed that part of the previously inferred “fluidity” arose from noise. These spurious effects are now removed, giving a more conservative but more accurate view of epistasis in non-functional regions.

      Fitness peaks: Our replicate-aware analysis identifies 127 peaks, compared to 514 in Papkou et al. Importantly, all 127 peaks occur in functional regions of the landscape. This difference highlights the importance of replicate-based error treatment: relying on a single run without demonstrating repeatability can yield artifacts.

      (4) Addressing bounding effects and terminology

      We also agree with the reviewer that bounding effects, arising from the biological limits of fitness, can create apparent nonlinearities in the genotype–phenotype map. To clarify this, we made the following changes:

      Terminology: We now use the term higher-order epistasis instead of fluid epistasis, emphasizing that the observed background-dependence involves more than two mutations and cannot be explained by global nonlinearities alone.

      We also clarify the definitions of sign-epistasis used in this work.

      By replacing arbitrary cutoffs with replicate-based error estimates and by explicitly considering bounding effects, we have substantially increased the rigor of our analysis. While this reanalysis led to both quantitative and qualitative changes in some regions, the central conclusion remains unchanged: higher-order epistasis is pervasive in the folA landscape, especially in functional backgrounds.

      All analysis scripts and codes are provided as Supplementary Material.

      Reviewer #3 (Public review): 

      Summary: 

      The authors have studied a previously published large dataset on the fitness landscape of a 9 base-pair region of the folA gene. The objective of the paper is to understand various aspects of epistasis in this system, which the authors have achieved through detailed and computationally expensive exploration of the landscape. The authors describe epistasis in this system as "fluid", meaning that it depends sensitively on the genetic background, thereby reducing the predictability of evolution at the genetic level. However, the study also finds two robust patterns. The first is the existence of a "pivot point" for a majority of mutations, which is a fixed growth rate at which the effect of mutations switches from beneficial to deleterious (consistent with a previous study on the topic). The second is the observation that the distribution of fitness effects (DFE) of mutations is predicted quite well by the fitness of the genotype, especially for high-fitness genotypes. While the work does not offer a synthesis of the multitude of reported results, the information provided here raises interesting questions for future studies in this field. 

      Strengths: 

      A major strength of the study is its detailed and multifaceted approach, which has helped the authors tease out a number of interesting epistatic properties. The study makes a timely contribution by focusing on topical issues like the prevalence of global epistasis, the existence of pivot points, and the dependence of DFE on the background genotype and its fitness. The methodology is presented in a largely transparent manner, which makes it easy to interpret and evaluate the results. 

      The authors have classified pairwise epistasis into six types and found that the type of epistasis changes depending on background mutations. Switches happen more frequently for mutations at functionally important sites. Interestingly, the authors find that even synonymous mutations in stop codons can alter the epistatic interaction between mutations in other codons. Consistent with these observations of "fluidity", the study reports limited instances of global epistasis (which predicts a simple linear relationship between the size of a mutational effect and the fitness of the genetic background in which it occurs). Overall, the work presents some evidence for the genetic context-dependent nature of epistasis in this system. 

      Weaknesses: 

      Despite the wealth of information provided by the study, there are some shortcomings of the paper which must be mentioned. 

      (1) In the Significance Statement, the authors say that the "fluid" nature of epistasis is a previously unknown property. This is not accurate. What the authors describe as "fluidity" is essentially the prevalence of certain forms of higher-order epistasis (i.e., epistasis beyond pairwise mutational interactions). The existence of higher-order epistasis is a well-known feature of many landscapes. For example, in an early work, (Szendro et. al., J. Stat. Mech., 2013), the presence of a significant degree of higher-order epistasis was reported for a number of empirical fitness landscapes. Likewise, (Weinreich et. al., Curr. Opin. Genet. Dev., 2013) analysed several fitness landscapes and found that higher-order epistatic terms were on average larger than the pairwise term in nearly all cases. They further showed that ignoring higher-order epistasis leads to a significant overestimate of accessible evolutionary paths. The literature on higher-order epistasis has grown substantially since these early works. Any future versions of the present preprint will benefit from a more thorough contextual discussion of the literature on higher-order epistasis.

      (2) In the paper, the term 'sign epistasis' is used in a way that is different from its well-established meaning. (Pairwise) sign epistasis, in its standard usage, is said to occur when the effect of a mutation switches from beneficial to deleterious (or vice versa) when a mutation occurs at a different locus. The authors require a stronger condition, namely that the sum of the individual effects of two mutations should have the opposite sign from their joint effect. This is a sufficient condition for sign epistasis, but not a necessary one. The property studied by the authors is important in its own right, but it is not equivalent to sign epistasis.

      (3) The authors have looked for global epistasis in all 108 (9x12) mutations, out of which only 16 showed a correlation of R^2 > 0.4. 14 out of these 16 mutations were in the functionally important nucleotide positions. Based on this, the authors conclude that global epistasis is rare in this landscape, and further, that mutations in this landscape can be classified into one of two binary states - those that exhibit global epistasis (a small minority) and those that do not (the majority). I suspect, however, that a biologically significant binary classification based on these data may be premature. Unsurprisingly, mutational effects are stronger at the functional sites as seen in Figure 5 and Figure 2, which means that even if global epistasis is present for all mutations, a statistical signal will be more easily detected for the functionally important sites. Indeed, the authors show that the means of DFEs decrease linearly with background fitness, which hints at the possibility that a weak global epistatic effect may be present (though hard to detect) in the individual mutations. Given the high importance of the phenomenon of global epistasis, it pays to be cautious in interpreting these results. 

      (4) The study reports that synonymous mutations frequently change the nature of epistasis between mutations in other codons. However, it is unclear whether this should be surprising, because, as the authors have already noted, synonymous mutations can have an impact on cellular functions. The reader may wonder if the synonymous mutations that cause changes in epistatic interactions in a certain background also tend to be non-neutral in that background. Unfortunately, the fitness effect of synonymous mutations has not been reported in the paper. 

      (5) The authors find that DFEs of high-fitness genotypes tend to depend only on fitness and not on genetic composition. This is an intriguing observation, but unfortunately, the authors do not provide any possible explanation or connect it to theoretical literature. I am reminded of work by (Agarwala and Fisher, Theor. Popul. Biol., 2019) as well as (Reddy and Desai, eLife, 2023) where conditions under which the DFE depends only on the fitness have been derived. Any discussion of possible connections to these works could be a useful addition.  

      We thank the reviewer for the summary of our work and for highlighting both its strengths and areas for improvement. We have carefully considered the points raised and revised the manuscript accordingly. The revised version:

      (1) Clarifies the conceptual framework. We emphasize the distinction between background-dependent, higher-order epistasis and global nonlinearities. To avoid ambiguity, we have replaced the term “fluid” epistasis with higher-order epistasis throughout, in line with prior literature (e.g. Szendro et al., 2013; Weinreich et al., 2013). We now explicitly situate our results in the context of these studies and clarify our definitions of epistasis, correcting the earlier error where “strong sign epistasis” was used in place of “sign epistasis.”

      (2) Improves statistical rigor. We now incorporate replicate variance and statistical error criteria in place of arbitrary thresholds. This ensures that classification of epistasis reflects experimental precision rather than fixed, arbitrary cutoffs.

      (3) Expands treatment of synonymous mutations. We now explicitly analyze synonymous mutations, separating those that are neutral from those that are non-neutral. Our results show that non-neutral synonymous mutations are disproportionately responsible for altering epistatic interactions, while neutral synonymous mutations rarely do so. We also report the fitness effects of synonymous mutations directly and include new analyses showing that there is no correlation between the mean fitness effect of a synonymous mutation and the frequency with which it alters epistasis (Supplementary Fig. S11).

      These revisions strengthen both the rigor and the clarity of the manuscript. We hope they address the reviewer’s concerns and make the significance of our findings, particularly the site-resolved quantification of higher-order epistasis in the folA landscape, including in synonymous mutations, more apparent.

      Reviewing Editor Comments: 

      Key revision suggestions: 

      (1) Please quantify the impact of measurement noise on your conclusions, and perform statistical analysis to determine whether the observed differences of epistasis due to different backgrounds are statistically significant. 

      (2) Please investigate how your conclusions depend on the cutoffs, and consider choosing them based on statistical criteria. 

      (3) Please reconsider the possible role of global epistasis. In particular, the effect of bounds on fitness values. All reviewers are concerned that all claims, including about global epistasis, may be consistent with a simple null model where most low fitness genotypes are non-functional and variation in their fitness is simply driven by measurement noise. Please provide a convincing argument rejecting this model. 

      More generally, we recommend that you consider all suggestions by reviewers, including those about results, but also those about terminology and citing relevant works. 

      Thank you for your guidance. We have substantially revised the manuscript to incorporate the reviewers’ suggestions. In addition to addressing the three central issues raised, we have refined terminology, expanded the discussion of prior work, and clarified the presentation of our main results. We believe these changes significantly strengthen both the rigor and the impact of the study. We are grateful to the Reviewing Editor and reviewers for their constructive feedback.

      In the revised manuscript, we address the three major points as follows:

      (1) Quantifying measurement noise and statistical significance. We now use the average of six independent experimental runs for each genotype, together with the corresponding standard deviations, to explicitly quantify measurement uncertainty. Pairwise and higher-order epistasis are assessed relative to these error estimates, rather than against fixed thresholds. This ensures that differences across genetic backgrounds are statistically distinguishable from noise.

      (2) Replacing arbitrary cutoffs with statistical criteria. We have eliminated the use of arbitrary thresholds. Instead, classification of interactions (positive, negative, or neutral epistasis) is based on whether fitness differences exceed replicate variance. This approach scales naturally with measurement precision. While some results change quantitatively for high-fitness backgrounds and qualitatively for low-fitness backgrounds, our central conclusions remain robust.

      (3) Analysis of synonymous mutations. We now separately analyze synonymous mutations to test their role in altering epistasis. Our results show that there is no correlation between the average fitness effect of a synonymous mutation and the frequency with which it changes epistatic interactions.

      We have revised terminology for clarity (replacing “fluid” with higher-order epistasis) and updated the Discussion to place our work in the broader context of the literature on higher-order epistasis.

      Finally, we have rewritten the entire manuscript to improve clarity, refine the narrative flow, and ensure that the presentation more crisply reflects the subject of the study.

      Reviewer #1 (Recommendations for the authors): 

      MINOR COMMENTS 

      (1) Lines 102-107. Papkou's definition of non-functional genotypes makes sense since it is based on the fact that some genotypes are statistically indistinguishable in terms of fitness from mutants with premature stop codons in folA. It doesn't really matter whether to call them low fitness or non-functional, but it would be helpful to explain the basis for this distinction. 

      Thank you for raising this point. To maintain consistency with the original dataset and analysis, we retain Papkou et al.’s nomenclature and refer to these genotypes as “functional” or “non-functional.” 

      (2) Lines 111-112. I think the authors need to briefly explain here how they define the absence of epistasis. They do so in the Methods, but this information is essential and needs to be conveyed to the reader in the Results as well. 

      Thank you for the suggestion. We agree that this definition is essential for readers to follow the Results. In the revised manuscript, we have added a brief explanation at the start of the Results section clarifying how we define the absence of epistasis. Specifically, we now state that two mutations are considered non-epistatic when the observed fitness of the double mutant is statistically indistinguishable (within error of six replicates) from the additive expectation based on the single mutants. This ensures that the Results section is self-contained, while full details remain in the Methods.

      (3) Lines 142 and elsewhere. The authors introduce the qualifier "fluid" to describe the fact that the value or sign of pairwise epistasis changes across genetic backgrounds. I don't see a need for this new terminology, since it is already captured adequately by the term "higher-order epistasis". The epistasis field is already rife with jargon, and I would prefer if new terms were introduced only when absolutely necessary. 

      Thank you for this helpful suggestion. We agree that introducing new terminology is unnecessary here. In the revised manuscript, we have replaced the term “fluid” epistasis with “higher-order epistasis” throughout, to align with established usage and avoid adding jargon.

      (4) Figure 6. I don't think this is the best way of showing that the pivot points are clustered. A histogram would be more appropriate and would take less space; it would also allow the authors to display a null distribution to demonstrate that this clustering is indeed surprising.

      (5) Lines 320-321. Mann-Whitney U tests whether one distribution is systematically shifted up or down relative to the other. Please change the language here. It looks like the authors also performed the Kolmogorov-Smirnov test, which is appropriate, but it doesn't look like the results are reported anywhere. Please report.

      (6) Lines 330-334. The fact that HF genotypes seem to have more similar DFEs than LF genotypes is somewhat counterintuitive. Could this be an artifact of the fact that any two random HF genotypes are more similar to each other than any two randomly sampled LF genotypes? 

      (7) Lines 427. The sentence "The set of these selected variants are assigned their one hamming distance neighbours to construct a new 𝑛-base sequence space" is confusing. I think it is pretty clear how to construct a n-base sequence space, and this sentence adds more confusion than it removes. 

      We now start the results section of the manuscript with a brief description of how each type of epistasis is defined. Specifically, we now state that two mutations are considered non-epistatic when the observed fitness of the double mutant is statistically indistinguishable (within the error of six replicates) from the additive expectation based on the single mutants. This ensures that the Results section is self-contained, while full details remain in the Methods.

      Finally, we concur that the identified sentence was unnecessary and potentially confusing; it has been removed from the revised manuscript to improve clarity. In fact, we have rewritten the entire manuscript for better flow and readability.

      Reviewer #2 (Recommendations for the authors): 

      (1) Supplementary Figure S2A and S3 seem to be the same. 

      (3) The classification scheme for reciprocal sign/single sign/other sign epistasis differs from convention and should be made more explicit or renamed. 

      (4) Re the claim that high and low fitness backgrounds have different frequencies of the various types of epistasis: 

      Are the differences in the frequency distributions of the various types of epistasis between high- and low-fitness backgrounds statistically significant? It seems that they follow similar general patterns, and the sample size is much smaller for high-fitness backgrounds, so more variance in their distributions is expected.

      Does bounding of the fitness measurements play a role in generating the differences in the types of epistasis seen in high- vs. low-fitness backgrounds? If many variants are at the lower bound of the fitness assay, then positive epistasis might simply be less detectable for these backgrounds (which seems to be the biggest difference between high- and low-fitness backgrounds).

      (5) In Figure 4B, points are not independent, because the mutation effects are calculated for all mutations in all backgrounds, rather than with reference to a single background or fluorescence value. The same mutations are therefore counted many times. 

      (6) It is not clear how the "pivot growth rate" was calculated or what the importance of this metric is. 

      (7) In the introduction, the justification for reanalyzing the Papkou et al dataset in particular is not clear. 

      (8) Epistasis at the nucleotide level is expected because of the genetic code: fitness and function are primarily affected by amino acid changes, and nucleotide mutations will affect amino acids depending on the state at other nucleotide sites in the same codon. For the most part, this is not explicitly taken account of in the paper. I recommend separating apparent epistasis due to the genetic code from that attributable to dependence among codons. 

      Thank you for noting this. Figure S2A shows results for high-fitness peaks only, whereas Figure S3 shows results for all peaks across the landscape. We have now made this distinction explicit in the figure legends and main text of the revised manuscript. 

      In the revised analysis, peaks are defined using the average fitness across six experimental replicates along with the corresponding standard deviation. Each genotype is compared with all single-step neighbors, and it is classified as a peak only if its mean fitness is significantly higher than all neighbors (p < 0.05). This procedure explicitly accounts for measurement error and replaces the arbitrary thresholding used previously. Full details are now described in the Methods.
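
      A simplified sketch of this neighbor comparison (our own illustrative code; a z-score criterion on combined standard errors is used here as a stand-in for the significance test described in the Methods):

```python
import math
import statistics as st

def is_peak(focal_reps, neighbor_reps_list, z=1.96):
    """Error-aware peak call (illustrative sketch, not the exact test in
    the paper): a genotype counts as a peak only if its replicate-mean
    fitness exceeds that of every one-mutation neighbor by more than z
    combined standard errors.

    focal_reps: replicate fitness values of the focal genotype.
    neighbor_reps_list: one list of replicate values per neighbor.
    """
    mu_f = st.mean(focal_reps)
    sem_f = st.stdev(focal_reps) / math.sqrt(len(focal_reps))
    for nbr in neighbor_reps_list:
        mu_n = st.mean(nbr)
        sem_n = st.stdev(nbr) / math.sqrt(len(nbr))
        # require a significant fitness advantage over this neighbor
        if mu_f - mu_n <= z * math.sqrt(sem_f**2 + sem_n**2):
            return False
    return True
```

      A genotype whose mean advantage over some neighbor lies within combined error is not called a peak, which is how replicate noise prunes the spurious peaks discussed above.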

      To avoid confusion, we now state our definitions explicitly at the start of the analysis and have corrected the definition in the text: we define sign epistasis as occurring when the effect of at least one mutation switches from beneficial to deleterious (or vice versa) depending on the genetic background.
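
      The two conditions can be contrasted in a minimal sketch (illustrative helper functions of our own, not code from the paper):

```python
def has_sign_epistasis(f_wt, f_a, f_b, f_ab):
    """Standard sign epistasis: the effect of at least one mutation
    switches sign depending on the state at the other site."""
    da_without_b, da_with_b = f_a - f_wt, f_ab - f_b   # effect of A alone / with B
    db_without_a, db_with_a = f_b - f_wt, f_ab - f_a   # effect of B alone / with A
    return (da_without_b * da_with_b < 0) or (db_without_a * db_with_a < 0)

def has_strong_sign_epistasis(f_wt, f_a, f_b, f_ab):
    """Stronger condition used in the earlier draft: the joint effect has
    the opposite sign from the sum of the individual effects."""
    joint = f_ab - f_wt
    additive = (f_a - f_wt) + (f_b - f_wt)
    return joint * additive < 0
```

      For instance, two individually beneficial mutations (f_wt = 1.0, f_a = f_b = 1.2) whose double mutant reaches only 1.1 show standard sign epistasis (each mutation is deleterious in the other's background) but not the stronger condition, since the joint effect (+0.1) and the additive sum (+0.4) share a sign.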

      We have clarified our motivation in the Introduction. The Papkou et al. dataset is the most comprehensive experimental map of a complete 9-bp region of folA and provides six independent replicates, making it uniquely suited for testing hypotheses about background-dependent epistasis. Importantly, Papkou et al. based their conclusions on a single run, whereas our reanalysis incorporates replicate means and variances, leading to substantive differences—for example, a reduction in reported peaks from 514 to 127. By recalibrating the analysis, we provide a more rigorous account of this landscape and highlight how methodological choices affect conclusions.

      We also agree that some nucleotide-level epistasis reflects the structure of the genetic code (i.e., codon degeneracy and context-dependence of amino acid substitutions). In the revised manuscript, we explicitly separate epistasis attributable to codon structure from epistasis arising among codons. For example, synonymous mutations that alter epistasis within codons are treated separately from those affecting interactions across codons, and this distinction is now clearly indicated in the Results.

      Reviewer #3 (Recommendations for the authors): 

      (1) The analysis of peak density and accessibility in the paragraph starting on line 96 seems a bit out of context. Its connection with the various forms of epistasis treated in the rest of the paper is unclear. 

      (2) As mentioned in the Public Review, the term 'sign epistasis' has been used in a non-standard way. My suggestion would be to use a different term. Even a slightly modified term, such as "strong sign epistasis", should help to avoid any confusion. 

      (3)  mentioned in the public review that it is not clear whether the synonymous mutations that change the type of epistasis also tend to be non-neutral. This issue could be addressed by computing, for example, the fitness effects of all synonymous mutations for backgrounds and mutation pairs where a switch in epistasis occurs, and comparing it with fitness effects where no such switch occurs. 

      (4) Do the authors have any proposal for why synonymous mutations seem to cause more frequent changes in epistasis in low-fitness backgrounds? Related to this, is there any systematic difference between the types of switch caused by synonymous mutations in the low- versus high-fitness backgrounds? 

      (5) It is unclear exactly how the pivot points were determined, especially since the data for many mutations is noisy. The protocol should be provided in the Methods section. 

      (6) Line 303: possible typo, "accurate" --> "inaccurate". 

      (7) The value of Delta used for the "phenotypic DFE" has not been mentioned in the main text (including Methods).

      We agree that the connection needed to be clearer. In the revised manuscript, we (i) relocate and retitle this material as a brief “Landscape overview” preceding the epistasis analyses, (ii) explicitly link multi-peakedness and path accessibility to epistasis (e.g., multi-peak structure implies the presence of sign/reciprocal-sign epistasis; accessibility is shaped by background-dependent effects), and (iii) move derivations to the Supplement. We also recomputed peak density and accessibility using replicate-averaged fitness with replicate SDs, so the overview and downstream epistasis sections now use a single, error-aware landscape (updated in Figs. 1–3, with cross-references in the text).

      We have aligned our terminology and now state definitions upfront. 

      After replacing fixed cutoffs with replicate-based error criteria, switches are more frequent in high-fitness backgrounds (Fig. 3). Mechanistically, near the lower fitness bound, deleterious effects are masked (global nonlinearity), reducing apparent switching. Functional/high-fitness backgrounds allow both beneficial and deleterious outcomes, so background-dependent (higher-order) interactions manifest more readily. Switch types also vary by background fitness: high-fitness backgrounds show more sign/strong-sign switches, whereas low-fitness backgrounds show mostly magnitude reclassifications (Fig. 3C; Supplement Fig. Sx).

      Finally, we corrected a typo by replacing “accurate” with “inaccurate” and now define Δ (equal to 0.05) in the main text (in Results and Figure 8 caption).

    1. Nothing matching that URL

      Can we somehow set this section so once the timer hits 0, it either goes back to the calendar going straight across (probably difficult but preferred) or the timer disappears (second preference)?

    1. Reviewer #2 (Public review):

      The paper by Makarov et al. describes the software tool called DendroTweaks, intended for examination of multi-compartmental biophysically detailed neuron models. It offers extensive capabilities for working with very complex distributed biophysical neuronal models and should be a useful addition to the growing ecosystem of tools for neuronal modeling.

      Strengths

      • This Python-based tool allows for visualization of a neuronal model's compartments.

      • The tool works with morphology reconstructions in the widely used .swc and .asc formats.

      • It can support many neuronal models using the NMODL language, which is widely used for neuronal modeling.

      • It permits one to plot the properties of linear and non-linear conductances in every compartment of a neuronal model, facilitating examination of model's details.

      • DendroTweaks supports manipulation of the model parameters and morphological details, which is important for exploration of the relations of the model composition and parameters with its electrophysiological activity.

      • The paper is very well written - everything is clear, and the capabilities of the tool are described and illustrated with great attention to detail.

      Weaknesses

      • Not a really big weakness, but it would be really helpful if the authors showed how the performance of their tool scales. This can be done for an increasing number of compartments - how long does it take to carry out typical procedures in DendroTweaks, on a given hardware, for a cell model with 100 compartments, 200, 300, and so on? This information will be quite useful to understand the applicability of the software.

      Let me also add here a few suggestions (not weaknesses, but something that can be useful, and if the authors can easily add some of these for publication, that would strongly increase the value of the paper).

      • It would be very helpful to add functionality to read major formats in the field, such as NeuroML and SONATA.

      • Visualization is available as a static 2D projection of the cell's morphology. It would be nice to implement 3D interactive visualization.

      • It is nice that DendroTweaks can modify the models, such as revising the radii of the morphological segments or ionic conductances. It would be really useful then to have the functionality for writing the resulting models into files for subsequent reuse.

      • If I didn't miss something, it seems that DendroTweaks supports allocation of groups of synapses, where all synapses in a group receive the same type of Poisson spike train. It would be very useful to provide more flexibility. One option is to leverage the SONATA format, which has ample functionality for specifying such diverse inputs.

      • "Each session can be saved as a .json file and reuploaded when needed" - do these files contain the whole history of the session or the exact snapshot of what is visualized when the file is saved? If the latter, which variables are saved, and which are not? Please clarify.

      Comments on revisions:

      In this revised version of the paper, the authors addressed all my comments. While many of the suggestions were addressed by textual changes in the manuscript or an explanation in the response to the reviewers (rather than adding substantial new functionality to the tool), DendroTweaks in its current updated state does represent an advanced and useful tool. Further extensions can be added as the development of the tool continues, in interaction with the community.

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      Dendrotweaks provides its users with a solid tool to implement, visualize, tune, validate, understand, and reduce single-neuron models that incorporate complex dendritic arbors with differential distribution of biophysical mechanisms. The visualization of dendritic segments and biophysical mechanisms therein provide users with an intuitive way to understand and appreciate dendritic physiology.

      Strengths:

      (1) The visualization tools are simplified, elegant, and intuitive.

      (2) The ability to build single-neuron models using simple and intuitive interfaces.

      (3) The ability to validate models with different measurements.

      (4) The ability to systematically and progressively reduce morphologically-realistic neuronal models.

      Weaknesses:

      (1) Inability to account for neuron-to-neuron variability in structural, biophysical, and physiological properties in the model-building and validation processes.

      We agree with the reviewer that it is important to account for neuron-to-neuron variability. The core approach of DendroTweaks, and its strongest aspect, is the interactive exploration of how morpho-electric parameters affect neuronal activity. In light of this, variability can be achieved through the interactive updating of the model parameters with widgets. In a sense, by adjusting a widget (e.g., channel distribution or kinetics), a user ends up with a new instance of a cell in the parameter space and receives almost real-time feedback on how this change affected neuronal activity. This approach is much simpler than implementing complex optimization protocols for different parameter sets, which would detract from the interactivity aspect of the GUI. In its revised version, DendroTweaks also accounts for neuron-to-neuron morphological variability, as channel distributions are now based on morphological domains (rather than the previous segment-specific approach). This makes it possible to apply the same biophysical configuration across various morphologies. Overall, both biophysical and morphological variability can be explored within DendroTweaks. 

      (2) Inability to account for the many-to-many mapping between ion channels and physiological outcomes. Reliance on hand-tuning provides a single biased model that does not respect pronounced neuron-to-neuron variability observed in electrophysiological measurements.

      We acknowledge the challenge of accounting for degeneracy in the relation between ion channels and physiological outcomes and the importance of capturing neuron-to-neuron variability. One possible way to address this, as we mention in the Discussion, is to integrate automated parameter optimization algorithms alongside the existing interactive hand-tuning with widgets. In its revised version, DendroTweaks can integrate with Jaxley (Deistler et al., 2024) in addition to NEURON. The models created in DendroTweaks can now be run with Jaxley (although not all types of models, see the limitations in the Discussion), and their parameters can be optimized via automated and fast gradient-based parameter optimization, including optimization of heterogeneous channel distributions. In particular, a key advantage of integrating Jaxley with DendroTweaks was its NMODL-to-Python converter, which significantly reduced the need to manually re-implement existing ion channel models for Jaxley (see here: https://dendrotweaks.readthedocs.io/en/latest/tutorials/convert_to_jaxley.html).

      (1) Michael Deistler, Kyra L. Kadhim, Matthijs Pals, Jonas Beck, Ziwei Huang, Manuel Gloeckler, Janne K. Lappalainen, Cornelius Schröder, Philipp Berens, Pedro J. Gonçalves, Jakob H. Macke. Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics. bioRxiv 2024.08.21.608979; doi: https://doi.org/10.1101/2024.08.21.608979

      (3) Lack of a demonstration on how to connect reduced models into a network within the toolbox.

      Building a network of reduced models is an exciting direction, yet beyond the scope of this manuscript, whose primary goal is to introduce DendroTweaks and highlight its capabilities. DendroTweaks is designed for single-cell modeling, aiming to cover its various aspects in great detail. Of course, we expect refined single-cell models, both detailed and simplified, to be further integrated into networks. But this does not need to occur within DendroTweaks. We believe this network-building step is best handled by dedicated network simulation platforms. To facilitate the network-building process, we extended the exporting capabilities of DendroTweaks. To enable the export of reduced models in DendroTweaks’s modular format, as well as in plain simulator code, we implemented a method to fit the resulting parameter distributions to analytical functions (e.g., polynomials). This approach provided a compact representation, requiring a few coefficients to be stored in order to reproduce a distribution, independently of the original segmentation. The reduced morphologies can be exported as SWC files, standardized ion channel models as MOD files, and channel distributions as JSON files. Moreover, plain NEURON code (Python) to instantiate a cell class can be automatically generated for any model, including the reduced ones. Finally, to demonstrate how these exported models can be integrated into larger simulations, we implemented a "toy" network model in a Jupyter notebook included as an example in the GitHub repository. We believe that these changes greatly facilitate the integration of DendroTweaks-produced models into networks while also allowing users to run these networks on their favorite platforms.
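      As a toy illustration of the coefficient-based export described above, a channel-density gradient can be fit with a low-order polynomial so that only a handful of coefficients need to be stored, independently of the original segmentation. The data and polynomial degree below are illustrative, not DendroTweaks' actual export code:

```python
import numpy as np

# Hypothetical per-segment conductance values (S/cm^2) as a function of
# path distance from the soma (um) -- illustrative data, not a real model.
distance = np.linspace(0.0, 400.0, 81)
gbar = 0.02 + 5e-5 * distance + 2e-8 * distance**2  # a smooth gradient

# Fit a low-order polynomial: only these coefficients need to be stored.
coeffs = np.polyfit(distance, gbar, deg=2)

# The analytical form can be re-evaluated on a *different* segmentation.
new_distance = np.linspace(0.0, 400.0, 33)
gbar_reconstructed = np.polyval(coeffs, new_distance)
```

      Because the stored representation is a function of distance rather than a per-segment table, the same distribution transfers across discretizations of the reduced morphology.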

      (4) Lack of a set of tutorials, which is common across many "Tools and Resources" papers, that would be helpful in users getting acquainted with the toolbox.

      This is an important point that we believe has been addressed fully in the revised version of the tool and manuscript. As previously mentioned, the lack of documentation was due to the software's early stage. We have now added comprehensive documentation, which is available at https://dendrotweaks.readthedocs.io. This extensive material includes API references, 12 tutorials, 4 interactive Jupyter notebooks, and a series of video tutorials, and it is regularly updated with new content. Moreover, the toolbox's GUI with example models is available through our online platform at https://dendrotweaks.dendrites.gr.  

      Reviewer #2 (Public review):

      The paper by Makarov et al. describes the software tool called DendroTweaks, intended for the examination of multi-compartmental biophysically detailed neuron models. It offers extensive capabilities for working with very complex distributed biophysical neuronal models and should be a useful addition to the growing ecosystem of tools for neuronal modeling.

      Strengths

      (1) This Python-based tool allows for visualization of a neuronal model's compartments.

      (2) The tool works with morphology reconstructions in the widely used .swc and .asc formats.

      (3) It can support many neuronal models using the NMODL language, which is widely used for neuronal modeling.

      (4) It permits one to plot the properties of linear and non-linear conductances in every compartment of a neuronal model, facilitating examination of the model's details.

      (5) DendroTweaks supports manipulation of the model parameters and morphological details, which is important for the exploration of the relations of the model composition and parameters with its electrophysiological activity.

      (6) The paper is very well written - everything is clear, and the capabilities of the tool are described and illustrated with great attention to detail.

      Weaknesses

      (1) Not a really big weakness, but it would be really helpful if the authors showed how the performance of their tool scales. This can be done for an increasing number of compartments - how long does it take to carry out typical procedures in DendroTweaks, on a given hardware, for a cell model with 100 compartments, 200, 300, and so on? This information will be quite useful to understand the applicability of the software.

      DendroTweaks functions as a layer on top of a simulator. As a result, its performance scales in the same way as for a given simulator. The GUI currently displays the time taken to run a simulation (e.g., in NEURON) at the bottom of the Simulation tab in the left menu. While Bokeh-related processing and rendering also consume time, this is not as straightforward to measure. It is worth noting, however, that this time is short and approximately equivalent to rendering the corresponding plots elsewhere (e.g., in a Jupyter notebook), and thus adds negligible overhead to the total simulation time. 
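      For users who want to quantify this themselves, the overhead can be measured with a simple wall-clock wrapper around the simulation call; the workload below is a stand-in for an actual NEURON or Jaxley run, not DendroTweaks code:

```python
import time

def timed(run):
    """Return a callable's result together with its wall-clock time in seconds."""
    t0 = time.perf_counter()
    result = run()
    return result, time.perf_counter() - t0

# Stand-in workload; replace the lambda with the actual simulation call.
result, elapsed = timed(lambda: sum(i * i for i in range(100_000)))
print(f"simulation took {elapsed * 1000:.1f} ms")
```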

      (2) Let me also add here a few suggestions (not weaknesses, but something that can be useful, and if the authors can easily add some of these for publication, that would strongly increase the value of the paper).

      (3) It would be very helpful to add functionality to read major formats in the field, such as NeuroML and SONATA.

      We agree with the reviewer that support for major formats will substantially improve the toolbox, ensuring the reproducibility and reusability of the models. While integration with these formats has not been fully implemented, we have taken several steps to ensure elegant and reproducible model representation. Specifically, we have increased the modularity of model components and developed a custom compact data format tailored to single-cell modeling needs. We used a JSON representation inspired by the Allen Cell Types Database schema, modified to account for non-constant distributions of the model parameters. We have transitioned from a representation of parameter distributions dependent on specific segmentation graphs and sections to a more generalized domain-based distribution approach. In this revised methodology, segment groups are no longer explicitly defined by segment identifiers, but rather by specification of anatomical domains and conditional expressions (e.g., “select all segments in the apical domain with the maximum diameter < 0.8 µm”). Additionally, we have implemented the export of experimental protocols into CSV and JSON files, where the JSON files contain information about the stimuli (e.g., synaptic conductance, time constants), and the CSV files store locations of recording sites and stimuli. These features contribute toward a higher-level, structured representation of models, which we view as an important step toward eventual compatibility with standard formats such as NeuroML and SONATA. We have also initiated a two-way integration between DendroTweaks and SONATA. We developed a converter from DendroTweaks to SONATA that automatically generates SONATA files to reproduce models created in DendroTweaks. Additionally, support for the DendroTweaks JSON representation of biophysical properties will be added to the SONATA data format ecosystem, enabling models with complex dendritic distributions of channels. 
This integration is still in progress and will be included in the next version of DendroTweaks. While full integration with these formats is a goal for future releases, we believe the current enhancements to modularity and exportability represent a significant step forward, providing immediate value to the community.
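      A sketch of the domain-plus-condition idea described above (the JSON field names and selection helper below are illustrative, not DendroTweaks' actual schema):

```python
import json

# Illustrative group definition: segments are selected by anatomical domain
# plus a conditional expression, instead of explicit segment identifiers.
config = json.loads("""
{
  "groups": [
    {"name": "thin_apical", "domain": "apical",
     "condition": "diam < 0.8", "mechanism": "Kv", "gbar": 0.005}
  ]
}
""")

# Hypothetical segments as (domain, diameter in um) pairs.
segments = [("apical", 0.5), ("apical", 1.2), ("basal", 0.5), ("apical", 0.7)]

def select(group, segments):
    """Evaluate the group's condition against each segment's diameter."""
    # eval is restricted to the 'diam' variable (illustration, not production).
    return [s for s in segments
            if s[0] == group["domain"]
            and eval(group["condition"], {"__builtins__": {}}, {"diam": s[1]})]

matched = select(config["groups"][0], segments)
print(matched)  # the two thin apical segments
```

      Because membership is defined by predicates rather than segment identifiers, the same JSON file applies unchanged to a different morphology or segmentation.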

      (4) Visualization is available as a static 2D projection of the cell's morphology. It would be nice to implement 3D interactive visualization.

      We offer an option to rotate a cell around the Y axis using a slider under the plot. This is a workaround, as implementing a true 3D visualization in Bokeh would require custom Bokeh elements, along with external JavaScript libraries. It's worth noting that there are already specialized tools available for 3D morphology visualization. In light of this, while a 3D approach is technically feasible, we advocate for a different method. The core idea of DendroTweaks’ morphology exploration is that each section is “clickable”, allowing its geometric properties to be examined in a 2D "Section" view. Furthermore, we believe the "Graph" view presents the overall cell topology and distribution of channels and synapses more clearly.

      (5) It is nice that DendroTweaks can modify the models, such as revising the radii of the morphological segments or ionic conductances. It would be really useful then to have the functionality for writing the resulting models into files for subsequent reuse.

      This functionality is fully available in local installations. Users can export JSON files with channel distributions and SWC files after morphology reduction through the GUI. Please note that for resource management purposes, file import/export is disabled on the public online demo. However, it can be enabled upon local installation by modifying the configuration file (app/default_config.json). In addition, it is now possible to generate plain NEURON (Python) code to reproduce a given model outside the toolbox (e.g., for network simulations). Moreover, it is now possible to export the simulation protocols as CSV files for locations of stimuli and recordings and JSON files for stimuli parameters.

      (6) If I didn't miss something, it seems that DendroTweaks supports the allocation of groups of synapses, where all synapses in a group receive the same type of Poisson spike train. It would be very useful to provide more flexibility. One option is to leverage the SONATA format, which has ample functionality for specifying such diverse inputs.

      Currently, each population of “virtual” neurons that form synapses on the detailed cell shares the same set of parameters for both biophysical properties of synapses (e.g., reversal potential, time constants) and presynaptic "population" activity (e.g., rate, onset). The parameter that controls an incoming Poisson spike train is the rate, which is indeed shared across all synapses in a population. Unfortunately, the current implementation lacks the capability to simulate complex synaptic inputs with heterogeneous parameters across individual synapses or those following non-uniform statistical distributions (the present implementation is limited to random uniform distributions). We have added this information in the Discussion (3. Discussion - 3.2 Limitations and future directions - ¶.5) to make users aware of the limitations. As it requires a substantial amount of additional work, we plan to address such limitations in future versions of the toolbox.
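      The current per-population scheme, and the heterogeneous per-synapse variant it lacks, can both be sketched with NumPy (a generic illustration, not the toolbox's internal code):

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_train(rate_hz, duration_ms, rng):
    """Homogeneous Poisson spike times (ms) from exponential inter-spike intervals."""
    n = max(10, int(3 * rate_hz * duration_ms / 1000.0))  # draw with slack
    spike_times = np.cumsum(rng.exponential(1000.0 / rate_hz, size=n))
    return spike_times[spike_times < duration_ms]

# Current behavior: one shared rate for every synapse in the population.
population = [poisson_train(10.0, 1000.0, rng) for _ in range(5)]

# Heterogeneous variant: draw an individual rate per synapse instead.
rates = rng.uniform(5.0, 20.0, size=5)
hetero = [poisson_train(r, 1000.0, rng) for r in rates]
```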

      (7) "Each session can be saved as a .json file and reuploaded when needed" - do these files contain the whole history of the session or the exact snapshot of what is visualized when the file is saved? If the latter, which variables are saved, and which are not? Please clarify.

      In the previous implementation, these files captured the exact snapshot of the model's latest state. In the new version, we adopted a modular approach where the biophysical configuration (e.g., channel distributions) and stimulation protocols are exported to separate files. This allows the user to easily load and switch the stimulation protocols for a given model. In addition, the distribution of parameters (e.g., channel conductances) is now based on the morphological domains and is agnostic of the exact morphology (i.e., sections and segments), which allows the same JSON files with biophysical configurations to be reused across multiple similar morphologies. This also allows for easy file exchange between the GUI and the standalone version.

      Joint recommendations to Authors:

      The reviewers agreed that the paper is well written and that DendroTweaks offers a useful collection of tools to explore models of single-cell biophysics. However, the tooling as provided with this submission has critical limitations in the capabilities, accessibility, and documentation that significantly limit the utility of DendroTweaks. While we recognize that it is under active development and features may have changed already, we can only evaluate the code and documentation available to us here.

      We thank the reviewers for their positive evaluation of the manuscript and express our sincere appreciation for their feedback. We acknowledge the limitations they have pointed out and have addressed most of these concerns in our revised version.

      In particular, we would emphasize:

      (1) While the features may be rich, the documentation for either a user of the graphical interface or the library is extremely sparse. A collection of specific tutorials walking a GUI user through simple and complex model examples would be vital for genuine uptake. As one category of the intended user is likely to be new to computational modeling, it would be particularly good if this documentation could also highlight known issues that can arise from the naive use of computational techniques. Similarly, the library aspect needs to be documented in a more standard manner, with docstrings, an API function list, and more didactic tutorials for standard use cases.

      DendroTweaks now features comprehensive documentation. The standalone Python library code is well-documented with thorough docstrings. The overall code modularity and readability have improved. The documentation is created using the widely adopted Sphinx generator, making it accessible for external contributors, and it is available via ReadTheDocs https://dendrotweaks.readthedocs.io/en/latest/index.html. The documentation provides a comprehensive set of tutorials (6 basic, 6 advanced) covering all key concepts and workflows offered by the toolbox. Interactive Jupyter notebooks are included in the documentation, along with the quick start guide. All example models also have corresponding notebooks that allow users to build the model from scratch.

      The toolbox has its own online platform, where a quick-start guide for the GUI is available at https://dendrotweaks.dendrites.gr/guide.html. We have created video tutorials for the GUI covering the basic use cases. Additionally, we have added tips and instructions alongside widgets in the GUI, as well as a status panel that displays application status, warnings, and other information. Finally, we plan to familiarize the community with the toolbox by organizing online and in-person tutorials, such as the one recently held at the CNS*2025 conference (https://cns2025florence.sched.com/event/25kVa/building-intuitive-and-efficient-biophysicalmodels-with-jaxley-and-dendrotweaks). Moreover, the toolbox has already been used successfully for training young researchers during the Taiwan NeuroAI 2025 Summer School, founded by Ching-Lung Hsu. The feedback was very positive.

      (2) The paper describes both a GUI web app and a Python library. However, the code currently mixes these two in a way that largely makes sense for the web app but makes it very difficult to use the library aspect. Refactoring the code to separate apps and libraries would be important for anyone to use the library as well as allowing others to host their own DendroTweak servers. Please see the notes from the reviewing editor below for more details.

      The code in the previous `app/model` folder, responsible for the core functionality of the toolbox, has been extensively refactored and extended, and separated into a standalone library. The library is included in the Python package index (PyPI, https://pypi.org/project/dendrotweaks).

      Notes from the Reviewing Editor Comments (Recommendations for the authors):

      (1) While one could import morphologies and use a collection of ion channel models, details of synapse groups and stimulation approaches appeared to be only configurable manually in the GUI. The ability to save and load full neuron and simulation states would be extremely useful for reproducibility and sharing data with collaborators or as an interactive data product with a publication. There is a line in the text about saving states as json files (also mentioned by Reviewer #2), but I could see no such feature in the version currently online.

      We decided to reserve the online version for demonstration and educational purposes, with more example models being added over time. However, this functionality is available upon local installation of the app (after enabling it in the ‘default_config.json’ file in the root directory of the app). We’ve adopted a modular model representation that stores morphology, channel models, biophysical parameters, and stimulation protocols separately.

      (2) Relatedly, GUI exploration of complex data is often a precursor to a more automated simulation run. An easy mechanism to go from a user configuration to scripting would be useful to allow the early strength of GUIs to feed into the power of large-scale scripting.

      Any model could be easily exported to a modular DendroTweaks representation and later imported either in the GUI or in the standalone version programmatically. This ensures a seamless transition between the two use cases.

      (3) While the paper discusses DendroTweaks as both a GUI and a python library, the zip file of code in the submission is not in good form as a library. Back-end library code is intermingled with front-end web app code, which limits the ability to install the library from a standard python interface like PyPI. API documentation is also lacking. Functions tend to not have docstrings, and the few that do, do not follow typical patterns describing parameters and types.

      As stated above, all these issues have been resolved in the new version of the toolbox. The library code is now housed in a separate repository https://github.com/Poirazi-Lab/DendroTweaks and included in PyPI https://pypi.org/project/dendrotweaks. The classes and public methods follow Numpy-style docstrings, and the API reference is available in the documentation: https://dendrotweaks.readthedocs.io/en/latest/genindex.html.

      (4) Library installation is very difficult. The requirements are currently a lockfile, fully specifying exact versions of all dependencies. This is exactly correct for web app deployment to maintain consistency, but is not feasible in the context of libraries where you want to have minimal impact on a user's environment. Refactoring the library from the web app is critical for making DendroTweaks usable in both forms described in the paper.

      The lockfile makes installation more or less impossible on computer setups other than that of the author. Needless to say, this is not acceptable for a tool, and I would encourage the authors to ask other people to attempt to install their code as they describe in the text. For example, attempting to create a conda environment from the environment.yml file on an M1 MacBook Pro failed because it could not find several requirements. I was able to get it to install within a Linux docker image with the x86 platform specified, but this is not generally viable. To make this be the tool it is described as in text, this must be resolved. A common pattern that would work well here is to have a requirements lockfile and Docker image for the web app that imports a separate, more minimally restrictive library package with that could be hosted on PyPI or, less conveniently, through conda-forge.

      The installation of the standalone library is now straightforward via pip install dendrotweaks. On the Windows platform, however, manual installation of NEURON is required, as described in the official NEURON documentation: https://nrn.readthedocs.io/en/8.2.6/install/install_instructions.html#windows.

      (5) As an aside, to improve potential uptake, the authors might consider an MIT-style license rather than the GNU Public License unless they feel strongly about the GPL. Many organizations are hesitant to build on GPL software because of the wide-ranging demands it places on software derived from or using GPL code.

      We thank the editor for this suggestion. We are considering changing the license to MPL 2.0. It would maintain copyleft restrictions only on the package files while allowing end-users to freely choose their own license for any derived work, including the models, generated data files, and code that simply imports and uses our package.

      Reviewer #1 (Recommendations for the authors):

      (1) Abstract: Neurons rely on the interplay between dendritic morphology and ion channels to transform synaptic inputs into a sequence of somatic spikes. Technically, this would have to be morphology, ion channels, pumps, transporters, exchangers, buffers, calcium stores, and other molecules. For instance, if the calcium buffer concentration is large, then there would be less free calcium for activating the calcium-activated potassium channels. If there are different chloride co-transporters - NKCC vs. KCC - expressed in the neuron or different parts of the neuron, that would alter the chloride reversal for all the voltage- or ligand-gated chloride channels in the neuron. So, while morphology and ion channels are two important parts of the transformation, it would be incorrect to ignore the other components that contribute to the transformation. The statement might be revised to make these two components as two critical components.

      The phrase “Two critical components” was added, as suggested by the reviewer.

      (2) Section 2.1 - The overall GUI looks intuitive and simple.

      (3) Section 2.2

      (a) The Graph view of morphology, especially accounting for the specific d_lambda is useful.

      (b) "Note that while microgeometry might not significantly affect the simulation at a low spatial resolution (small number of segments) due to averaging, it can introduce unexpected cell behavior at a higher level of spatial discretization."

      It might be good to warn the users that the compartmentalization and error analyses are with reference to the electrical lambda. If users have to account for calcium microdomains, these analyses wouldn't hold given the 2 orders of magnitude differences between the electrical and the calcium lambdas (e.g., Zador and Koch, J Neuroscience, 1994). Please sensitize users that the impact of active dendrites in regulating calcium microdomains and signaling is critical when it comes to plasticity models in morphologically realistic structures.

      We thank the reviewer for this important point. We have clarified in the text that our spatial discretization specifically refers to the electrical length constant. We acknowledge that electrical and chemical processes operate on fundamentally different spatial and temporal scales, which requires special consideration when modeling phenomena like synaptic plasticity. We have sensitized users about this distinction. However, we do not address such examples in the manuscript, thus leaving the detailed discussion of non-electrical compartmentalization beyond the scope of this work.
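      For concreteness, the electrical discretization in question follows NEURON's standard d_lambda rule: each section is split into an odd number of segments no longer than a fraction d_lambda of the AC length constant at 100 Hz. A self-contained sketch (the default parameter values are illustrative):

```python
import math

def lambda_f(freq_hz, diam_um, ra_ohm_cm, cm_uf_cm2):
    """AC length constant (um) at a given frequency, per NEURON's d_lambda rule."""
    return 1e5 * math.sqrt(diam_um / (4 * math.pi * freq_hz * ra_ohm_cm * cm_uf_cm2))

def nseg_d_lambda(length_um, diam_um, ra_ohm_cm=100.0, cm_uf_cm2=1.0,
                  freq_hz=100.0, d_lambda=0.1):
    """Odd segment count so each segment spans at most d_lambda * lambda_f(100 Hz)."""
    lam = lambda_f(freq_hz, diam_um, ra_ohm_cm, cm_uf_cm2)
    return int((length_um / (d_lambda * lam) + 0.9) / 2) * 2 + 1

# A 500-um-long, 1-um-diameter dendrite:
print(nseg_d_lambda(500.0, 1.0))  # 19 for these parameters
```

      The calcium length constant is roughly two orders of magnitude smaller, so a grid that satisfies this electrical criterion can still be far too coarse for calcium microdomains.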

      (c) I am not very sure if the "smooth" tool for diameters that is illustrated is useful. Users shouldn't consider real variability in morphology as artifacts of reconstruction. As mentioned above, while this might not be an issue with electrical compartmentalization, calcium compartmentalization will severely be affected by small changes in morphology. Any model that incorporates calcium-gated channels should appropriately compartmentalize calcium. Without this, the spread of activation of calcium-dependent conductances would be an overestimate. Even small changes in cellular shape and curvature can have large impacts when it comes to signaling in terms of protein aggregation and clustering.

      Although this functionality is still available in the toolbox, we have removed the emphasis from it in the manuscript. Nevertheless, to address the reviewer’s comment, we provide an example of when this “smoothing” might be needed: please see Figure S1 from Tasciotti et al. (2025).

      (2) Simone Tasciotti, Daniel Maxim Iascone, Spyridon Chavlis, Luke Hammond, Yardena Katz, Attila Losonczy, Franck Polleux, Panayiota Poirazi. From Morphology to Computation: How Synaptic Organization Shapes Place Fields in CA1 Pyramidal Neurons. bioRxiv 2025.05.30.657022; doi: https://doi.org/10.1101/2025.05.30.657022

      (4) Section 2.3

      (a) The graphical representation of channel gating kinetics is very useful.

      (b) Please warn the users that experimental measurements of channel gating kinetics are extremely variable. Taking the average of the sigmoids or the activation/deactivation/inactivation kinetics provides an illusion that each channel subtype in a given cell type has fixed values of V_1/2, k, delta, and tau, but it is really a range obtained from several experiments. The heterogeneity is real and reflects cell-to-cell variability in channel gating kinetics, not experimental artifacts. Please sensitize the readers that there is not a single value for these channel parameters.

      This is a fair comment, and it refers to a general problem in neuronal modeling. In DendroTweaks, we follow the approach widely used in the community, which indeed does not account for this heterogeneity. We have added a paragraph to the revised manuscript's Discussion (3. Discussion - 3.3 Limitations and future directions - ¶.3) to address this issue.

      (5) Section 2.4

      (a) Same as above: Please sensitize users that the gradients in channel conductances are measured as an average of measurements from several different cells. This gradient need not be present in each neuron, as there could be variability in location-dependent measurements across cells. The average following a sigmoid doesn't necessarily mean that each neuron will have the channel distributed with that specific sigmoid (or even a sigmoid!) with the specific parametric values that the average reported. This is extremely important because there is an illusion that the gradient is fixed across cells and follows a fixed functional form.

      We added this information to our Discussion in the same paragraph mentioned above.

      (b) Please provide an example where the half-maximal voltage of a channel varies as a function of distance (such as Poolos et al., Nature Neuroscience, 2002 or Migliore et al., 1999; Colbert and Johnston, 1997). This might require a step-like function in some scenarios. An illustration would be appropriate because people tend to assume that channel gating kinetics are similar throughout the dendrite. Again, please mention that these shifts are gleaned from the average and don't really imply that each neuron must have that specific gradient, given neuron-to-neuron variability in these measurements.

      We thank the reviewer for the provided literature, which we now cite when describing parameter distributions (2. Results - 2.4 Distributing ion channels - ¶.1). Please note that DendroTweaks' programming interface and data format natively support non-linear distribution of kinetic parameters alongside the channel conductances. As for the step-like function, users can either directly apply the built-in step-like distribution function or create it by combining two constant distributions.
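      To make the step-like option concrete, here is a minimal plain-Python sketch of the idea; the function names are our own illustration, not the DendroTweaks API:

      ```python
      # Hypothetical sketch (not the DendroTweaks API): a step-like distribution
      # of a kinetic parameter (e.g., a half-activation voltage shift) built by
      # combining two constant distributions over path distance from the soma.

      def constant(value):
          """Return a distribution function that is flat in distance."""
          return lambda distance_um: value

      def step(near, far, boundary_um):
          """Combine two constant distributions into a step at boundary_um."""
          near_fn, far_fn = constant(near), constant(far)
          return lambda d: near_fn(d) if d < boundary_um else far_fn(d)

      # Example: V_1/2 shifts by +8 mV for locations beyond 100 um from the soma
      v_half_shift = step(near=0.0, far=8.0, boundary_um=100.0)
      ```

      Calling `v_half_shift(50.0)` then returns the proximal value (0.0) and `v_half_shift(150.0)` the distal one (8.0); any other functional form, including a sigmoid of distance, plugs in the same way.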

      (6) Section 2.5

      (a) It might be useful to provide a mechanism for implementing the normalization of unitary conductances at the cell body, (as in Magee and Cook, 2000; Andrasfalvy et al., J Neuroscience, 2001). Specifically, users should be able to compute AMPAR conductance values at each segment which would provide a somatic EPSP value of 0.2 mV.

      This functionality is indeed useful and will be added in future releases. Currently, it has been mentioned in the list of known limitations when working with synaptic inputs (3. Discussion - 3.3 Limitations and future directions - ¶.5).

      (b) Users could be sensitized about differences in decay time constants of GABA_A receptors that are associated with parvalbumin vs. somatostatin neurons. As these have been linked to slow and fast gamma oscillations and to different somatodendritic locations along different cell types, this might be useful (e.g., 10.1016/j.neuron.2017.11.033; 10.1523/jneurosci.0261-20.2020; 10.7554/eLife.95562.1; 10.3389/fncel.2023.1146278).

      We thank the reviewer for highlighting this important biological detail. DendroTweaks enables users to define model parameters specific to their cell type of interest. For practical reasons, we leave the selection of biologically relevant parameters to the users. However, we will consider adding an explicit example in our tutorials to showcase the toolbox's flexibility in this regard.

      (7) Section 2.6

      While reducing the morphological complexity has its advantages, users of this tool should be sensitized in this section about how the reduction does not capture all the complexity of the dendritic computation. For instance, the segregation/amplification properties of Polsky et al., 2004, and Larkum et al., 2009 would not be captured by a fully reduced model. An example across different levels of reduction, implementing simulations in Figure 7F (but for synapses on the same vs. different branches), would be ideal. Demonstrate segregation/amplification in the full model for the same set of synapses - coming on the same branch/different branch (linear integration of synapses on different branches and nonlinear integration of synapses on the same branch). Then, show that with different levels of reduction, this segregation/amplification vanishes in the reduced model. In addition, while impedance-based approaches account for electrical computation, calcium-based computation is not something that can be captured by reduced models, given the small lambda_calcium values. Given the importance of calcium-activated conductances in electrical behaviour, this becomes extremely important to account for and sensitize users to. The lack of such sensitization results in presumptuous reductions that assume that all dendritic computation is accounted for by reduced models!

      We agree with the reviewer that reduction leads to a loss in the complexity of dendritic computation. This has been stated in both the original algorithm paper (Amsalem et al., 2020) and in our manuscript (e.g., 3. Discussion - 3.2 Comparison to existing modeling software - ¶.6). In fact, to address this problem, we extended the functionality of neuron_reduce to allow for multiple levels of morphology reduction. Our motivation for integrating morphology reduction in the toolbox was to leverage the exploratory power of DendroTweaks to assess how different degrees of reduction alter cell integrative properties, determining which computations are preserved, which are lost, and at what specific reduction level these changes occur. Nevertheless, to address this comment, we've made it more explicit in the Discussion that reduction inevitably alters integrative properties and, at a certain level, leads to loss of dendritic computations.

      (8) Section 2.7

      (a) The validation process has two implicit assumptions:

      (i) There is only one value of physiological measurements that neurons and dendrites are endowed with. The heterogeneity in these measurements even within the same cell type is ignored. The users should be allowed to validate each measurement over a range rather than a single value. Users should be sensitized about the heterogeneity of physiological measurements.

      (ii) The validation process is largely akin to hand-tuning models where a one-to-one mapping of channels to measurements is assumed. For instance, input resistance can be altered by passive properties, by Ih, and by any channel that is active under resting conditions. Firing rate and patterns can be changed by pretty much every single ion channel that expresses along the somatodendritic axis.

      An updated validation process that respects physiological heterogeneities in measurements and accounts for global dependencies would be more appropriate. Please update these to account for heterogeneities and many-to-many mappings between channels and measurements. An ideal implementation would be to incorporate randomized search procedures (across channel parameters spanning neuron-to-neuron variability in channel conductances/gating properties) to find a population of models that satisfy all physiological constraints (including neuron-to-neuron variability in each physiological measurement), rather than reliance on procedures that are akin to hand-tuning models. Such population-based approaches are now common across morphologically-realistic models for different cell types (e.g., Rathour and Narayanan, PNAS, 2014; Basak and Narayanan, J Physiology, 2018; Migliore et al., PLoS Computational Biology, 2018; Basak and Narayanan, Brain Structure and Function, 2020; Roy and Narayanan, Neural Networks, 2021; Roy and Narayanan, J Physiology, 2023; Arnaudon et al., iScience, 2023; Reva et al., Patterns, 2023; Kumari and Narayanan, J Neurophysiology, 2024) and do away with the biases introduced by hand-tuning as well as the assumption of one-to-one mapping between channels and measurements.

      We appreciate the reviewer’s comment and the suggested alternatives to our validation process. We have extended the discussion of these alternative approaches (3. Discussion - 3.2 Comparison to existing modeling software - ¶.5). However, it is important to note that neither the one-value nor the one-to-one mapping assumption is imposed in our approach. It is true that validation is performed on a given model instance with fixed single-value parameters. However, users can discover heterogeneity and degeneracy in their models via interactive exploration. In the GUI, a given parameter can be changed, and the influence of this change on the model output can be observed in real time. Validation can be run after each change to see whether the model output still falls within a biologically plausible regime. This is, of course, time-consuming and less efficient than automated parameter optimization.

      However, and importantly, this is the niche of DendroTweaks. The approach we provide here can indeed be referred to as model hand-tuning. This is intentional: we aim to complement black-box optimization by exposing the relationship between parameters and model outputs. DendroTweaks is not aimed at automated parameter optimization and is not meant to provide the user with parameter ranges automatically. The built-in validation in DendroTweaks is intended as a lightweight, fast feedback tool that guides manual tuning of dendritic model parameters, enhancing intuitive understanding and helping assess the plausibility of outputs; it is not a substitute for comprehensive model validation or optimization. The latter can be done using existing frameworks designed for this purpose, as mentioned by the reviewer.

      (b) Users could be asked to wait for RMP to reach steady state. For instance, in some of the traces in Figure 7, the current injection is provided before RMP reaches steady-state. In the presence of slow channels (HCN or calcium-activated channels), the RMP can take a while to settle down. Users might be sensitized about this. This would also bring to attention the ability of several resting channels in modulating RMP, and the need to wait for steady-state before measurements are made.

      We agree with the observation and updated the validation process accordingly. We have added functionality for simulation stabilization, allowing users to pre-run a simulation before the main simulation time. For example, model.run(duration=1000, prerun_time=300) could be used to stabilize the model for a period of 300 ms before running the main simulation for 1 s.

      (c) Strictly speaking, it is incorrect to obtain membrane time constant by fitting a single exponential to the initial part of the sag response (Figure 7A). This may be confirmed in the model by setting HCN to zero (strictly all active channel conductances to zero), obtaining the voltage-response to a pulse current, fitting a double exponential (as Rall showed, for a finite cable or for a real neuron, a single exponential would yield incorrect values for the tau) to the voltage response, and mapping membrane time constant to the slower of the two time-constants (in the double exponential fit). This value will be very different from what is obtained in Figure 7A. Please correct this, with references to Rall's original papers and to electrophysiological papers that use this process to assess membrane properties of neurons and their dendrites (e.g., Stuart and Spruston, J Neurosci, 1998; Golding and Spruston, J Physiology, 2005).

      We updated the algorithm for calculating the membrane time constant based on the reviewer's suggestions and added the suggested references. The time constant is now obtained in a model with blocked HCN channels (setting maximal conductance to 0) via a double exponential fit, taking the slowest component.
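      As an illustration of this procedure (a self-contained sketch with synthetic data, not the toolbox code), the slower component of a double-exponential fit can be extracted as follows:

      ```python
      # Sketch: fit a double exponential to a (synthetic) voltage decay recorded
      # with HCN blocked, and take the slower time constant as the membrane tau.
      import numpy as np
      from scipy.optimize import curve_fit

      def double_exp(t, a1, tau1, a2, tau2):
          return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      t = np.linspace(0.0, 100.0, 1000)               # time, ms
      v = double_exp(t, 2.0, 3.0, 5.0, 20.0)          # fast (3 ms) + slow (20 ms)
      v += np.random.default_rng(0).normal(0.0, 0.01, t.size)  # recording noise

      popt, _ = curve_fit(double_exp, t, v, p0=(1.0, 1.0, 1.0, 10.0),
                          bounds=(0.0, np.inf))
      tau_m = max(popt[1], popt[3])                   # slower component ~ tau_m
      ```

      Taking the slower component follows Rall's observation that the fast component reflects charge redistribution along the cable, not the membrane time constant.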

      (9) Section 3

      (a) May be good to emphasize the many-to-many mapping between ion channels and neuronal functions here in detail, and on how to explore this within the Dendrotweaks framework.

      We have added a paragraph in the Discussion that addresses both the problems of heterogeneity and degeneracy in biological neurons and neuronal models (3. Discussion - 3.3 Limitations and future directions - ¶.3)

      (b) May be good to have a specific section either here or in results about how the different reduced models can actually be incorporated towards building a network.

      As mentioned earlier, building a network of reduced models is a promising new direction. However, it is beyond the scope of this manuscript, whose primary goal is to introduce DendroTweaks and highlight its capabilities. DendroTweaks is designed for single-cell modeling and provides export capabilities that allow integrating it into broader workflows, including network modeling. We have added a paragraph in the manuscript (3. Discussion - 3.1 Conceptual and implementational accessibility - ¶.2) that addresses how DendroTweaks could be used alongside other software, in particular for scaling up single-cell models to the network level.

      (10) Section 4

      (a) Section 4.3: In the second sentence (line 568), the "first Kirchhoff's law" within parentheses immediately after Q=CV gives an illusion that Q=CV is the first Kirchhoff's law! Please state that this is with reference to the algebraic sum of currents at a node.

      We have corrected the equations and apologize for this oversight. 
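      For the record, the distinction the reviewer raises can be restated as follows (our own restatement, not the manuscript's exact wording): Q = CV defines the membrane capacitance, whereas Kirchhoff's first law concerns the algebraic sum of currents at a node:

      ```latex
      % Kirchhoff's current law at a node (algebraic sum of currents):
      \sum_k I_k = 0
      % Differentiating Q = CV and applying this current balance at a membrane
      % segment yields the standard compartmental equation:
      C_m \frac{dV}{dt} = -\sum_{\mathrm{ion}} I_{\mathrm{ion}} + I_{\mathrm{axial}}
      ```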

      (b) Table 1: In the presence of active ion channels, input resistance, membrane time constant, and voltage attenuation are not passive properties. Input resistance is affected by any active channel that is active at rest (HCN, Kir, A-type K+ through the window current, etc). The same holds for membrane time constant and voltage attenuation as well. This could be made clear by stating if these measurements are obtained in the presence or absence of active ion channels. In real neurons, all these measurements are affected by active ion channels; so, ideally, these are also active properties, not passive! Also, please mention that in the presence of resonating channels (e.g., HCN, M-type K+), a single exponential fit won't be appropriate to obtain tau, given the presence of sag.

      We thank the reviewer for pointing out this ambiguity. What the term “Passive” means in Table 1 (e.g., for the input resistance, R_in) is that the minimal set of parameters needed to validate R_in are the passive ones (i.e., Cm, Ra, and Leak). We have changed the table listing to reflect this.
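      For illustration (a trivial sketch, not toolbox code), the steady-state measurement itself is the same whichever channels shape it; what varies is the interpretation:

      ```python
      # Input resistance from a small current step at steady state: R_in = dV/dI.
      # Whether this value is "passive" depends on which conductances are open
      # at rest (leak only, or also HCN, Kir, etc.) - exactly the reviewer's point.
      delta_v = -5.0e-3    # steady-state voltage deflection, volts
      delta_i = -50.0e-12  # injected current, amperes
      r_in_megaohm = (delta_v / delta_i) / 1e6
      print(r_in_megaohm)  # ~100 megaohms for this example step
      ```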

      Reviewer #2 (Recommendations for the authors):

      (1) Figure 2B and the caption to Figure 2F show and describe the diameter of the sections, whereas the image in Figure 2F shows the radius. Which is the correct one?

      The reason for this is that Figure 2B shows the sections' geometry as it is represented in NEURON, i.e., with diameters, while Figure 2F shows the geometry as it is represented in an SWC file (as these changes are made based on the SWC file). Nevertheless, as mentioned earlier, we decided to remove panel F from the figure in the new version, to present a more important panel on tree graph representations.

      (2) "Each segment can be viewed as an equivalent RC circuit representing a part of the membrane". The example in Figure 2B is perhaps a relatively simple case. For more complex cases where multiple nonlinear conductances are present in each section, would it be possible to show each of these conductances explicitly? If yes, it would be nice to illustrate that.

      We would like to clarify that "can be viewed" here was intended to mean "can be considered," and we have updated the text accordingly. The schematic RC circuits were added to the corresponding figure for illustration purposes only and are not present in the GUI, as this would indeed be impractical for multiple conductances.

      (3) Some extra citations could be added. For example, it is a little strange that BRIAN2 is mentioned, but NEST is not. It might be worth mentioning and citing it. Also, the Allen Cell Types Database is mentioned, but no citation for it is given. It could be useful to add such citations (https://doi.org/10.1038/s41593-019-0417-0, https://doi.org/10.1038/s41467-017-02718-3).

      Brian 2 is extensively used in our lab on its own and as a foundation of the Dendrify library (Pagkalos et al., 2023). As stated in the discussion, we are considering bridging reduced Hodgkin-Huxley-type models to Dendrify leaky integrate-and-fire type models. For these reasons, Brian 2 is mentioned in the discussion. However, we acknowledge that our previous overview omitted references to some key software, which have now been added to the updated manuscript. We appreciate the reviewer providing references that we had overlooked.

      (3) Pagkalos, M., Chavlis, S. & Poirazi, P. Introducing the Dendrify framework for incorporating dendrites to spiking neural networks. Nat Commun 14, 131 (2023). https://doi.org/10.1038/s41467-022-35747-8

    1. The Day My Smart Vacuum Turned Against Me
      • The iLife A11 smart vacuum was found to constantly send detailed data, including 3D home maps, to manufacturer servers without explicit user consent.
      • When the user blocked telemetry transmissions, the vacuum was remotely disabled by a command from the manufacturer, resulting in repeated failures.
      • Reverse engineering revealed the device runs a Linux OS with an open root access port and includes software allowing the manufacturer to remotely control or disable it.
      • Service centers temporarily restored functionality by reconnecting the device to manufacturer servers, but it failed again once telemetry was blocked.
      • This hardware and practice are common in many smart vacuums from brands like Xiaomi, Wyze, and Viomi, raising broad privacy and control concerns.
      • The case highlights significant security risks and loss of user autonomy inherent to many "smart" IoT devices relying on cloud connectivity.
    1. eLife Assessment

      This is an important study with convincing evidence that multi-voxel fMRI activity patterns for threat-conditioned stimuli are altered by learning CS-US contingencies. The analyses are dense, but rigorous. The protocol is quite nuanced and complex, but the authors have done a fair job of explaining and presenting the results. The work is relevant for our understanding of how effective learning changes neural stimulus representation in the human brain.

    2. Reviewer #1 (Public review):

      Summary:

      The authors conducted a human neuroimaging study investigating the role of context in the representation of fear associations when the contingencies between a conditioned stimulus and shock unconditioned stimulus switches between contexts. The novelty of the analysis centered on neural pattern similarity to derive a measure of context and cue stability and generalization across different regions of the brain. Given the complexity and nuance of the results, it is kind of difficult to provide a concise summary. But during fear and reversal, there was cue generalization (between current CS+ cues) in the canonical fear network, and "item stability" for cues that changed their association with the shock in the IFG and precuneus. Reinstatement was quantified as pattern similarity for items or sets of cues from the earlier phases to the test phases, and they found different patterns in the IFG and dmPFC. A similar analytical strategy was applied to contexts.

      Strengths:

      Overall, I found this to be a novel use of MVPA to study the role of context in reversal/extinction of human fear conditioning that yielded interesting results. The paper was overall well-written, with a strong introduction and fairly detailed methods and results. The lack of any univariate contrast results from the test phases was used as motivation for the neural pattern similarity approach, which I appreciated as a reader.

      I have no additional or new comments. The authors adequately addressed my major comments and concerns.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Reviewing Editor Comments:

      The study design used reversal learning (i.e. the CS+ becomes the CS- and vice versa), while the title mentions 'fear learning and extinction'. In my opinion, the paper does not provide insight into extinction and the title should be changed.

      Thank you for this important point. We agree that our paradigm focuses more directly on reversal learning than on standard extinction, as the test phases represent extinction in the absence of a US but follow a reversal phase. To better reflect the core of our investigation, we have changed the title.

      Proposed change in manuscript (Title): Original Title: Distinct representational properties of cues and contexts shape fear learning and extinction 

      New Title: Distinct representational properties of cues and contexts shape fear and reversal learning

      Secondly, the design uses 'trace conditioning', whereas the neuroscientific research and synaptic/memory models are rather based on 'delay conditioning'. However, given the limitations of this design, it would still be possible to make the implications of this paper relevant to other areas, such as declarative memory research.

      This is an excellent point, and we thank you for highlighting it. Our design, where a temporal gap exists between the CS offset and US onset, is indeed a form of trace conditioning. We also agree that this feature, particularly given the known role of the hippocampus in trace conditioning, strengthens the link between our findings and the broader field of episodic memory.

      Proposed change in manuscript (Methods, Section "General procedure and stimuli"): We inserted the following text (lines 218-220): "It is important to note that the temporal gap between the CS offset and potential US delivery (see Figure 1A) indicates that our paradigm employs a trace conditioning design. This form of learning is known to be hippocampus-dependent and has been distinguished from delay conditioning."

      Proposed change in manuscript (Discussion): We added the following to the discussion (lines 774-779): "Furthermore, our use of a trace conditioning paradigm, which is known to engage the hippocampus more than delay conditioning does, may have facilitated the detection of item-specific, episodic-like memory traces and their interaction with context. This strengthens the relevance of our findings for understanding the interplay between aversive learning and mechanisms of episodic memory."

      The strength of the evidence at this point would be described as 'solid'. In order to increase the strength (to convincing), analyses including FWE correction would be necessary. I think exploratory (and perhaps some FDR-based) analyses have their valued place in papers, but I agree that these should be reported as such. The issue of testing multiple independent hypotheses also needs to be addressed to increase the strength of evidence (to convincing). Evaluating the design with 4 cues could lead to false positives if, for example, current valence, i.e. (CS++ and CS-+) > (CS+- and CS--), and past valence (CS++ > CS+-) > (CS-+ > CS--) are tested as independent tests within the same data set. Authors need to adjust their alpha threshold.

      We fully agree. As summarized in our general response, we have implemented two major changes to our statistical approach to address these concerns comprehensively. These, are stated above, are the following:

      (1) Correction for Multiple Hypotheses: We previously used FWER-corrected p-values that were obtained through permutation testing. We have now applied a Bonferroni adjustment to the FWER-corrected threshold (previously 0.05) used in our searchlight analyses. For instance, in the acquisition phase, since 2 independent tests (contrasts) were conducted, the significance threshold for each of these searchlight maps was set to p < 0.025 (after FWE correction estimated through non-parametric permutation testing); in reversal, 4 tests were conducted, hence the significance threshold was set to p < 0.0125. This change is now clearly described in the Methods section (section “Searchlight approach”, lines 477-484). It had no impact on our searchlight results, given that all clusters previously reported as significant at the earlier FWER alpha of 0.05 were also significant at the new, Bonferroni-adjusted thresholds; we also now report the cluster-specific corrected p-values in the cluster tables in the Supplementary Material.

      (2) ROI Analyses: Our ROI-based analyses used FDR-based correction within each item-reinstatement/generalized-reinstatement pair of each ROI. We now explicitly state in the abstract, methods, and results sections that these ROI-based analyses are exploratory and secondary to the primary whole-brain results, given that the correction method used is more liberal, in accordance with the exploratory character of these analyses.

      We are confident that these changes ensure both the robustness and transparency of our reported findings.
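      The arithmetic of the adjustment described in point (1) is simply the base alpha divided by the number of contrasts in each phase:

      ```python
      # Bonferroni adjustment of the FWER-corrected significance threshold:
      # divide the base alpha by the number of independent contrasts per phase.
      alpha = 0.05
      n_contrasts = {"acquisition": 2, "reversal": 4}
      thresholds = {phase: alpha / n for phase, n in n_contrasts.items()}
      print(thresholds)  # {'acquisition': 0.025, 'reversal': 0.0125}
      ```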

      Reviewer #1 (Public Review):

      (1) I had a difficult time unpacking lines 419-420: "item stability represents the similarity of the neural representation of an item to other representations of this same item."

      We thank the reviewer for pointing out this lack of clarity. We have revised the definition to be more intuitive and have ensured it is introduced earlier in the manuscript.

      Proposed change in manuscript (Introduction, lines 144-150): We introduced the concept earlier and more clearly: "Furthermore, we can measure the consistency of a neural pattern for a given item across multiple presentations. This metric, which we refer to as “item stability”, quantifies how consistently a specific stimulus (e.g., the image of a kettle) is represented in the brain across multiple repetitions of the same item. Higher item stability has been linked to successful episodic memory encoding (Xue et al., 2010)."

      Proposed change in manuscript (Methods, Section "Item stability and generalization of cues"): Original text: "Thus, item stability represents the similarity of the neural representation of an item to other representations of this same item (Xue, 2018), or the consistency of neural activity across repetitions (Sommer et al., 2022)."

      Revised text (lines 434-436): "Item stability is defined as the average similarity of neural patterns elicited by multiple presentations of the same item (e.g., the kettle). It therefore measures the consistency of an item's neural representation across repeated encounters."
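      The revised definition can be illustrated with a small sketch (our own illustration, not the study's analysis code): item stability is the mean pairwise similarity of one item's patterns across repetitions:

      ```python
      # Item stability: average pairwise Pearson correlation between the activity
      # patterns evoked by repeated presentations of the same item.
      import numpy as np

      def item_stability(patterns):
          """patterns: (n_repetitions, n_voxels) array for one item."""
          corr = np.corrcoef(patterns)                  # repetition x repetition
          return corr[np.triu_indices_from(corr, k=1)].mean()

      rng = np.random.default_rng(0)
      template = rng.normal(size=100)                   # the item's "true" pattern
      reps = template + 0.1 * rng.normal(size=(4, 100)) # 4 noisy repetitions
      stability = item_stability(reps)                  # near 1 -> highly consistent
      ```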

      (2) The authors use the phrase "representational geometry" several times in the paper without clearly defining what they mean by this.

      We apologize for this omission. We have now added a clear and concise definition of "representational geometry" in the Introduction, citing the foundational work by Kriegeskorte et al. (2008).

      Proposed change in manuscript (Introduction): We inserted the following text (lines 117-125): "By contrast, multivariate pattern analyses (MVPA), such as representational similarity analysis (RSA; Kriegeskorte et al., 2008), have emerged as a powerful tool to investigate the content and structure of these representations (e.g., Hennings et al., 2022). This approach allows us to characterize the “representational geometry” of a set of items – that is, the structure of similarities and dissimilarities between their associated neural activity patterns. This geometry reveals how the brain organizes information, for instance, by clustering items that are conceptually similar while separating those that are distinct."

      (3) The abstract is quite dense and will likely be challenging to decipher for those without a specialized knowledge of both the topic (fear conditioning) and the analytical approach. For instance, the goal of the study is clearly articulated in the first few sentences, but then suddenly jumps to a sentence stating "our data show that contingency changes during reversal induce memory traces with distinct representational geometries characterized by stable activity patterns across repetitions..." this would be challenging for a reader to grok without having a clear understanding of the complex analytical approach used in the paper.

      We agree with your assessment. We have rewritten it to be more accessible to a general scientific audience, by focusing on the conceptual findings rather than methodological jargon.

      Proposed change in manuscript (Abstract): We revised the abstract to be clearer. It now reads: "When we learn that something is dangerous, a fear memory is formed. However, this memory is not fixed and can be updated through new experiences, such as learning that the threat is no longer present. This process of updating, known as extinction or reversal learning, is highly dependent on the context in which it occurs. How the brain represents cues, contexts, and their changing threat value remains a major question. Here, we used functional magnetic resonance imaging and a novel fear learning paradigm to track the neural representations of stimuli across fear acquisition, reversal, and test phases. We found that initial fear learning creates generalized neural representations for all threatening cues in the brain’s fear network. During reversal learning, when threat contingencies switched for some of the cues, two distinct representational strategies were observed: generalized patterns persisted for currently threatening cues, while highly stable representations of individual cues (i.e., item-specific) emerged for cues that changed their valence, particularly in the precuneus and prefrontal cortex. We also observed that the brain represents contexts more distinctly during reversal learning. Moreover, additional exploratory analyses showed that the degree of this context specificity in the prefrontal cortex predicted the subsequent return of fear, providing a potential neural mechanism for fear renewal. Our findings reveal that the brain uses a flexible combination of generalized and specific representations to adapt to a changing world, shedding new light on the mechanisms that support cognitive flexibility and the treatment of anxiety disorders via exposure therapy."

      (4) Minor: I believe it is STM200 not the STM2000.

      Thank you for pointing this out. We have corrected it in the Methods section.

      Proposed change in manuscript (Methods, Page 5, Line 211): Original: STM2000 -> Corrected: STM200

      (5) Line 146: "...could be particularly fruitful as a means to study the influence of fear reversal or extinction on context representations, which have never been analyzed in previous fear and extinction learning studies." I direct the authors to Hennings et al., 2020, Contextual reinstatement promotes extinction generalization in healthy adults but not PTSD, as an example of using MVPA to decipher reinstatement of the extinction context during test.

      Thank you for pointing us towards this relevant work. We have revised the sentence to reflect the state of the literature more accurately.

      Proposed change in manuscript (Introduction, Page 3): Original text: "...which have never been analyzed in previous fear and extinction learning studies." 

      Revised text (lines 154-157): "...which, despite some notable exceptions (e.g., Hennings et al., 2020), have been less systematically investigated than cue representations across different learning stages."

      (6) This is a methodological/conceptual point, but it appears from Figure 1 that the shock occurs 2.5 seconds after the CS (and context) goes off the screen. This would seem to be more like a trace conditioning procedure than a standard delay fear conditioning procedure. This could be a trivial point, but there have been numerous studies over the last several decades comparing differences between these two forms of fear acquisition, both behaviorally and neurally, including differences in how trace vs delay conditioning is extinguished.

      Thank you for this pertinent observation; this was also pointed out by the editor. As detailed in our response to the editor, we now explicitly acknowledge that our paradigm uses a trace conditioning design, and have added statements to this effect in the Methods and Discussion sections (lines 218-220, and 774-779).

      (7) In Figure 4, it would help to see the individual data points derived from the model used to test significance between the different conditions (reinstatement between Acq, reversal, and test-new).

      We agree that this would improve the transparency of our results. We have revised Figure 4 to include individual data points, which are now plotted over the bar graphs. 

      Reviewer #2 (Public Review & Recommendations)

      Use a more stringent method of multiple comparison correction: voxel-wise FWE instead of FDR; Holm-Bonferroni across multiple hypothesis tests. If FDR is chosen then the exploratory character of the results should be transparently reported in the abstract.

      Thank you for these critical comments regarding our statistical methods. As detailed in the general response and response to the editor (Comment 3), we have thoroughly revised our approach to ensure its rigor. We now clarify that our whole-brain analyses consistently use FWER-corrected p-values. Additionally, the significance of these FWER-corrected p-values (obtained through permutation testing), which were previously assessed against a default threshold of 0.05, is now evaluated against a Bonferroni-adjusted threshold of 0.05 divided by the number of tested contrasts in each experimental phase. We have modified the revised manuscript accordingly, in the Methods section (lines 473-484) and in the supplementary material, where we added the p-values (FWER-corrected) of each cluster, evaluated against the new Bonferroni-adjusted thresholds. Of note, this had no impact on our searchlight results: all clusters previously reported as significant at the alpha threshold of 0.05 remained significant at the new, corrected thresholds.

      Proposed change in manuscript (Methods): We revised the relevant paragraphs (lines 473-484): "Significance corresponding to the contrast between conditions of the maps of interest was FWER-corrected using nonparametric permutation testing at the cluster level (10,000 permutations) to estimate significant cluster size. Additionally, we adjusted the alpha threshold against which we assessed the significance of the cluster-specific FWER-corrected p-values using Bonferroni correction. To this end, we divided the default corrected alpha threshold of 0.05 by the number of statistical comparisons that were conducted in each experimental phase. For example, for fear acquisition, we compared the CS+>CS- contrast for both item stability and cue generalization, resulting in 2 comparisons and hence a corrected alpha threshold of 0.025. Only clusters that had a FWER-corrected p-value below the Bonferroni-adjusted threshold were deemed significant. All searchlight analyses were restricted within a gray matter mask.”
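      The alpha adjustment described above can be sketched in a few lines (a minimal illustration of the arithmetic only, not the manuscript's analysis code; the function names are our own):

      ```python
      def bonferroni_alpha(n_comparisons, alpha=0.05):
          # Divide the default alpha threshold by the number of
          # statistical comparisons conducted in the experimental phase.
          return alpha / n_comparisons

      def cluster_is_significant(fwer_p, n_comparisons, alpha=0.05):
          # A cluster counts as significant only if its FWER-corrected
          # p-value falls below the Bonferroni-adjusted threshold.
          return fwer_p < bonferroni_alpha(n_comparisons, alpha)

      # Fear acquisition: CS+>CS- tested for both item stability and cue
      # generalization, i.e., 2 comparisons and an adjusted threshold of 0.025.
      assert bonferroni_alpha(2) == 0.025
      ```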

      The authors report fMRI results from line 96 onwards; all of these refer exclusively to mass-univariate fMRI which could be mentioned more transparently... The authors contrast "activation fMRI" with "RSA" (line 112). Again, I would suggest mentioning "mass-univariate fMRI", and contrasting this with "multivariate" fMRI, of which RSA is just one flavour. For example, there is some work that is clear and replicable, demonstrating human amygdala involvement in fear conditioning using SVM-based analysis of highresolution amygdala signals (one paper is currently cited in the discussion).

      Thank you for this important clarification. We have revised the manuscript to incorporate your suggestions. We now introduce our initial analyses as "mass-univariate" and contrast them with the "multivariate pattern analysis" (MVPA) approach of RSA.

      Proposed change in manuscript (Introduction): We revised the relevant paragraphs (lines 113-125): "While mass-univariate functional magnetic resonance imaging (fMRI) activation studies have been instrumental in identifying the brain regions involved in fear learning and extinction, they are insensitive to the patterns of neural activity that underlie the stimulus-specific representations of threat cues and contexts. In contrast, multivariate pattern analysis methods, such as representational similarity analysis (RSA; Kriegeskorte et al., 2008), have emerged as a powerful tool to investigate the content and structure of these representations (e.g., Hennings et al., 2022). This approach allows us to characterize the “representational geometry” of a set of items – i.e., the structure of similarities and dissimilarities between their associated neural activity patterns. This geometry reveals how the brain organizes information, for instance, by clustering items that are conceptually similar while separating those that are distinct.”

      Line 177: unclear how incomplete data was dealt with. If there are 30 subjects and 9 incomplete data sets, then how do they end up with 24 in the final sample?

      We apologize for the unclear wording in our original manuscript. We have clarified the participant exclusion pipeline in the Methods section.

      Proposed change in manuscript (Methods, Section "Participants"): Original text: "The number of participants with usable fMRI data for each phase was as follows: N = 30 for the first phase of day one, N = 29 for the second phase of day one, N = 27 for the first phase of day two, and N = 26 for the second phase of day two. Of the 30 participants who completed the first session, four did not return for the second day and thus had incomplete data across the four experimental phases. An additional two participants were excluded from the analysis due to excessive head movement (>2.5 mm in any direction). This resulted in a final sample of 24 participants (8 males) between 18 and 32 years of age (mean: 24.69 years, standard deviation: 3.6) with complete, low-motion fMRI data for all analyses." 

      Revised text: "The number of participants with usable fMRI data for each phase was as follows: N = 30 for the first phase of day one, N = 29 for the second phase of day one, N = 27 for the first phase of day two, and N = 26 for the second phase of day two. An additional two participants were excluded from the analysis due to excessive head movement (>2.5 mm in any direction). This resulted in a final sample of 24 participants (8 males) between 18 and 32 years of age (mean: 24.69 years, standard deviation: 3.6) with complete, low-motion fMRI data for all analyses."

      Typo in line 201.  

      Thank you for your comment. We have re-examined line 201 (“interval (Figure 1A). A total of eight CSs were presented during each phase and”) and the surrounding text but were unable to identify a clear typographical error in the provided quote. However, in the process of revising the manuscript for clarity, we have rephrased this section.

      it would be good to see all details of the US calibration procedure, and the physical details of the electric shock (e.g. duration, ...).

      Thank you for your comment. We have expanded the Methods section to include these important details.

      Proposed change in manuscript (Methods, Section "General procedure and stimuli"): We inserted the following text (lines 225-230): "Electrical stimulation was delivered via two Ag/AgCl electrodes attached to the distal phalanx of the index and middle fingers of the non-dominant hand. The intensity of the electrical stimulation was calibrated individually for each participant prior to the experiment. Using a stepping procedure, the voltage was gradually increased until the participant rated the sensation as 'unpleasant but not painful'."

      "beta series modelling" is a jargon term used in some neuroimaging software but not others. In essence, the authors use trial-by-trial BOLD response amplitude estimates in their model. Also, I don't think this requires justification - using the raw BOLD signal would seem outdated for at least 15 years.

      Thank you for this helpful suggestion. We have simplified the relevant sentences for improved clarity.

      Proposed change in manuscript (Methods, Section "RSA"): Original text: "...an approach known as beta-series modeling (Rissman et al., 2004; Turner et al., 2012)." 

      Revised text (lines 391-393): "...an approach that allows for the estimation of trial-by-trial BOLD response amplitudes, often referred to as beta-series modeling (Rissman et al., 2004). Specifically, we used a Least Square Separate (LSS) approach..."

      I found the use of "Pavlovian trace" a bit confusing. The authors are coming from memory research where "memory trace" is often used; however, in associative learning the term "trace conditioning" means something else. Perhaps this can be explained upon first occurrence, and "memory trace" instead of "Pavlovian trace" might be more common.

      We are grateful for this comment, as it highlights a critical point of potential confusion, especially given that we now acknowledge our paradigm uses a trace conditioning design. To eliminate this ambiguity, we have replaced all instances of "Pavlovian trace" with "lingering fear memory trace" throughout the manuscript (lines 542 and 599).

      I would suggest removing evaluative statements from the results (repeated use of "interesting").

      Thank you for this valuable suggestion. We have reviewed the Results section and removed subjective evaluative words to maintain a more objective tone. 

      Line 882: one of these references refers to a multivariate BOLD analysis using SVM, not explicitly using temporal information in the signal (although they do show session-by-session information).

      Thank you for this correction. We have re-examined the cited paper (Bach et al., 2011) and removed it from the text accordingly.

    1. Reviewer #3 (Public review):

      Summary:

      This article is about the neural circuitry underlying motion vision in the fruit fly. Specifically, it regards the roles of two identified neurons, called C2 and C3, that form columnar connections between neurons in the lamina and medulla, including neurons that are presynaptic to the elementary motion detectors T4 and T5. The approach takes advantage of specific fly lines in which one can disable the synaptic outputs of either or both of the C2/3 cell types. This is combined with optical recording from various neurons in the circuit, and with behavioral measurements of the turning reaction to moving stimuli.

      The experiments are planned logically. The effects of silencing the C2/C3 neurons are substantial in size. The dominant effect is to make the responses of downstream neurons more sustained, consistent with a circuit role in feedback or feedforward inhibition. Silencing C2/C3 also makes the motion-sensitive neurons T4/T5 less direction-selective. However, the turning response of the fly is affected only in subtle ways. Detection of motion appears unaffected. But the response fails to discriminate between two motion pulses that happen in close succession. One can conclude that C2/C3 are involved in the motion vision circuit, by sharpening responses in time, though they are not essential for its basic function of motion detection.

      Strengths:

      The combination of cutting-edge methods available in fruit fly neuroscience. Well-planned experiments carried out to a high standard. Convincing effects documenting the role of these neurons in neural processing and behavior.

      Weaknesses:

      The report could benefit from a mechanistic argument linking the effects at the level of single neurons, the resulting neural computations in elementary motion detectors, and the altered behavioral response to visual motion.

    1. document current activity in situ

      This is a major change in work/do-how

      Expanding on the principle of maintaining continuity of focus

      keep the flow uninterrupted

      do not go to where information may eventually be stored

      but create/curate/formulate everything that comes to mind in the same page in the current focal context

      Worry about referencing it from all pertinent context eventually

      Thus instead of going to get your diary or daily log out and switching to it to document what you do

      create a back link to where it needs to be recorded and linked to

      cf

    1. Reviewer #3 (Public review):

      Donofrio et al. report a new observation that in normal aging mice, anti-calbindin whole-mount staining and coronal immunohistochemistry in the cerebellum often show a sagittally patterned loss of Purkinje cells with age. The authors address a central concern that calbindin antibody staining alone is not sufficient to definitively assess Purkinje cell loss, and corroborate their antibody staining data with transgenic Pcp2-CRE x flox-GFP reporter mice and Neutral Red staining. The authors then investigate whether this patterned Purkinje loss correlates with the known parasagittal expression of zebrin-II, finding a strong but imperfect correlation with zebrin-II antibody staining. They next draw a connection between this age-related Purkinje loss to the age-related decline in motor function in mice, with trending but non-significant statistical association between the severity/patterning of Purkinje loss and motor phenotypes within cohorts of aged mice. Finally, the authors look at post-mortem human cerebellar tissues from deceased healthy donors between 21 and 74 years of age, finding a positive correlation between Purkinje degeneration and age, but with unknown spatial patterning.

      The conclusions drawn from this study are well supported by the data provided, with image quantification corroborating visual observations. The authors highlight several examples of parasagittal patterning of Purkinje cell degeneration in disease, and they show that proper methodologies must be used to account for these patterns to avoid highly variable data in the sagittal plane. The authors aptly point out that additional work is needed to investigate the spatial patterns of Purkinje cell loss in the human cerebellum.

    1. Reviewer #3 (Public review):

      Summary:

      In this manuscript, Edwards et al. describe hamFISH, a customizable and cost-efficient method for performing targeted spatial transcriptomics. hamFISH utilizes highly amplified multiplexed branched DNA amplification, and the authors extensively describe hamFISH development and its advantages over prior variants of this approach.

      The authors then used hamFISH to investigate an important circuit in the mouse brain for social behavior, the medial amygdala (MeA). To develop a hamFISH probe set capable of distinguishing MeA neurons, the authors mined published single cell RNA-sequencing datasets of the MeA, ultimately creating a panel of 32 hamFISH probes that mostly cover the identified MeA cell types. They evaluated over 600,000 MeA cells and classified neurons into 16 inhibitory and 10 excitatory types, many of which are spatially clustered.

      The authors combined hamFISH with viral and other circuit tracer injections to determine whether the identified MeA cell populations sent and/or received unique inputs from connected brain regions, finding evidence that several cell types had unique patterns of input and output. Finally, the authors performed hamFISH on the brains of male mice that were placed in behavioral conditions that elicit aggressive, infanticidal, or mating behaviors, finding that some cell populations are selectively activated (as assessed by c-fos mRNA expression) in specific social contexts.

      Strengths:

      (1) The authors developed an optimized tissue preparation protocol for hamFISH and implemented oligopools instead of individually synthesized oligonucleotides to reduce costs. The branched DNA amplification scheme improved smFISH signal compared to previous methods, and multiple variants provide additional improvements in signal intensity and specificity. Compared to other spatial transcriptomics methods, the pipeline for imaging and analysis is streamlined, and is compatible with other techniques like fluorescence-based circuit tracing. This approach is cost-effective and has several advantages that make it a valuable addition to the list of spatial transcriptomics toolkits.

      (2) Using 31 probes, hamFISH was able to detect 16 inhibitory and 10 excitatory neuron types in the MeA subregions, including the vast majority of cell types identified by other transcriptomics approaches. The authors quantified the distributions of these cell types along the anterior-posterior, dorsal-ventral, and medial-lateral axes, finding spatial segregation among some, but not all, MeA excitatory and inhibitory cell types. The authors additionally identified a class of inhibitory neurons expressing Ndnf (and a subset of these that express Chrna7) that project to multiple social chemosensory circuits.

      (3) The authors combined hamFISH with MeA input and output mapping, finding cell-type biases in the projections to the MPOA, BNST, and VMHvl, and inputs from multiple regions.

      (4) The authors identified excitatory and inhibitory cell types, and patterns of activity across cell types, that were selectively activated during various social behaviors, including aggression, mating, and infanticide, providing new insights and avenues for future research into MeA circuit function.

      Weaknesses:

      (1) Gene selection for hamFISH is likely to still be a limiting factor, even with the expanded (32-probe) capacity. This may have contributed to the lack of ability to identify sexually dimorphic cell types (Fig. S2B). This is an expected tradeoff for a method that has major advantages in terms of cost and adaptability.

      (2) Adaptation of hamFISH, for example, to adapt it to other brain regions or tissues, may require extensive optimization. This does not preclude it from being highly useful for other brain regions with extra effort.

      (3) Pairing this method with behavioral experiments is likely to require further optimization, as c-fos mRNA expression is an indirect and incomplete survey of neuronal activity (e.g. not all cell types upregulate c-fos when electrically active). As such, there is a risk of false negative results that limit its utility for understanding circuit function.

      (4) The incompatibility of hamFISH with thicker tissue samples and minimal optical sectioning introduce additional technical limitations. For example, it would be difficult to densely sample larger neural circuits using serial 20 micron sections.

    2. Author response:

      The following is the authors’ response to the original reviews

      Reviewing Editor Comments:

      Recommendations for improvement:

      (1) Address data presentation, editing, and other issues of lack of clarity as pointed out by the reviewers.

      We have now addressed all comments from reviewers that identify editing errors and lack-of-clarity issues. Regarding data presentation, we have made some changes, for example including a combined heatmap to show consistency between row names (Figure 2 - figure supplement 2), but we have also kept some stylistic features, such as the balance between main and supplemental figures, which we think fits more naturally with the story of the paper.

      (2) Inclusion of requested and critical details in the methodology section, an important component for broad applicability of a new methodology by other investigators.

      We have added the requested details to the methods section, specifically the RCA protocol.

      (3) More in-depth discussion of the limitations of the methodology and approach to capture important but more complex components of tissues of interest, for example, sexual dimorphism.

      We have now edited the ‘pitfalls of study’ section in the discussion to include further detail of the limitations of the number of genes that can be used to deeply profile transcriptomic types, including sexual dimorphism. Regarding its use in other tissues of interest, we have now included a reference in the discussion (Bintu et al., 2025) where a similar strategy has been used to profile cells in the olfactory epithelium and olfactory bulb. We have also used hamFISH in other brain areas (as commented in our public reviews responses) but as this is unpublished work we will refrain from mentioning it in the main text.

      Reviewer #1 (Recommendations for the authors):

      The manuscript by Edwards et al. would benefit from minor revisions. Here, we outline several points that could / should be addressed:

      (1) General balance of data presentation between main and supplementary figures

      (a) quantifications were often missing from main figures and only presented in the supplements

      Thank you for raising this point. We believe that the balance of panels between the main and supplemental figures matches our story and results section well with quantifications included in the main figures where appropriate.

      (b) more informative figure legends in supplements (e.g.: Supplementary Figure I - Figure 3)

      We have now revised the figure legends and added more description where appropriate.

      (c) missing subpanel in Figure 3; figure legend describes 3H, which is missing in the figure

      We thank the reviewer for pointing this out and have now amended the subpanel.

      stand-alone figure on inhibitory neuron cluster i3 cells

      We agree that this is an important characterisation of i3 cells but decided to place this figure in the supplement as it does not fall within the main storyline (defining transcriptomic characterisation of cell types in a multimodal fashion), but rather acts as accessory information for those specifically interested in these inhibitory cell types.

      statistical tests used (e.g.: Figure 1 C -, Supplementary Figure 3 - Figure 2)/ graphs shown (Supplementary Figure 1 - 1 D)

      The statistical tests used are described in the figure legends.

      t-SNE dimensionality reduction of positional parameters

      Explanations of the t-SNE dimensionality reduction of positional parameters can be found in the materials and methods.

      (d) heatmaps similarly informative and more convincing

      We have included an extra heatmap (Figure 2 - figure supplement 2) in response to Reviewer 3’s comment (see below) in order to more easily follow genes across all the different clusters. We hope this helps to make the heatmaps more convincing and informative.

      code availability

      Code availability is described in the methods section of the manuscript.

      page 6, 3rd paragraph wrong description of PMCo abbreviation

      We thank the reviewer for identifying the mistake and we have now amended it.

      Reviewer #2 (Recommendations for the authors):

      The pre-existing scRNA-seq dataset on which the manuscript is based is an older Drop-seq dataset for which minimal QC information is provided. The authors should include QC information (genes/cells and UMIs/cells) in the Methods. Moreover, the Seurat clustering of these cells and depiction of marker genes in feature plots are not shown.

      It is therefore difficult to determine how the authors selected their 31 genes for their hamFISH panel, or how selective they are to the original Drop-seq clusters.

      The QC information of this dataset can be found in the original publication (Chen et al., 2019) with our clustering methods described in the materials and methods section. We have not included individual gene names in our heatmap plots for presentation purposes (there are over 200 rows), but the data and cluster descriptions can be found in supplemental tables.

      Reviewer #3 (Recommendations for the authors):

      (1) The imaging modality is not entirely clear in the methods. The microscopy technique is referenced to prior work and involves taking z-stacks, but analysis appears to be done on maximum z-projections, which seems like it would introduce the risk of false attribution of gene expression to cells that are overlapping in "z".

      Thank you for pointing out the technical limitation of the microscopy. For imaging we used epifluorescence microscopy with 14x 500 nm z-steps to collect our raw data and generate a maximum intensity projection for further analysis. Because of the thin sections (10 um) used for the imaging, the overlap between cells in z is expected to be minimal. However, we cannot completely rule out misattribution raised in the comment. The method section contains this information.
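      The projection step described in this response can be sketched with NumPy (the field-of-view size below is an illustrative assumption; only the 14 z-steps come from the response):

      ```python
      import numpy as np

      # Illustrative z-stack: 14 z-steps over a small field of view.
      rng = np.random.default_rng(0)
      stack = rng.random((14, 256, 256))

      # Maximum intensity projection: keep, for each (y, x) pixel,
      # the brightest value found across all z-steps.
      mip = stack.max(axis=0)

      assert mip.shape == (256, 256)
      # The projection is at least as bright as any single slice.
      assert np.all(mip >= stack[0])
      ```

      Because the sections are thin (10 um) relative to the 14 x 500 nm stack depth, most cells contribute to only one projected footprint, which is why overlap in z is expected to be minimal.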

      (2) Supplemental Figure 1 - Figure Supplement 2B: RCA looks significantly different when compared to v2 smFISH from the representative image, although it is written as comparable. Additionally, there is no information about RCA mentioned in the Materials and Methods section. Supplemental Figure 1 - Figure Supplement 2B: The figure label for RCA is missing.

      By comparable we are referring to the intensity rather than the pattern, as mentioned in the results section. We did not analyze the number of spots. It is true that the pattern of RCA signal is much sparser due to its inherently lower sensitivity compared with hamFISH. We thank the reviewer for identifying the lack of a methodological RCA description and have amended the manuscript to include this. We have also now amended the missing RCA label in the figure.

      (3) Figure 2C and associated supplement: The rows (each gene) are not consistent across the subpanels (i.e. they do not line up left-to-right), this makes it difficult for the reader to follow the patterns that distinguish the cell types in each subset.

      We have done this as we believe it makes for an easier interpretation of inhibitory vs excitatory clusters for the reader. However, we agree with the reviewer that one may wish to look at the dataset as a whole with a consistent gene order, and we have now provided this in the corresponding supplemental figure.  

      (4) "Consistent with previous work, most inhibitory classes are localized in the dorsal and ventral subdivisions of the MeA, whereas excitatory neurons occupy primarily the ventral MeA (Figure 2D, Figure 2 - Figure Supplement 2C, Figure 1D)". - The reference to Figure 1D seems to be an error.

      We thank the reviewer for identifying the mistake, and we have now amended it.

      (5) Supplemental Figure 2 - Figure Supplement 1, "published by Chen et al." - should have a proper reference number to be compatible with the rest of the manuscript. Also, the lack of gene info makes it difficult to understand Panel A. Finally, the text on Panel B refers to "hamMERFISH" which seems an error.

      We thank the reviewer for identifying the mistake on Panel B, it has now been amended. We have also changed the reference format. Regarding the lack of gene information in panel A, it is difficult to present all row names due to the large number of rows (>200), but this information can be found in supplemental table 2.

      (6) Supplemental Figure 2 - Figure Supplement 1: there are thin dividing lines drawn on each section, but these are not described or defined, making it difficult to understand what is being delineated.

      We thank the reviewer for identifying this omission and have now edited the figure legend to contain a description.

      (7) Page 4, "...we found 26 clusters in cells that are positive for Slc32a1 (inhibitory) or Slc17a6 (encoding Vglut2 and therefore excitatory) positive (Figure 2 - figure supplement 1A, Table S2)."

      This seems to be an error as Figure 2 - figure supplement 1A does not show this.

      We have double-checked and confirmed that this description accurately reflects the panel.

      (8) "The clustering revealed that inhibitory and excitatory classes generally have different spatial properties (Figure 1E, left), although the salt-and-pepper, sparse nature of e10 (Nts+) cells is more similar to inhibitory cells than other excitatory classes".

      The references to Figure 1E's should be to Figure 2E.

      We thank the reviewer for identifying the mistake, and we have now amended it.

      (9) "Comparison of the proportion of all cells that are cluster X vs projection neurons labelled by CTB that are cluster X". Please explain cluster X in this context.

      We have now rephrased this sentence in the figure legend for clarity.

      (10) Figure 3 - figure supplement 3: There appears to be quite a bit of heterogeneity in the patterns of activity across clusters even within behavioral contexts (e.g. the bottom 2 animals paired with females). It might be worth commenting on (or quantifying) whether there were any evident differences in the social behaviors observed (e.g. mating or not?) in individuals demonstrating these patterns.

      We thank the reviewer for this observation. We unfortunately did not quantify the behaviors, but we agree that more work is needed to link the pattern of c-fos activity with quantitatively measured behavioral variables. We note, however, that animals that did not display the anticipated social behaviours (as described in the materials and methods) were excluded from the in situ transcriptomic profiling work.

    1. The printing press and the Gutenberg Bible did not cause the Protestant Reformation

      Marwick does not agree with the theory of technological determinism, which says that technology alone creates societal development. With AI, social media, and automation, this is an important reminder today. People often say that technology will "change" society, but its impact always depends on the choices people make, how accessible it is, and the power structures that are in place. Just as the printing press needed social movements to matter, current technology needs moral and political direction to drive meaningful progress.

    2. The Person of the Year: You.

      This comment shows how people were hopeful at first that social media and user-generated content would give regular people more influence. Time's "Person of the Year" for 2006 was a symbol of digital democracy since anybody could publish, connect, and be seen. But when you look back, the promise of empowerment hasn't always worked out. People can speak out on platforms like YouTube, Instagram, and TikTok, but they are controlled by algorithms that favor commerce and influence over real equality. What started as a revolution of "you" often transformed into a way to make money from your data and attention.

    1. Reviewer #1 (Public review):

      In the current article, Octavia Soegyono and colleagues study "The influence of nucleus accumbens shell D1 and D2 neurons on outcome-specific Pavlovian instrumental transfer", building on extensive findings from the same lab. While there is a consensus about the specific involvement of the Shell part of the Nucleus Accumbens (NAc) in specific stimulus-based actions in choice settings (and not in General Pavlovian instrumental transfer - gPIT, as opposed to the Core part of the NAc), mechanisms at the cellular and circuitry levels remain to be explored. In the present work, using sophisticated methods (rat Cre-transgenic lines from both sexes, optogenetics and the well-established behavioral paradigm outcome-specific PIT - sPIT), Octavia Soegyono and colleagues decipher the differential contribution of dopamine receptors D1 and D2 expressing-spiny projection neurons (SPNs).

      After validating the viral strategy and the specificity of the targeting (immunochemistry and electrophysiology), the authors demonstrate that while both NAc Shell D1- and D2-SPNs participate in mediating sPIT, NAc Shell D1-SPNs projections to the Ventral Pallidum (VP, previously demonstrated as crucial for sPIT), but not D2-SPNs, mediates sPIT. They also show that these effects were specific to stimulus-based actions, as value-based choices were left intact in all manipulations.

      This is a well-designed study and the results are well supported by the experimental evidence. The paper is extremely pleasant to read and adds to the current literature.

      Comments on revisions:

      We thank the authors for their detailed responses and for addressing our comments and concerns.

      To further improve consistency and transparency, we kindly request that the authors provide, for Supplemental Figures S1-S4, panels E (raw data for lever presses during the PIT test), the individual data points together with the corresponding statistical analyses in the figure legends.

      In addition, regarding Supplemental Figure S3, panel E, we note the absence of a PIT effect in the eYFP group under the ON condition, which appears to differ from the net response reported in the main Figure 5, panel B. Could the authors clarify this apparent discrepancy?

      We also note a discrepancy between the authors' statement in their response ("40 rats excluded based on post-mortem analyses") and the number of excluded animals reported in the Materials and Methods section, which adds up to 47. We kindly ask the authors to clarify this point for consistency.

      Finally, as a minor point, we suggest indicating the total number of animals used in the study in the Materials and Methods section.

    2. Reviewer #2 (Public review):

      Summary:

      This manuscript by Soegyono et al. describes a series of experiments designed to probe the involvement of dopamine D1 and D2 neurons within the nucleus accumbens shell in outcome-specific Pavlovian-instrumental transfer (osPIT), a well-controlled assay of cue-guided action selection based on congruent outcome associations. They used an optogenetic approach to phasically silence NAc shell D1 (D1-Cre mice) or D2 (A2a-Cre mice) neurons during a subset of osPIT trials. Both manipulations disrupted cue-guided action selection but had no effects on negative control measures/tasks (concomitant approach behavior, separate value-guided choice task), nor were any osPIT impairments found in reporter-only control groups. Separate experiments revealed that selective inhibition of NAc shell D1 but not D2 inputs to ventral pallidum was required for osPIT expression, thereby advancing understanding of the basal ganglia circuitry underpinning this important aspect of decision making.

      Strengths:

      The combinatorial viral and optogenetic approaches used here were convincingly validated through anatomical tract-tracing and ex vivo electrophysiology. The behavioral assays are sophisticated and well-controlled to parse cue- and value-guided action selection. The inclusion of reporter-only control groups is rigorous and rules out nonspecific effects of the light manipulation. The findings are novel and address a critical question in the literature. Prior work using less decisive methods had implicated NAc shell D1 neurons in osPIT but suggested that D2 neurons may not be involved. The optogenetic manipulations used in the current study provide a more direct test of their involvement and convincingly demonstrate that both populations play an important role. Prior work had also implicated NAc shell connections to ventral pallidum in osPIT, but the current study reveals the selective involvement of D1 but not D2 neurons in this circuit. The authors do a good job of discussing their findings, including their nuanced interpretation that NAc shell D2 neurons may contribute to osPIT through their local regulation of NAc shell microcircuitry.

      Weaknesses:

      The current study exclusively used an optogenetic approach to probe the function of D1 and D2 NAc shell neurons. Providing a complementary assessment with chemogenetics or other appropriate methods would strengthen conclusions, particularly the novel demonstration for D2 NAc shell involvement. Likewise, the null result of optically inhibiting D2 inputs to ventral pallidum leaves open the possibility that a more complete or sustained disruption of this pathway may have impaired osPIT.

      Conclusions:

      The research described here was successful in providing critical new insights into the contributions of NAc D1 and D2 neurons in cue-guided action selection. The authors' data interpretation and conclusions are well reasoned and appropriate. They also provide a thoughtful discussion of study limitations and implications for future research. This research is therefore likely to have a significant impact on the field.

      Comments on revisions:

      I have reviewed the rebuttal and revised manuscript and have no remaining concerns.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      In the current article, Octavia Soegyono and colleagues study "The influence of nucleus accumbens shell D1 and D2 neurons on outcome-specific Pavlovian instrumental transfer", building on extensive findings from the same lab. While there is a consensus about the specific involvement of the Shell part of the Nucleus Accumbens (NAc) in specific stimulus-based actions in choice settings (and not in General Pavlovian instrumental transfer - gPIT, as opposed to the Core part of the NAc), mechanisms at the cellular and circuitry levels remain to be explored. In the present work, using sophisticated methods (rat Cre-transgenic lines from both sexes, optogenetics, and the well-established behavioral paradigm outcome-specific PIT - sPIT), Octavia Soegyono and colleagues decipher the differential contribution of dopamine receptors D1 and D2 expressing spiny projection neurons (SPNs).

      After validating the viral strategy and the specificity of the targeting (immunochemistry and electrophysiology), the authors demonstrate that while both NAc Shell D1- and D2-SPNs participate in mediating sPIT, NAc Shell D1-SPN projections to the Ventral Pallidum (VP, previously demonstrated as crucial for sPIT), but not D2-SPNs, mediate sPIT. They also show that these effects were specific to stimulus-based actions, as value-based choices were left intact in all manipulations.

      This is a well-designed study, and the results are well supported by the experimental evidence. The paper is extremely pleasant to read and adds to the current literature.

      We thank the Reviewer for their positive assessment. 

      Reviewer 2 (Public Review):

      Summary: 

      This manuscript by Soegyono et al. describes a series of experiments designed to probe the involvement of dopamine D1 and D2 neurons within the nucleus accumbens shell in outcome-specific Pavlovian-instrumental transfer (osPIT), a well-controlled assay of cue-guided action selection based on congruent outcome associations. They used an optogenetic approach to phasically silence NAc shell D1 (D1-Cre mice) or D2 (A2a-Cre mice) neurons during a subset of osPIT trials. Both manipulations disrupted cue-guided action selection but had no effects on negative control measures/tasks (concomitant approach behavior, separate value-guided choice task), nor were any osPIT impairments found in reporter-only control groups. Separate experiments revealed that selective inhibition of NAc shell D1 but not D2 inputs to ventral pallidum was required for osPIT expression, thereby advancing understanding of the basal ganglia circuitry underpinning this important aspect of decision making.

      Strengths: 

      The combinatorial viral and optogenetic approaches used here were convincingly validated through anatomical tract-tracing and ex vivo electrophysiology. The behavioral assays are sophisticated and well-controlled to parse cue- and value-guided action selection. The inclusion of reporter-only control groups is rigorous and rules out nonspecific effects of the light manipulation. The findings are novel and address a critical question in the literature. Prior work using less decisive methods had implicated NAc shell D1 neurons in osPIT but suggested that D2 neurons may not be involved. The optogenetic manipulations used in the current study provide a more direct test of their involvement and convincingly demonstrate that both populations play an important role. Prior work had also implicated NAc shell connections to ventral pallidum in osPIT, but the current study reveals the selective involvement of D1 but not D2 neurons in this circuit. The authors do a good job of discussing their findings, including their nuanced interpretation that NAc shell D2 neurons may contribute to osPIT through their local regulation of NAc shell microcircuitry.

      We thank the Reviewer for their positive assessment. 

      Weaknesses: 

      The current study exclusively used an optogenetic approach to probe the function of D1 and D2 NAc shell neurons. Providing a complementary assessment with chemogenetics or other appropriate methods would strengthen conclusions, particularly the novel demonstration of D2 NAc shell involvement. Likewise, the null result of optically inhibiting D2 inputs to the ventral pallidum leaves open the possibility that a more complete or sustained disruption of this pathway may have impaired osPIT.

      We acknowledge the reviewer's valuable suggestion that demonstrating NAc-S D1- and D2-SPNs engagement in outcome-specific PIT through another technique would strengthen our optogenetic findings. Several approaches could provide this validation. Chemogenetic manipulation, as the reviewer suggested, represents one compelling option. Alternatively, immunohistochemical assessment of phosphorylated histone H3 at serine 10 (P-H3) offers another promising avenue, given its established utility in reporting striatal SPN plasticity in the dorsal striatum (Matamales et al., 2020). We hope to complete such an assessment in future work since it would address the limitations of previous work that relied solely on ERK1/2 phosphorylation measures in NAc-S SPNs (Laurent et al., 2014). The manuscript was modified to report these future avenues of research (page 12).

      Regarding the null result from optical silencing of D2 terminals in the ventral pallidum, we agree with the reviewer's assessment. While we acknowledge this limitation in the current manuscript (page 13), we aim to address this gap in future studies to provide a more complete mechanistic understanding of the circuit.

      Reviewer 3 (Public Review):

      Summary:

      The authors present data demonstrating that optogenetic inhibition of either D1- or D2-MSNs in the NAc Shell attenuates expression of sensory-specific PIT while largely sparing value-based decisions on an instrumental task. They also provide evidence that SS-PIT depends on D1-MSN projections from the NAc-Shell to the VP, whereas projections from D2-MSNs to the VP do not contribute to SS-PIT.

      Strengths:

      This is clearly written. The evidence largely supports the authors' interpretations, and these effects are somewhat novel, so they help advance our understanding of PIT and NAc-Shell function.

      We thank the Reviewer for their positive assessment. 

      Weaknesses:

      I think the interpretation of some of the effects (specifically the claim that D1-MSNs do not contribute to value-based decision making) is not fully supported by the data presented.

      We appreciate the reviewer's comment regarding the marginal attenuation of value-based choice observed following NAc-S D1-SPN silencing. While this manipulation did produce a slight reduction in choice performance, the behavior remained largely intact. We are hesitant to interpret this marginal effect as evidence for a direct role of NAc-S D1-SPNs in value-based decision-making, particularly given the substantial literature demonstrating that NAc-S manipulations typically preserve such choice behavior (Corbit et al., 2001; Corbit & Balleine, 2011; Laurent et al., 2012). Furthermore, previous work has shown that NAc-S D1 receptor blockade impairs outcome-specific PIT while leaving value-based choice unaffected (Laurent et al., 2014). We favor an alternative explanation for our observed marginal reduction. As documented in Supplemental Figure 1, viral transduction extended slightly into the nucleus accumbens core (NAc-C), a region established as critical for value-based decision-making (Corbit et al., 2001; Corbit & Balleine, 2011; Laurent et al., 2012; Parkes et al., 2015). The marginal impairment may therefore reflect inadvertent silencing of a small number of NAc-C D1-SPNs rather than a functional contribution from NAc-S D1-SPNs. Future studies specifically targeting larger NAc-C D1-SPN populations would help clarify this possibility and provide definitive resolution of this question.

      Reviewer 1 (Recommendations for the Author):

      My main concerns and comments are listed below.

      (1) Could the authors provide the "raw" data of the PIT tests, such as PreSame vs Same vs PreDifferent vs Different? Could the authors clarify how the Net responding was calculated? Was it Same minus PreSame & Different minus PreDifferent, or was the average of PreSame and PreDifferent used in this calculation?

      The raw data for PIT testing across all experiments are now included in the Supplemental Figures (Supplemental Figures S1E, S2E, S3E, and S4E). Baseline responding was quantified as the average number of lever presses per minute for both actions during the two-minute period (i.e., average of PreSame and PreDifferent) preceding each stimulus presentation. This methodology has been clarified in the revised manuscript (page 7).

      (2) While both sexes are utilized in the current study, no statistical analysis is provided. Can the authors please comment on this point and provide these analyses (for both training and tests)?

      As noted in the original manuscript, the final sample sizes for female and male rats were insufficient to provide adequate statistical power for sex-based analyses (page 15). To address this limitation, we have now cited a previous study from our laboratory (Burton et al., 2014) that conducted such analyses with sufficient power in identical behavioural tasks. That study identified only marginal sex differences in performance, with female rats exhibiting slightly higher magazine entry rates during Pavlovian conditioning. Importantly, no differences were observed in outcome-specific PIT or value-based choice performance between sexes.

      (3) Regarding Figure 1 - Anterograde tracing in D1-Cre and A2a-Cre rats (from line 976), I have one major and one minor question:

      (3.1) I do not understand the rationale of showing anterograde tracing from the Dorsal Striatum (DS) as this region is not studied in the current work. Moreover, sagittal micrographs of D1-Cre and A2a-Cre would be relevant here. Could the authors please provide these micrographs and explain the rationale for doing tracing in DS?

      We included dorsal striatum (DS) tracing data as a reference because the projection patterns of D1 and D2 SPNs in this region are well-established and extensively characterized, in contrast to the more limited literature on these cell types in the NAc-S. Regarding the comment about sagittal micrographs, we are uncertain of the specific concern as these images are presented in Figure 1B.

      If the reviewer is requesting sagittal micrographs for NAc-S anterograde tracing, we did not employ this approach because: (1) the NAc-S and ventral pallidum are anatomically adjacent regions and (2) the medial-lateral coordinates of the ventral pallidum and lateral hypothalamus do not align optimally with those of the NAc-S, limiting the utility of sagittal analysis for these projections.

      (3.2) There is no description about how the quantifications were done: manually? Automatically? What script or plugin was used? If automated, what were the thresholding conditions? How many brain sections along the anteroposterior axis? What was the density of these subpopulations? Can the authors include a methodological section to address this point?

      We apologize for the omission of quantification methods used to assess viral transduction specificity. This methodological description has now been added to the revised manuscript (page 22). Briefly, we employed a manual procedure in two sections per rat, and cell counts were completed in a defined region of interest located around the viral infusion site.

      (4) Lex A & Hauber (2008) Dopamine D1 and D2 receptors in the nucleus accumbens core and shell mediate Pavlovian-instrumental transfer. Learning & memory 15:483- 491, should be cited and discussed. It also seems that the contribution of the main dopaminergic source of the brain, the ventral tegmental area, is not cited, while it has been investigated in PIT in at least 3 studies regarding sPIT only, notably the VP-VTA pathway (Leung & Balleine 2015, accurately cited already).

      We did not include the Lex & Hauber (2008) study because its experimental design (single lever and single outcome) prevents differentiation between the effects of Pavlovian stimuli on action performance (general PIT) versus action selection (outcome-specific PIT, as examined in the present study). Drawing connections between their findings and our results would require speculative interpretations regarding whether their observed effects reflect general or outcome-specific PIT mechanisms, which could distract from the core findings reported in the article.

      Several studies examining the role of the VTA in outcome-specific PIT were referenced in the manuscript's introduction. Following the reviewer's recommendation, these references have also been incorporated into the discussion section (page 13). 

      (5) While not directly the focus of this study, it would be interesting to highlight the accumbens dissociation between General vs Specific PIT, and how the dopaminergic system (differentially?) influences both forms of PIT.

      We agree with the reviewer that the double dissociation between nucleus accumbens core/shell function and general/specific PIT is an interesting topic. However, the present manuscript does not examine this dissociation, the nucleus accumbens core, or general PIT. Similarly, our study does not directly investigate the dopaminergic system per se. We believe that discussing these topics would distract from our core findings and substantially increase manuscript length without contributing novel data directly relevant to these areas. 

      (6) While the authors indicate that conditioned responses to the auditory stimuli (magazine visits) are preserved in all groups, suggesting intact sensitivity to the general motivational properties of reward-predictive stimuli (lines 344, 360), the authors cannot draw conclusions about the specificity of this behavior, i.e., whether the subject uses a mental representation of O1 when experiencing S1, leading to magazine visits to retrieve O1 (and likewise for S2-O2), or not. Two food ports would be needed to address this question; also, the authors should comment on the fact that competition between instrumental & Pavlovian responses does not explain the deficits observed.

      We agree with the Reviewer that magazine entry data cannot be used to draw conclusions about specificity, and we do not make such claims in our manuscript. We are therefore unclear about the specific concern being raised. Following the Reviewer’s recommendation, we have commented on the fact that response competition could not explain the results obtained (page 11, see also supplemental discussion). 

      The minor comments are listed below.

      (7) A high number of rats were excluded (> 32 total), and the number of rats excluded for NAc-S D1-SPNs-VP is not indicated.

      We apologize for omitting the number of rats excluded from the experiment examining NAc-S D1-SPN projections to the ventral pallidum. This information has been added to the revised manuscript (page 22).

      (7.1) Can authors please comment on the elevated number of exclusions?

      A total of 133 rats were used across the reported experiments, with 40 rats excluded based on post-mortem analyses. This represents an attrition rate of approximately 30%, which we consider reasonable given that most animals received two separate viral infusions and two separate fiber-optic cannula implantations, and that the inclusion of both female and male rats contributed some variability in coordinates and thus in targeting.

      (7.2) Can authors please present the performance of these animals during the tasks (OFF conditions, and for control ones, both ON & OFF conditions)?

      Rats were excluded after assessing the spread of viral infusions, placement of fibre-optic cannulas and potential damage due to the surgical procedures (page 21). The requested data are presented below and plotted in the same manner as in Figures 3-6. The pattern of performance in excluded animals was highly variable. 

      Author response image 1.

       

      (8) For tracing, only males were used, and for electrophysiology, only females were used.

      (8.1) Can authors please comment on not using both sexes in these experiments? 

      We agree that equal allocation of female and male rats in the experiments presented in Figures 1-2 would have been preferable. Animal availability was the sole factor determining these allocations. Importantly, both female and male D1-Cre and A2A-Cre rats were used for the NAc-S tracing studies, and no sex differences were observed in the projection patterns. The article describing the two transgenic lines of rats did not report any sex difference (Pettibone et al., 2019).

      (8.2) Is there evidence in the literature that the electrophysiological properties of female versus male SPNs could diNer?

      The literature indicates that there is no sex difference in the electrophysiological properties of NAc-S SPNs (Cao et al., 2018; Willett et al., 2016).

      (8.3) It seems like there is a discrepancy between the number of animals used as presented in the Figure 2 legend versus what is described in the main text. In the Figure legend, I understand that 5 animals were used for D1-Cre/DIO-eNpHR3.0 validation, and 7 animals for A2a-Cre/DIO-eNpHR3.0; however, the main text indicates the use of a total of 8 animals instead of the 12 presented in the Figure legend. Can authors please address this mismatch or clarify?

      The number of rats reported in the main text and Figure 2 legend was correct. However, recordings sometimes involved multiple cells from the same animal, and this aspect of the data was incorrectly reported and generated confusion. We have clarified the numbers in both the main text and Figure 2 legend to distinguish between animal counts and cell counts. 

      (9) Overall, in the study, have the authors checked for outliers?

      Performance across all training and testing stages was inspected to identify potential behavioral outliers in each experiment. Abnormal performance during a single session within a multi-session stage was not considered sufficient grounds for outlier designation. Based on these criteria, no subjects remaining after post-mortem analyses exhibited performance patterns warranting exclusion through statistical outlier analysis. However, we have conducted the specific analyses requested by the Reviewer, as described below.

      (9.1) In Figure 3, it seems that one female in the eYFP group, in the OFF situation, for the different condition, has a higher level of responding than the others. Can authors please confirm or refute this visual observation with the appropriate statistical analysis?

      Statistical analysis (z-score) confirmed the reviewer's observation regarding responding on the different action in the OFF condition for this subject (|z| = 2.58). Similar extreme responding was observed in the ON condition (|z| = 2.03). Analyzing responding on the different action in isolation is not informative in the context of outcome-specific PIT. Additional analyses revealed |z| < 2 when examining the magnitude of choice discrimination in outcome-specific PIT (i.e., net same versus net different responding) in both ON and OFF conditions. Furthermore, this subject showed |z| < 2 across all other experimental stages. Based on these analyses, we conclude that the subject should be kept in all analyses.

      (9.2) In Figure 5, it seems that one male, in the ON situation, in the different condition, has a considerably higher level of responding - is this subject an outlier? If so, how does it affect the statistical analysis after being removed? And which data point is this subject in the OFF condition?

      The reviewer has identified two different male rats infused with the eNpHR3.0 virus and has asked for a closer examination of their performance.

      The first rat showed outlier-level responding on the different action in the ON condition (|z| = 2.89) but normal responding for all other measures across LED conditions (|z| < 2). Additional analyses revealed |z| = 2.55 when examining choice discrimination magnitude in outcome-specific PIT during the ON condition but not during the OFF condition (|z| = 0.62). This subject exhibited |z| < 2 across all other experimental stages.

      The second rat showed outlier-level responding on the same action in the OFF condition (|z| = 2.02) but normal responding for all other measures across LED conditions (|z| < 2). Additional analyses revealed |z| = 2.12 when examining choice discrimination magnitude in outcome-specific PIT during the OFF condition but not during the ON condition (|z| = 0.67). This subject also exhibited |z| < 2 across all other experimental stages.

      We excluded these two subjects and conducted the same analyses as described in the original manuscript. Baseline responding did not differ between groups (p = 0.14), allowing us to examine the net effect of the stimuli. Overall lever presses were greater in the eYFP rats (Group: F(1,16) = 6.08, p < 0.05; η<sup>2</sup> = 0.28) and were reduced by LED activation (LED: F(1,16) = 9.52, p < 0.01; η<sup>2</sup> = 0.44), and this reduction depended on the group considered (Group x LED: F(1,16) = 12.125, p < 0.001; η<sup>2</sup> = 0.43). Lever press rates were higher on the action earning the same outcome as the stimuli compared to the action earning the different outcome (Lever: F(1,16) = 49.32; η<sup>2</sup> = 0.76; p < 0.001), regardless of group (Group x Lever: p = 0.14). There was a Lever by LED light condition interaction (Lever x LED: F(1,16) = 5.25; η<sup>2</sup> = 0.24; p < 0.05) but no interaction between group, LED light condition, and Lever during the presentation of the predictive stimuli (p = 0.10). Given the significant Group x LED and Lever x LED interactions, additional analyses were conducted to determine the source of these interactions. In eYFP rats, LED activation had no effect (LED: p = 0.70) and lever presses were greater on the same action (Lever: F(1,9) = 23.94, p < 0.001; η<sup>2</sup> = 0.79) regardless of LED condition (LED x Lever: p = 0.72). By contrast, in eNpHR3.0 rats, lever presses were reduced by LED activation (LED: F(1,9) = 23.97, p < 0.001; η<sup>2</sup> = 0.73), were greater on the same action (Lever: F(1,9) = 16.920, p < 0.001; η<sup>2</sup> = 0.65), and the two factors interacted (LED x Lever: F(1,9) = 9.12, p < 0.01; η<sup>2</sup> = 0.50). These rats demonstrated outcome-specific PIT in the OFF condition (F(1,9) = 27.26, p < 0.001; η<sup>2</sup> = 0.75) but not in the ON condition (p = 0.08).

      Overall, excluding these two rats altered the statistical analyses, but both the original and revised analyses yielded the same outcome: silencing the NAc-S D1-SPN to VP pathway disrupted PIT. More importantly, we do not believe there are sufficient grounds to exclude the two rats identified by the reviewer. These animals did not display outlier-level responding across training stages or during the choice test. Their potential classification as outliers would be based on responding during only one LED condition and not the other, with notably opposite patterns between the two rats despite belonging to the same experimental group.

      (10) I think it would be appreciable if in the cartoons from Figure 5.A and 6.A, the SPNs neurons were color-coded as in the results (test plots) and the supplementary figures (histological color-coding), such as D1- in blue & D2-SPNs in red.

      Our current color-coding system uses blue for D1-SPNs transduced with eNpHR3.0 and red for D2-SPNs transduced with eNpHR3.0. The D1-SPNs and D2-SPNs shown in Figures 5A and 6A represent cells transduced with either eYFP (control) or eNpHR3.0 virus and therefore cannot be assigned the blue or red color, which is reserved for eNpHR3.0-transduced cells specifically. The micrographs in the Supplemental Figures maintain consistency with the color-coding established in the main figures.

      (11) As there are (relatively small) variations in the control performance in terms of net responding (from ~3 to ~7 responses per min), I wonder what would be the result of pooling eYFP groups from the first two experiments (Figures 3 & 4) and from the last two (Figures 5 & 6) - would the same statistical results stand or vary (as eYFP vs D1-Cre vs A2a-Cre rats)? In particular for Figures 3 & 4, with and without the potential outlier, if it's indeed an outlier.

      We considered the Reviewer’s recommendation but do not believe the requested analysis is appropriate. The Reviewer is requesting the pooling of data from subjects of distinct transgenic strains (D1-Cre and A2A-Cre rats) that underwent surgical and behavioral procedures at different time points, sometimes months apart. Each experiment was designed with necessary controls to enable adequate statistical analyses for testing our specific hypotheses.

      (12) Presence of cameras in operant cages is mentioned in methods, but no data is presented regarding recordings, though authors mention that they allow for real-time observations of behavior. I suggest removing "to record" or adding a statement about the fact that no videos were recorded or used in the present study.

      We have removed “to record” from the manuscript (page 18). 

      (13) In all supplementary Figures, "F" is wrongly indicated as "E".

      We thank the Reviewer for reporting these errors, which have been corrected. 

      (14) While the authors acknowledge that the efficacy of optogenetic inhibition of terminals is questionable, I think that more details are required to address this point in the discussion (existing literature?). Maybe the combination of an anterograde tracer from SPNs to VP, to label VP neurons (to facilitate patching these neurons), and the Cre-dependent inhibitory opsin in the NAc Shell, with optogenetic illumination at the level of the VP, along with electrophysiological recordings of VP neurons, could help address this question but may, reasonably, seem challenging technically.

      Our manuscript does not state that optogenetic inhibition of terminals is questionable. It acknowledges that we do not provide any evidence about the efficacy of the approach. Regardless, we have provided additional details and suggestions to address this lack of evidence (page 13).

      (15) A nice addition could be an illustration of the proposed model (from line 374), but it may be unnecessary.

      We have carefully considered the reviewer's recommendation. The proposed model is detailed in three published articles, including one that is freely accessible, which we have cited when presenting the model in our manuscript (page 14). This reference should provide interested readers with easy access to a comprehensive illustration of the model.

      Reviewer 2 (Recommendations for the Author):

      As noted in my public comments, this is a truly excellent and compelling study. I have only a few minor comments.

      (1) I could not find the coordinates/parameters for the dorsal striatal AAV injections for that component of the tract tracing experiment.

      We apologize for this omission, which has now been corrected (page 16). 

      (2) Please add the final group sizes to the figure captions.

      We followed the Reviewer’s recommendation and added group sizes in the main figure captions. 

      (3) The discussion of group exclusions (p 21 line 637) seems to accidentally omit (n = X) the number of NAc-S D1-SPNs-VP mice excluded.

      We apologize for this omission, which has now been corrected (page 22). 

      (4) There were some labeling issues in the supplementary figures (perhaps elsewhere, too). Specifically, panel E was listed twice (once for F) in captions.

      We apologize for this error, which has now been corrected.  

(5) Inspection of the magazine entry data from PIT tests suggests that the optogenetic manipulations may have had some effects on this behavior and would encourage the authors to probe further. There was a significant group difference for D1-SPN inhibition and a marginal group effect for D2-SPNs. The fact that these effects were in opposite directions is intriguing, although not easily interpreted based on the canonical D1/D2 model. Of course, the effects are not specific to the light-on trials, but this could be due to carryover into light-off trials. An analysis of trial-order effects seems crucial for interpreting these effects. One might also consider normalizing for pre-test baseline performance. Response rates during Pavlovian conditioning seem to suggest that D2-eNpHR mice showed slightly higher conditioned responding during training, which contrasts with their low entry rates at test. I don't see any of this as problematic -- but more should be done to interpret these findings.

We thank the reviewer for raising this interesting point regarding magazine entry rates. Since these data are presented in the Supplemental Figures, we have added a section in the Supplemental Material file that elaborates on these findings. This section does not address trial-order effects, as trial order was fully counterbalanced in our experiments and the relevant statistical analyses would lack adequate power. Baseline normalization was not conducted because the reviewer's suggestion was based on their assumption that eNpHR3.0 rats in the D2-SPNs experiment showed slightly higher magazine entries during Pavlovian training. However, this was not the case. In fact, like the eNpHR3.0 rats in the D1-SPNs experiment, they tended to display lower magazine entries during training. The added section therefore focuses on the potential role of response competition during outcome-specific PIT tests. Although we concluded that response competition cannot explain our findings, we believe it may complicate interpretation of magazine entry behavior. Thus, we recommend that future studies examine the role of NAc-S SPNs using purely Pavlovian tasks. It is worth noting that we have recently completed experiments (unpublished) examining NAc-S D1- and D2-SPN silencing during stimulus presentation in a Pavlovian task identical to the one used here. Silencing of either SPN population had no effect on magazine entry behavior.

      Reviewer 3 (Recommendations for the Author):

      Broad comments:

Throughout the manuscript, the authors draw parallels between the effects established via pharmacological manipulations and those shown here with optogenetic manipulation. I understand using the pharmacological data to launch this investigation, but these two procedures address very different physiological questions. In the case of a pharmacological manipulation, the targets are receptors, wherever they are expressed, and in the case of D2 receptors, this means altering function in both pre-synaptically expressed autoreceptors and post-synaptically expressed D2 MSN receptors. In the case of an optogenetic approach, the target is a specific cell population with a high degree of temporal control. So I would just caution against comparing results from these types of studies too closely.

Related to this point is the consideration of the physiological relevance of the manipulation. Under normal conditions, dopamine acts at D1-like receptors to increase the probability of cell firing via Gs signaling. In contrast, dopamine binding of D2-like receptors decreases the cell's firing probability (signaling via Gi/o). Thus, shunting D1-MSN activation provides a clear impression of the role of these cells and, putatively, the role of dopamine acting on these cells. However, inhibiting D2-MSNs more closely mimics these cells' response to dopamine (though optogenetic manipulations are likely far more impactful than Gi signaling). All this is to say that when we consider the results presented here in Experiment 2, it might suggest that during PIT testing, normal performance may require a halting of DA release onto D2-MSNs. This is highly speculative, of course, just a thought worth considering.

We agree with the comments made by the Reviewer, and the original manuscript included statements acknowledging that pharmacological approaches are limited in their capacity to inform about the function of NAc-S SPNs (pages 4 and 9). As noted by the Reviewer, these limitations are especially salient when considering NAc-S D2-SPNs. Based on the Reviewer’s comment, we have modified our discussion to further underscore these limitations (page 12). Finally, we agree with the suggestion that PIT may require a halting of DA release onto D2-SPNs. This is consistent with the model presented, whereby D2-SPN function is required to trigger enkephalin release (page 13).

      Section-Specific Comments and Questions:

      Results:

      Anterograde tracing and ex vivo cell recordings in D1 Cre and A2a Cre rats: Why are there no statistics reported for the e-phys data in this section? Was this merely a qualitative demonstration? I realize that the A2a-Cre condition only shows 3 recordings, so I appreciate the limitations in analyzing the data presented.

      The reviewer is correct that we initially intended to provide a qualitative demonstration. However, we have now included statistical analyses for the ex vivo recordings. It is important to note that there were at least 5 recordings per condition, though overlapping data points may give the impression of fewer recordings in certain conditions. We have provided the exact number of recordings in both the main text (page 5) and figure legend. 

What does a trial-by-trial analysis look like? Because in addition to the effects of extinction, do you know if the responsiveness of the opsin to light stimulation is altered after repeated exposures, or whether the cells themselves become compromised in any way with repeated light-inhibition, particularly given the relatively long 2-min duration of the trial.

The Reviewer raises an interesting point, and we provide complete trial-by-trial data for each experiment below. As identified by the Reviewer, there is some evidence for extinction, although it remained modest. Importantly, the data suggest that light stimulation did not affect the physiology of the targeted cells. In eNpHR3.0 rats, performance across OFF trials remained stable (both for Same and Different) even though they were preceded by ON trials, indicating no carryover effects from optical stimulation.

      Author response image 2.

       

The statistics for the choice test are not reported for eNpHR-D1-Cre rats, but do show a weakening of the instrumental devaluation effect "Group x Lever x LED: F1,18 = 10.04, p < 0.01, η² = 0.36". The post hoc comparisons showed that all groups showed devaluation, but it is evident that there is a weakening of this effect when the LED was on (η² = 0.41) vs off (η² = 0.78), so I think the authors should soften the claim that NAcS-D1s are not involved in value-based decision-making. (Also, there is a typo in the legend in Figure S1, where the caption for panel "F" is listed as "E".) I also think that this could be potentially interesting in light of the fact that with circuit manipulation, this same weakening of the instrumental devaluation effect was not observed. To me, this suggests that D1-NAcS that project to a different region (not VP) contribute to value-based decision making.

      This comment overlaps with one made in the Public Review, for which we have already provided a response. Given its importance, we have added a section addressing this point in the supplemental discussion of the Supplementary Material file, which aligns with the location of the relevant data. The caption labelling error has been corrected.

      Materials and Methods:

      Subjects:

      Were these heterozygous or homozygous rats? If hetero, what rats were used for crossbreeding (sex, strain, and vendor)? Was genotyping done by the lab or outsourced to commercial services? If genotyping was done within the lab, please provide a brief description of the protocol used. How was food restriction established and maintained (i.e., how many days to bring weights down, and was maintenance achieved by rationing or by limiting ad lib access to food for some period in the day)?

The information requested by the Reviewer has been added to the subjects section (pages 15-16).

      Were rats pair/group housed after implantation of optic fibers?

We have clarified that rats were group housed throughout (see subjects section; pages 15-16).

      Behavioral Procedures:

      How long did each 0.2ml sucrose infusion take? For pellets, for each US delivery, was it a single pellet or two in quick succession?

      We have modified the method section to indicate that the sucrose was delivered across 2 seconds and that a single pellet was provided (page 17). 

      The CS to ITI duration ratio is quite low. Is there a reason such a short ratio was used in training?

      These parameters are those used in all our previous experiments on outcome-specific PIT. There is no specific reason for using such a ratio, except that it shortens the length of the training session. 

      Relative to the end of training, when were the optical implantation surgeries conducted, and how much recovery time was given before initiating reminder training and testing?

      Fibre-optic implantation was conducted 3-4 days after training and another 3-4 days were given for recovery. This has been clarified in the Materials and methods section (pages 15-16).

      I think a diagram or schematic showing the timeline for surgeries, training, and testing would be helpful to the audience.

      We opted for a text-based experimental timeline rather than a diagram due to slight temporal variations across experiments (page 15).

      On trials, when the LED was on, was light delivered continuously or pulsed? Do these opto-receptors 'bleach' within such a long window?

      We apologize for the lack of clarity; the light was delivered continuously. We have modified the manuscript (pages 6 and 19) and figure legend accordingly. The postmortem analysis did not provide evidence for photobleaching (Supplemental Figures) and as noted above, the behavioural results do not indicate any negative physiological impact on cell function.  

      Immunofluorescence: The blocking solution used during IHC is described as "NHS"; is this normal horse serum?

      The Reviewer is correct; NHS stands for normal horse serum. This has been added (page 21). 

      Microscopy and imaging:

For the description of rats excluded due to placement or viral spread problems, an n=X is listed for the NAc S D1 SPNs --> VP silencing group. Is this a typo, or was that meant to read as n=0? Also, was there a major sex difference in the attrition rate? If so, I think reporting the sex of the lost subjects might be beneficial to the scientific community, as it might reflect a need for better guidance on sex-specific coordinates for targeting small nuclei.

We apologize for the error regarding the number of excluded animals. This error has been corrected (page 23). There were no major sex differences in the attrition rate. The manuscript has been updated to provide information about the sex of excluded animals (page 23).

      References

Cao, J., Willett, J. A., Dorris, D. M., & Meitzen, J. (2018). Sex Differences in Medium Spiny Neuron Excitability and Glutamatergic Synaptic Input: Heterogeneity Across Striatal Regions and Evidence for Estradiol-Dependent Sexual Differentiation. Front Endocrinol (Lausanne), 9, 173. https://doi.org/10.3389/fendo.2018.00173

Corbit, L. H., Muir, J. L., & Balleine, B. W. (2001). The role of the nucleus accumbens in instrumental conditioning: Evidence of a functional dissociation between accumbens core and shell. J Neurosci, 21(9), 3251-3260. http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=11312310&retmode=ref&cmd=prlinks

Corbit, L. H., & Balleine, B. W. (2011). The general and outcome-specific forms of Pavlovian-instrumental transfer are differentially mediated by the nucleus accumbens core and shell. J Neurosci, 31(33), 11786-11794. https://doi.org/10.1523/JNEUROSCI.2711-11.2011

      Laurent, V., Bertran-Gonzalez, J., Chieng, B. C., & Balleine, B. W. (2014). δ-Opioid and Dopaminergic Processes in Accumbens Shell Modulate the Cholinergic Control of Predictive Learning and Choice. J Neurosci, 34(4), 1358-1369. https://doi.org/10.1523/JNEUROSCI.4592-13.2014

Laurent, V., Leung, B., Maidment, N., & Balleine, B. W. (2012). μ- and δ-opioid-related processes in the accumbens core and shell differentially mediate the influence of reward-guided and stimulus-guided decisions on choice. J Neurosci, 32(5), 1875-1883. https://doi.org/10.1523/JNEUROSCI.4688-11.2012

Matamales, M., McGovern, A. E., Mi, J. D., Mazzone, S. B., Balleine, B. W., & Bertran-Gonzalez, J. (2020). Local D2- to D1-neuron transmodulation updates goal-directed learning in the striatum. Science, 367(6477), 549-555. https://doi.org/10.1126/science.aaz5751

Parkes, S. L., Bradfield, L. A., & Balleine, B. W. (2015). Interaction of insular cortex and ventral striatum mediates the effect of incentive memory on choice between goal-directed actions. J Neurosci, 35(16), 6464-6471. https://doi.org/10.1523/JNEUROSCI.4153-14.2015

      Pettibone, J. R., Yu, J. Y., Derman, R. C., Faust, T. W., Hughes, E. D., Filipiak, W. E., Saunders, T. L., Ferrario, C. R., & Berke, J. D. (2019). Knock-In Rat Lines with Cre Recombinase at the Dopamine D1 and Adenosine 2a Receptor Loci. eNeuro, 6(5). https://doi.org/10.1523/ENEURO.0163-19.2019

Willett, J. A., Will, T., Hauser, C. A., Dorris, D. M., Cao, J., & Meitzen, J. (2016). No Evidence for Sex Differences in the Electrophysiological Properties and Excitatory Synaptic Input onto Nucleus Accumbens Shell Medium Spiny Neurons. eNeuro, 3(1), ENEURO.0147-15.2016. https://doi.org/10.1523/ENEURO.0147-15.2016

    1. Reviewer #1 (Public review):

      In the current article, Octavia Soegyono and colleagues study "The influence of nucleus accumbens shell D1 and D2 neurons on outcome-specific Pavlovian instrumental transfer", building on extensive findings from the same lab. While there is a consensus about the specific involvement of the Shell part of the Nucleus Accumbens (NAc) in specific stimulus-based actions in choice settings (and not in General Pavlovian instrumental transfer - gPIT, as opposed to the Core part of the NAc), mechanisms at the cellular and circuitry levels remain to be explored. In the present work, using sophisticated methods (rat Cre-transgenic lines from both sexes, optogenetics, and the well-established behavioral paradigm outcome-specific PIT-sPIT), Octavia Soegyono and colleagues decipher the differential contribution of dopamine receptors D1 and D2 expressing spiny projection neurons (SPNs).

      After validating the viral strategy and the specificity of the targeting (immunochemistry and electrophysiology), the authors demonstrate that while both NAc Shell D1- and D2-SPNs participate in mediating sPIT, NAc Shell D1-SPNs projections to the Ventral Pallidum (VP, previously demonstrated as crucial for sPIT), but not D2-SPNs, mediates sPIT. They also show that these effects were specific to stimulus-based actions, as value-based choices were left intact in all manipulations.

      This is a well-designed study, and the results are well supported by the experimental evidence. The paper is extremely pleasant to read and adds to the current literature.

    2. Reviewer #2 (Public review):

      Summary:

      This manuscript by Soegyono et al. describes a series of experiments designed to probe the involvement of dopamine D1 and D2 neurons within the nucleus accumbens shell in outcome-specific Pavlovian-instrumental transfer (osPIT), a well-controlled assay of cue-guided action selection based on congruent outcome associations. They used an optogenetic approach to phasically silence NAc shell D1 (D1-Cre mice) or D2 (A2a-Cre mice) neurons during a subset of osPIT trials. Both manipulations disrupted cue-guided action selection but had no effects on negative control measures/tasks (concomitant approach behavior, separate valued guided choice task), nor were any osPIT impairments found in reporter-only control groups. Separate experiments revealed that selective inhibition of NAc shell D1 but not D2 inputs to ventral pallidum was required for osPIT expression, thereby advancing understanding of the basal ganglia circuitry underpinning this important aspect of decision making.

      Strengths:

      The combinatorial viral and optogenetic approaches used here were convincingly validated through anatomical tract-tracing and ex vivo electrophysiology. The behavioral assays are sophisticated and well-controlled to parse cue and value-guided action selection. The inclusion of reporter-only control groups is rigorous and rules out nonspecific effects of the light manipulation. The findings are novel and address a critical question in the literature. Prior work using less decisive methods had implicated NAc shell D1 neurons in osPIT but suggested that D2 neurons may not be involved. The optogenetic manipulations used in the current study provide a more direct test of their involvement and convincingly demonstrate that both populations play an important role. Prior work had also implicated NAc shell connections to ventral pallidum in osPIT, but the current study reveals the selective involvement of D1 but not D2 neurons in this circuit. The authors do a good job of discussing their findings, including their nuanced interpretation that NAc shell D2 neurons may contribute to osPIT through their local regulation of NAc shell microcircuitry.

      Weaknesses:

      The current study exclusively used an optogenetic approach to probe the function of D1 and D2 NAc shell neurons. Providing a complementary assessment with chemogenetics or other appropriate methods would strengthen conclusions, particularly the novel demonstration of D2 NAc shell involvement. Likewise, the null result of optically inhibiting D2 inputs to the ventral pallidum leaves open the possibility that a more complete or sustained disruption of this pathway may have impaired osPIT.

    3. Author response:

      Reviewer #1 (Public review):

      In the current article, Octavia Soegyono and colleagues study "The influence of nucleus accumbens shell D1 and D2 neurons on outcome-specific Pavlovian instrumental transfer", building on extensive findings from the same lab. While there is a consensus about the specific involvement of the Shell part of the Nucleus Accumbens (NAc) in specific stimulus-based actions in choice settings (and not in General Pavlovian instrumental transfer - gPIT, as opposed to the Core part of the NAc), mechanisms at the cellular and circuitry levels remain to be explored. In the present work, using sophisticated methods (rat Cre-transgenic lines from both sexes, optogenetics, and the well-established behavioral paradigm outcome-specific PIT-sPIT), Octavia Soegyono and colleagues decipher the differential contribution of dopamine receptors D1 and D2 expressing spiny projection neurons (SPNs).

      After validating the viral strategy and the specificity of the targeting (immunochemistry and electrophysiology), the authors demonstrate that while both NAc Shell D1- and D2-SPNs participate in mediating sPIT, NAc Shell D1-SPNs projections to the Ventral Pallidum (VP, previously demonstrated as crucial for sPIT), but not D2-SPNs, mediates sPIT. They also show that these effects were specific to stimulus-based actions, as value-based choices were left intact in all manipulations.

      This is a well-designed study, and the results are well supported by the experimental evidence. The paper is extremely pleasant to read and adds to the current literature.

      We thank the Reviewer for their positive assessment.

      Reviewer #2 (Public review):

      Summary:

      This manuscript by Soegyono et al. describes a series of experiments designed to probe the involvement of dopamine D1 and D2 neurons within the nucleus accumbens shell in outcome-specific Pavlovian-instrumental transfer (osPIT), a well-controlled assay of cue-guided action selection based on congruent outcome associations. They used an optogenetic approach to phasically silence NAc shell D1 (D1-Cre mice) or D2 (A2a-Cre mice) neurons during a subset of osPIT trials. Both manipulations disrupted cue-guided action selection but had no effects on negative control measures/tasks (concomitant approach behavior, separate valued guided choice task), nor were any osPIT impairments found in reporter-only control groups. Separate experiments revealed that selective inhibition of NAc shell D1 but not D2 inputs to ventral pallidum was required for osPIT expression, thereby advancing understanding of the basal ganglia circuitry underpinning this important aspect of decision making.

      Strengths:

      The combinatorial viral and optogenetic approaches used here were convincingly validated through anatomical tract-tracing and ex vivo electrophysiology. The behavioral assays are sophisticated and well-controlled to parse cue and value-guided action selection. The inclusion of reporter-only control groups is rigorous and rules out nonspecific effects of the light manipulation. The findings are novel and address a critical question in the literature. Prior work using less decisive methods had implicated NAc shell D1 neurons in osPIT but suggested that D2 neurons may not be involved. The optogenetic manipulations used in the current study provide a more direct test of their involvement and convincingly demonstrate that both populations play an important role. Prior work had also implicated NAc shell connections to ventral pallidum in osPIT, but the current study reveals the selective involvement of D1 but not D2 neurons in this circuit. The authors do a good job of discussing their findings, including their nuanced interpretation that NAc shell D2 neurons may contribute to osPIT through their local regulation of NAc shell microcircuitry.

      We thank the Reviewer for their positive assessment.

      Weaknesses:

      The current study exclusively used an optogenetic approach to probe the function of D1 and D2 NAc shell neurons. Providing a complementary assessment with chemogenetics or other appropriate methods would strengthen conclusions, particularly the novel demonstration of D2 NAc shell involvement. Likewise, the null result of optically inhibiting D2 inputs to the ventral pallidum leaves open the possibility that a more complete or sustained disruption of this pathway may have impaired osPIT.

      We acknowledge the reviewer's valuable suggestion that demonstrating NAc-S D1- and D2-SPN engagement in outcome-specific PIT through another technique would strengthen our optogenetic findings. Several approaches could provide this validation. Chemogenetic manipulation, as the reviewer suggested, represents one compelling option. Alternatively, immunohistochemical assessment of phosphorylated histone H3 at serine 10 (P-H3) offers another promising avenue, given its established utility in reporting striatal SPN plasticity in the dorsal striatum (Matamales et al., 2020). We hope to complete such an assessment in future work since it would address the limitations of previous work that relied solely on ERK1/2 phosphorylation measures in NAc-S SPNs (Laurent et al., 2014).

      Regarding the null result from optical silencing of D2 terminals in the ventral pallidum, we agree with the reviewer's assessment. While we acknowledge this limitation in the current manuscript (see discussion), we aim to address this gap in future studies to provide a more complete mechanistic understanding of the circuit.

      Reviewer #3 (Public review):

      Summary:

      The authors present data demonstrating that optogenetic inhibition of either D1- or D2-MSNs in the NAc Shell attenuates expression of sensory-specific PIT while largely sparing value-based decision on an instrumental task. They also provide evidence that SS-PIT depends on D1-MSN projections from the NAc-Shell to the VP, whereas projections from D2-MSNs to the VP do not contribute to SS-PIT.

      Strengths:

      This is clearly written. The evidence largely supports the authors' interpretations, and these effects are somewhat novel, so they help advance our understanding of PIT and NAc-Shell function.

      We thank the Reviewer for their positive assessment.

      Weaknesses:

      I think the interpretation of some of the effects (specifically the claim that D1-MSNs do not contribute to value-based decision making) is not fully supported by the data presented.

      We appreciate the reviewer's comment regarding the marginal attenuation of value-based choice observed following NAc-S D1-SPN silencing. While this manipulation did produce a slight reduction in choice performance, the behavior remained largely intact. We are hesitant to interpret this marginal effect as evidence for a direct role of NAc-S D1-SPNs in value-based decision-making, particularly given the substantial literature demonstrating that NAc-S manipulations typically preserve such choice behavior (Corbit & Balleine, 2011; Corbit et al., 2001; Laurent et al., 2012). Notably, previous work has shown that NAc-S D1 receptor blockade impairs outcome-specific PIT while leaving value-based choice unaffected (Laurent et al., 2014). We favor an alternative explanation for our observed marginal reduction. As documented in Supplemental Figure 1, viral transduction extended slightly into the nucleus accumbens core (NAc-C), a region established as critical for value-based decision-making (Corbit & Balleine, 2011; Corbit et al., 2001; Laurent et al., 2012). The marginal impairment may therefore reflect inadvertent silencing of a small NAc-C D1-SPN population rather than a functional contribution from NAc-S D1-SPNs. Future studies specifically targeting larger NAc-C D1-SPN populations would help clarify this possibility and provide definitive resolution of this question.

    1. Reflecting on Tate’s experience, we can see that what his two teachers personally defined as appropriate (or inappropriate) for young children guided their actions. We could say that for Ms. Tiana, Tate’s sense of self, of autonomy, and of belonging in the classroom were prioritized. At first glance, it seems that for Ms. Diane conformity to traditional gender expression was more important than Tate’s desire, warmth, or comfort—but perhaps she had not taken the time to examine her own biases or misconceptions surrounding gender. In any case, Ms. Diane’s actions reinforced expectations for conformity to the gender expressions typically promoted in our broader society.

      In my experience, this is often a generational gap, too. It's relatively recent that we can talk openly about this topic, and older teachers may have simply grown up in a different world. That's why an understanding of the basics of gender (as I mentioned above) is also necessary for teachers to even begin understanding how to reflect on their biases.

    1. Reviewer #2 (Public review):

      Summary:

      This work by den Bakker and Kloosterman contributes to the vast body of research exploring the dynamics governing the communication between the hippocampus (HPC) and the medial prefrontal cortex (mPFC) during spatial learning and navigation. Previous research showed that population activity of mPFC neurons is replayed during HPC sharp-wave ripple events (SWRs), which may therefore correspond to privileged windows for the transfer of learned navigation information from the HPC, where initial learning occurs, to the mPFC, which is thought to store this information long term. Indeed, it was also previously shown that the activity of mPFC neurons contains task-related information that can inform about the location of an animal in a maze, which can predict the animals' navigational choices. Here, the authors aim to show that the mPFC neurons that are modulated by HPC activity (SWRs and theta rhythms) are distinct from those "encoding" spatial information. This result could suggest that the integration of spatial information originating from the HPC within the mPFC may require the cooperation of separate sets of neurons.

      This observation may be useful to further extend our understanding of the dynamics regulating the exchange of information between the HPC and mPFC during learning. However, my understanding is that this finding is mainly based upon a negative result, which cannot be statistically proven by the failure to reject the null hypothesis. Moreover, in my reading, the rest of the paper mainly replicates phenomena that have already been described, with the original reports not correctly cited. My opinion is that the novel elements should be precisely identified and discussed, while the current phrasing in the manuscript, in most cases, leads readers to think that these results are new. Detailed comments are provided below.

      Major concerns:

      ORIGINAL COMMENT: (1) The main claim of the manuscript is that the neurons involved in predicting upcoming choices are not the neurons modulated by the HPC. This is based upon the evidence provided in Figure 5, which is a negative result that the authors employ to claim that predictive non-local representations in the mPFC are not linked to hippocampal SWRs and theta phase. However, it is important to remember that in a statistical test, the failure to reject the null hypothesis does not prove that the null hypothesis is true. Since this claim is so central in this work, the authors should use appropriate statistics to demonstrate that the null hypothesis is true. This can be accomplished by showing that there is no effect above some size that is so small that it would make the effect meaningless (see https://doi.org/10.1177/070674370304801108).

      AUTHOR RESPONSE: We would like to highlight a few important points here. (1) We indeed do not intend to claim that the SWR-modulated neurons are not at all involved in predicting upcoming choice, just that the SWR-unmodulated neurons may play a larger role. We have rephrased the title and abstract to make this clearer.

REVIEWER COMMENT: The title has been rephrased but still conveys the same substantive claim. The abstract sentence also does not clearly state what was found. Using "independently" in the new title continues to imply that SWR modulation and prediction of upcoming choices are separate phenomena. By contrast, in your response here in the rebuttal you state only that "SWR-unmodulated neurons may play a larger role," which is a much more tempered claim than what the manuscript currently argues. Why is this clarification not adopted in the article? Moreover, the main text continues to use the same arguments as before; beyond the cosmetic changes of title and abstract, the claim itself has not materially changed.

AUTHOR RESPONSE: (2) The hypothesis that we put forward is based not only on a negative effect, but on the findings that: the SWR-unmodulated neurons show higher spatial tuning (Fig 3b), more directional selectivity (Fig 3d), more frequent encoding of the upcoming choice at the choice point (new analysis, added in Fig 4d), and higher spike rates during the representations of the upcoming choice (Fig 5b). This is further highlighted by the fact that the representations of upcoming choice in the PFC are not time-locked to SWRs (whereas the hippocampal representations of upcoming choice are; see Fig 5a and Fig 6a), and not time-locked to hippocampal theta phase (whereas the hippocampal representations are; see Fig 5c and Fig 6c). Finally, the representations of upcoming and alternative choices in the PFC do not show a large overlap in time with the representations in the hippocampus (see updated Fig 4e where we added a statistical test to show the likelihood of the overlap of decoded timepoints). All these results together lead us to hypothesize that SWR-modulation is not the driving factor behind non-local decoding in the PFC.

      REVIEWER COMMENT: I do not see how these precisions address my remark. The main claim in the title used to be "Neurons in the medial prefrontal cortex that are not modulated by hippocampal sharp-wave ripples are involved in spatial tuning and signaling upcoming choice." It is now "Neurons in the medial prefrontal cortex are involved in spatial tuning and signaling upcoming choice independently from hippocampal sharp-wave ripples." The substance has not changed. This specific claim is supported solely by Figure 5.

The other analyses cited describe functional characteristics of SWR-unmodulated neurons but, unless linked by explicit new analyses, do not substantiate independence/orthogonality between SWR modulation and non-local decoding in PFC. If there is an analysis that makes this link explicit, it should be clearly presented; as it stands, I cannot find an explanation in the manuscript for why "all these results together" justify the conclusion that "SWR-modulation is not the driving factor behind non-local decoding in the PFC". Also: is the main result of this work a "hypothesis"? If so, this should be clearly differentiated from a conclusion supported by results and analyses.

AUTHOR RESPONSE: (3) Based on the reviewer's suggestion, we have added a statistical test to compare the phase-locking of the non-local decoding to hippocampal SWRs and theta phase against shuffled posterior probabilities. Instead of looking at all SWRs in a -2 to 2 second window, we have now only selected the closest SWR in time within that window, and performed the statistical comparison in the 0-20 ms bin from SWR onset. With this new analysis we are looking more directly at the time-locking of the decoded segments to SWR onset (see updated Fig 5a and 6a).

      REVIEWER COMMENT: I appreciate the added analysis focusing on the closest SWR and a 0-20 ms bin. My understanding is that you consider the revised analyses in Figures 5a and 6a sufficient to show that predictive non-local representations in mPFC are not linked to hippocampal SWRs and theta phase.

      First, the manuscript should explicitly explain the rationale for this analysis and why it is sufficient to support the claim. From the main text it is not possible to understand what was done; the Methods are hard to follow, and the figure legends are not clearly described (e.g. the shuffle is not even defined there).

      Specific points I could not reconcile:

      i) The gray histograms in the revised Figures 5a and 6a now show a peak at zero lag, whereas in the previous version they were flat, although they are said to plot the same data. What changed?

      ii) Why choose a 20 ms bin? A single narrow bin invites false negatives. Please justify this choice.

iii) Comparing to a shuffle is a useful control, but when the p-value is non-significant we only learn that no difference was detected under that shuffle, not that there is no difference or that the processes are independent.
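The equivalence-testing approach recommended in the original comment (showing that any effect is smaller than a pre-specified, meaningless margin) is conventionally implemented as two one-sided tests (TOST). The sketch below is a minimal illustration using a large-sample normal approximation; the function name and the sample data are hypothetical and not taken from the manuscript.

```python
import math
from statistics import mean, stdev

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def tost_equivalence(x, y, margin):
    """Two one-sided tests (TOST) for equivalence of two means,
    using a large-sample normal approximation. Returns the larger
    of the two one-sided p-values; equivalence (no effect bigger
    than `margin`) is supported only if this value is small."""
    se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    d = mean(x) - mean(y)
    p_lower = 1 - normal_cdf((d + margin) / se)  # H0: d <= -margin
    p_upper = normal_cdf((d - margin) / se)      # H0: d >= +margin
    return max(p_lower, p_upper)

# Hypothetical samples with near-identical means
a = [0.10, -0.20, 0.05, 0.00, 0.15, -0.10, 0.05, -0.05, 0.10, 0.00]
b = [0.00, 0.10, -0.05, 0.05, -0.10, 0.15, 0.00, -0.15, 0.05, 0.00]

p_wide = tost_equivalence(a, b, margin=0.2)     # generous margin: equivalence supported
p_narrow = tost_equivalence(a, b, margin=0.01)  # tiny margin: inconclusive
```

Note the asymmetry with an ordinary null-hypothesis test: here a *small* p-value licenses the claim of "no meaningful effect", whereas a large p-value (as with the narrow margin above) leaves the question open, which is exactly the reviewer's point.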

ORIGINAL COMMENT: (2) The main claim of the work is also based on Figure 3, where the authors show that SWR-unmodulated mPFC neurons have higher spatial tuning and higher directional selectivity scores, and that a higher percentage of these neurons show theta skipping. This is used to support the claim that SWR-unmodulated cells encode spatial information. However, it must be noted that in this kind of task, it is not possible to disentangle the encoding of space from task variables involving separate cognitive processes, such as decision-making, attention, and motor control, which always happen at specific locations of the maze. Therefore, the results shown in Figure 3 may relate to other specific processes rather than encoding of space, and it cannot be unequivocally claimed that mPFC neurons "encode spatial information". This limitation is presented by Mashoori et al (2018), an article that appears to be a major inspiration for this work. Can the authors provide a control analysis/experiment that supports their claim? Otherwise, this claim should be tempered. Also, the authors say that Jadhav et al. (2016) showed that mPFC neurons unmodulated by SWRs are less tuned to space. How do they reconcile it with their results?

AUTHOR RESPONSE: The reviewer is right to urge caution when talking about claims such as spatial tuning where other factors may also be involved. Although we agree that there may be some other factors influencing what we are seeing as spatial tuning, it is very important to note that the behavioral task is executed on a symmetrical 4-armed maze, where two of the arms are always used for the start of the trajectory, and the other two arms (North and South) function as the goal (reward) arms. Therefore, if the PFC were encoding cognitive processes such as task phases related to decision-making and reward, we would not be able to differentiate between the two start arms and the two goal arms, as these represent the same task phases. Note also that the North and South arms are illuminated in a pseudo-random order between trials, and during cue-based rule learning this is a direct indication of where the reward will be found. Even in this phase of the task, the PFC encodes where the animal will turn on a trial-to-trial basis (meaning the North and South arms are still differentiated correctly on each trial even though the illumination and associated reward are changing).

      REVIEWER COMMENT: I appreciate that the departure location was pseudorandomized. However, this control does not rule out that PFC activity reflects motor preparation (left vs right turns) and associated perceptual decision-making/attentional processes that are inherently tied to a specific action. As such, it cannot by itself support the claim that PFC neurons "encode spatial information." Moreover, the authors acknowledge here that "other factors may also be involved," yet this caveat is not reflected in the manuscript. Why?

      AUTHOR RESPONSE: Secondly, importantly, the reviewer mentions that we claimed that Jadhav et al. (2016) showed that mPFC neurons unmodulated by SWRs are less tuned to space, but this is incorrect. Jadhav et al. (2016) showed that SWR-unmodulated neurons had lower spatial coverage, meaning that they are more spatially selective (congruent with our results). We have rephrased this in the text to be clearer.

      REVIEWER COMMENT: Thanks for clarifying this.

      ORIGINAL COMMENT: (3) My reading is that the rest of the paper mainly consists of replications or incremental observations of already known phenomena with some not necessarily surprising new observations:<br /> a) Figure 2 shows that a subset of mPFC neurons is modulated by HPC SWRs and theta (already known), that vmPFC neurons are more strongly modulated by SWRs (not surprising given anatomy), and that theta phase preference is different between vmPFC and dmPFC (not surprising given the fact that theta is a travelling wave).

      AUTHOR RESPONSE: The finding that vmPFC neurons are more strongly modulated by SWRs than dmPFC indeed matches what we know from anatomy, but that does not make it a trivial finding. A lot remains unknown about the mPFC subregions and their interactions with the hippocampus, and not every finding will be directly linked to the anatomy. Therefore, in our view this is a significant finding which has not been studied before due to the technical complexity of large-scale recordings along the dorsal-ventral axis of the mPFC.

      REVIEWER COMMENT: This finding is indeed non-trivial; however, it seems completely irrelevant to the paper's main claim unless the Authors can argue otherwise.

      AUTHOR RESPONSE: Similarly, theta being a traveling wave (which in itself is still under debate), does not mean we should assume that the dorsal and ventral mPFC should follow this signature and be modulated by different phases of the theta cycle. Again, in our view this is not at all trivial, but an important finding which brings us closer to understanding the intricate interactions between the hippocampus and PFC in spatial learning and decision-making.

      REVIEWER COMMENT: Yes, but in what way does this support the manuscript's primary claim? This is unclear to me.

      ORIGINAL COMMENT: b) Figure 4 shows that non-local representations in mPFC are predictive of the animal's choice. This is mostly an increment to the work of Mashoori et al (2018). My understanding is that in addition to what had already been shown by Mashoori et al here it is shown how the upcoming choice can be predicted. The author may want to emphasize this novel aspect.

      AUTHOR RESPONSE: In our view our manuscript focuses on a completely different aspect of learning and memory than the paper the reviewer is referring to (Mashoori et al. 2018). Importantly, the Mashoori et al. paper looked at choice evaluation at reward sites and shows that disappointing reinforcements are associated with reactivations in the ACC of the unselected target. This points to the role of the ACC in error detection and evaluation. Although this is an interesting result, it is in essence unrelated to what we are focusing on here, which is decision making and prediction of upcoming choices. The fact that the turning direction of the animal can be predicted on a trial-to-trial basis, and even precedes the behavioral change over the course of learning, sheds light on the role of the PFC in these important predictive cognitive processes (as opposed to post-choice reflective processes).

      REVIEWER COMMENT: Indeed, as I said, the new element here is that the upcoming choice can be predicted. This appears only incremental and could belong to another story; as the manuscript is currently written, it does not support the article's main claim. I would like to specify that, regarding this and the other points above, my inability to see how these minor results support the Authors' claim may reflect my misunderstanding; nevertheless, this suggests that the manuscript should be extensively rewritten and reorganized to make the Authors' meaning clear.

      ORIGINAL COMMENT: c) Figure 6 shows that prospective activity in the HPC is linked to SWRs and theta oscillations. This has been described in various forms since at least the works of Johnson and Redish in 2007, Pastalkova et al 2008, and Dragoi and Tonegawa (2011 and 2013), as well as in earlier literature on splitter cells. These foundational papers on this topic are not even cited in the current manuscript.

      AUTHOR RESPONSE: We have added these citations to the introduction (line 37).

      REVIEWER COMMENT: This is an example of how the Authors fail to acknowledge the underlying problem with how the manuscript is written; the issue has not been addressed except with a cosmetic change like the one described above. The Results section contains a series of findings that are well-known phenomena described previously (see below). Prior results should be acknowledged at the beginning of each relevant paragraph, followed by an explicit statement of what is new, so that readers can distinguish replication from novelty. Here, I pointed specifically to the results of Figure 6, and the Authors deemed it sufficient simply to add the citations I indicated to an existing sentence in the Introduction, while keeping the Results description unchanged. As written, this reads as if these phenomena are being described for the first time. This is incorrect. It is hard to avoid the impression that the Authors did not take this concern seriously; the same issue appears elsewhere in the manuscript, and I fail to see how the Authors "have improved clarity of the text throughout to highlight the novelty of our results better."

    2. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      Summary:

      The authors used high-density probe recordings in the medial prefrontal cortex (PFC) and hippocampus during a rodent spatial memory task to examine functional sub-populations of PFC neurons that are modulated vs. unmodulated by hippocampal sharp-wave ripples (SWRs), an important physiological biomarker that is thought to have a role in mediating information transfer across hippocampal-cortical networks for memory processes. SWRs are associated with the reactivation of representations of previous experiences, and associated reactivation in hippocampal and cortical regions has been proposed to have a role in memory formation, retrieval, planning, and memory-guided behavior. This study focuses on awake SWRs that are prevalent during immobility periods during pauses in behavior. Previous studies have reported strong modulation of a subset of prefrontal neurons during hippocampal SWRs, with some studies reporting prefrontal reactivation during SWRs that have a role in spatial memory processes. The study seeks to extend these findings by examining the activity of SWR-modulated vs. unmodulated neurons across PFC sub-regions, and whether there is a functional distinction between these two kinds of neuronal populations with respect to representing spatial information and supporting memory-guided decision-making.

      Strengths:

      The major strength of the study is the use of Neuropixels 1.0 probes to monitor activity throughout the dorsal-ventral extent of the rodent medial prefrontal cortex, permitting an investigation of functional distinction in neuronal populations across PFC sub-regions. They are able to show that SWR-unmodulated neurons, in addition to having stronger spatial tuning than SWR-modulated neurons as previously reported, also show stronger directional selectivity and theta-cycle skipping properties.

      Weaknesses:

(1) While the study is able to extend previous findings that SWR-modulated PFC neurons have significantly lower spatial tuning than SWR-unmodulated neurons, the evidence presented does not support the main conclusion of the paper that only the unmodulated neurons are involved in spatial tuning and signaling upcoming choice, implying that SWR-modulated neurons are not involved in predicting upcoming choice, as stated in the abstract. This conclusion makes a categorical distinction between two neuronal populations, that SWR-modulated neurons are involved and SWR-unmodulated are not involved in predicting upcoming choice, which requires evidence that clearly shows this absolute distinction. However, in the analyses showing non-local population decoding in PFC for predicting upcoming choice, the results show that SWR-unmodulated neurons have higher firing rates than SWR-modulated neurons, which is not a categorical distinction. Higher firing rates do not imply that only SWR-unmodulated neurons are contributing to the non-local decoding. They may contribute more than SWR-modulated neurons, but there are no follow-up analyses to assess the contribution of the two sub-populations to non-local decoding.

      We agree with the reviewer that this is indeed not a categorical distinction, and do not wish to claim that the SWR-modulated neurons have absolutely no role in non-local decoding and signaling upcoming choice. We have adjusted this in the title, abstract and text to clarify this for the reader. Furthermore, we have performed additional analyses to elucidate the role of SWR-modulated neurons in non-local decoding by creating separate decoding models for SWR-modulated and unmodulated PFC neurons respectively. These analyses show that the SWR-unmodulated neurons are indeed encoding representations of the upcoming choice more often than the alternative choice, whereas the SWR-modulated neurons do not reliably differentiate the upcoming and alternative choices in non-local decoding at the choice point (see new Fig 4d).

      (2) Further, the results show that during non-local representations of the hippocampus of the upcoming options, SWR-excited PFC neurons were more active during hippocampal representations of the upcoming choice, and SWR-inhibited PFC neurons were less active during hippocampal representations of the alternative choice. This clearly suggests that SWR-modulated neurons are involved in signaling upcoming choice, at least during hippocampal non-local representations, which contradicts the main conclusion of the paper.

      This does not contradict the main conclusion of the paper, but in fact strengthens the hypothesis we are putting forward: that the SWR-modulated neurons are more linked to the hippocampal non-local representations, whereas the SWR-unmodulated neurons seem to have their own encoding of upcoming choice which is not linked to the signatures in the hippocampus (almost no time overlap with hippocampal representations, no phase locking to hippocampal theta, no time locking to hippocampal SWRs, no increased firing during hippocampal representations of upcoming choice).

(3) Similarly, one of the analyses shows that PFC non-local representations show no preference for hippocampal SWRs or hippocampal theta phase. However, the examples shown for non-local representations clearly show that these decodes occur prior to the start of the trajectory, or during running on the central zone or start arm. The time period of immobility prior to the start arm running will have a higher prevalence of SWRs, and that during running will have a higher prevalence of theta oscillations and theta sequences, so non-local decoded representations have to be sub-divided according to these known local-field potential phenomena for this analysis, which is not followed.

      These analyses are in fact separated based on proximity to SWRs (only segments that occurred within 2 seconds of SWR onset were included, see Methods) and theta periods respectively (selected based on a running speed of more than 5 cm/s and the absence of SWRs in the hippocampus, see Methods). We have clarified this in the main text.

      (4) The primary phenomenon that the manuscript relies on is the modulation of PFC neurons by hippocampal SWRs, so it is necessary to perform the PFC population decoding analyses during SWRs (or examine non-local decoding that occurs specifically during SWRs), as reported in previous studies of PFC reactivation during SWRs, to see if there is any distinction between modulated and unmodulated neurons in this reactivation. Even in the case of independent PFC reactivation as reported by one study, this PFC reactivation was still reported to occur during hippocampal SWRs, therefore decoding during SWRs has to be examined. Similarly, the phenomenon of theta cycle skipping is related to theta sequence representations, so decoding during PFC and hippocampal theta sequences has to be examined before coming to any conclusions.

The histograms shown in Figure 5a (see updated Fig 5a where we look at the closest SWR in time and compare the occurrence with shuffled data) show that there is no increased prevalence of decoding upcoming and alternative choices in the PFC during hippocampal SWRs. The lack of overlap of non-local decoding between the hippocampus and PFC further shows that these non-local representations occur at different timepoints in the PFC and hippocampus (see updated Fig 4e where we added a statistical test to show the likelihood of the overlap between the decoded segments in the PFC and hippocampus). Based on the reviewer's suggestion, we have additionally decoded the information in the PFC during hippocampal SWRs exclusively, and found that the direction on the maze could not be predicted based on the decoding of SWR time points in the PFC. See figure below. Similarly, we can see from the histograms in Figure 5c that there is no phase locking to the hippocampal theta phase for non-local representations in the PFC, and in contrast there is phase locking of the hippocampal encoding of upcoming choice to the rising phase of the theta cycle (Fig 6c), further highlighting the separation between these two regions in the non-local decoding.
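Phase locking of events to the theta cycle, as debated in this exchange, is conventionally quantified with circular statistics such as the Rayleigh test for non-uniformity of phases. The sketch below is a minimal stdlib illustration using Zar's approximation to the Rayleigh p-value; the function name and the phase samples are hypothetical, and this is not necessarily the authors' exact method.

```python
import math

def rayleigh_test(phases):
    """Rayleigh test for non-uniformity of circular data (phases in
    radians). Returns the mean resultant length r and an approximate
    p-value (Zar's formula); a small p indicates phase locking."""
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    r = math.hypot(c, s)          # mean resultant length, 0 (uniform) to 1 (locked)
    R = n * r                     # resultant length
    p = math.exp(math.sqrt(1 + 4 * n + 4 * (n * n - R * R)) - (1 + 2 * n))
    return r, min(p, 1.0)

# Hypothetical phase samples
locked = [0.05 * k for k in range(20)]   # clustered near 0 rad
uniform = [0.1 * k for k in range(63)]   # spread around the full circle

r1, p1 = rayleigh_test(locked)   # high r, tiny p: phase-locked
r2, p2 = rayleigh_test(uniform)  # near-zero r, p near 1: no locking
```

As with the SWR analysis, a non-significant Rayleigh p-value shows only that locking was not detected, not that the phases are uniform; an equivalence-style bound on r would be needed for the stronger claim.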

      Reviewer #2 (Public review):

      Summary:

      This work by den Bakker and Kloosterman contributes to the vast body of research exploring the dynamics governing the communication between the hippocampus (HPC) and the medial prefrontal cortex (mPFC) during spatial learning and navigation. Previous research showed that population activity of mPFC neurons is replayed during HPC sharp-wave ripple events (SWRs), which may therefore correspond to privileged windows for the transfer of learned navigation information from the HPC, where initial learning occurs, to the mPFC, which is thought to store this information long term. Indeed, it was also previously shown that the activity of mPFC neurons contains task-related information that can inform about the location of an animal in a maze, which can predict the animals' navigational choices. Here, the authors aim to show that the mPFC neurons that are modulated by HPC activity (SWRs and theta rhythms) are distinct from those "encoding" spatial information. This result could suggest that the integration of spatial information originating from the HPC within the mPFC may require the cooperation of separate sets of neurons.

      This observation may be useful to further extend our understanding of the dynamics regulating the exchange of information between the HPC and mPFC during learning. However, my understanding is that this finding is mainly based upon a negative result, which cannot be statistically proven by the failure to reject the null hypothesis. Moreover, in my reading, the rest of the paper mainly replicates phenomena that have already been described, with the original reports not correctly cited. My opinion is that the novel elements should be precisely identified and discussed, while the current phrasing in the manuscript, in most cases, leads readers to think that these results are new. Detailed comments are provided below.

      Major concerns:

      (1) The main claim of the manuscript is that the neurons involved in predicting upcoming choices are not the neurons modulated by the HPC. This is based upon the evidence provided in Figure 5, which is a negative result that the authors employ to claim that predictive non-local representations in the mPFC are not linked to hippocampal SWRs and theta phase. However, it is important to remember that in a statistical test, the failure to reject the null hypothesis does not prove that the null hypothesis is true. Since this claim is so central in this work, the authors should use appropriate statistics to demonstrate that the null hypothesis is true. This can be accomplished by showing that there is no effect above some size that is so small that it would make the effect meaningless (see https://doi.org/10.1177/070674370304801108).

We would like to highlight a few important points here. (1) We indeed do not intend to claim that the SWR-modulated neurons are not at all involved in predicting upcoming choice, just that the SWR-unmodulated neurons may play a larger role. We have rephrased the title and abstract to make this clearer. (2) The hypothesis that we put forward is based not only on a negative effect, but on the findings that: the SWR-unmodulated neurons show higher spatial tuning (Fig 3b), more directional selectivity (Fig 3d), more frequent encoding of the upcoming choice at the choice point (new analysis, added in Fig 4d), and higher spike rates during the representations of the upcoming choice (Fig 5b). This is further highlighted by the fact that the representations of upcoming choice in the PFC are not time-locked to SWRs (whereas the hippocampal representations of upcoming choice are; see Fig 5a and Fig 6a), and not time-locked to hippocampal theta phase (whereas the hippocampal representations are; see Fig 5c and Fig 6c). Finally, the representations of upcoming and alternative choices in the PFC do not show a large overlap in time with the representations in the hippocampus (see updated Fig 4e where we added a statistical test to show the likelihood of the overlap of decoded timepoints). All these results together lead us to hypothesize that SWR-modulation is not the driving factor behind non-local decoding in the PFC. (3) Based on the reviewer's suggestion, we have added a statistical test to compare the phase-locking of the non-local decoding to hippocampal SWRs and theta phase against shuffled posterior probabilities. Instead of looking at all SWRs in a -2 to 2 second window, we have now only selected the closest SWR in time within that window, and performed the statistical comparison in the 0-20 ms bin from SWR onset. With this new analysis we are looking more directly at the time-locking of the decoded segments to SWR onset (see updated Fig 5a and 6a).
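The time-locking analysis described in point (3) (counting decoded segments that fall within a short window of SWR onset and comparing against shuffled data) follows the general logic of a circular-shift permutation test, sketched below. This is an illustration of the generic technique, not the authors' exact procedure; all names, parameters, and data are hypothetical.

```python
import random

def shuffle_test(event_times, decoded_times, window=0.02,
                 n_shuffles=1000, session_len=100.0, seed=0):
    """Count decoded segments starting within `window` seconds after an
    event (e.g. SWR onset) and compare against a null distribution
    built by circularly shifting the decoded times within the session."""
    rng = random.Random(seed)

    def hits(times):
        # number of decoded times falling in [event, event + window)
        return sum(any(0 <= t - e < window for e in event_times) for t in times)

    observed = hits(decoded_times)
    null = []
    for _ in range(n_shuffles):
        shift = rng.uniform(0, session_len)
        null.append(hits([(t + shift) % session_len for t in decoded_times]))
    # One-sided p-value: fraction of shuffles with at least as many hits
    p = (1 + sum(c >= observed for c in null)) / (1 + n_shuffles)
    return observed, p

# Hypothetical events and decoded segments locked 10 ms after each event
events = [7.3, 19.1, 33.8, 48.2, 61.7, 76.4, 88.9]
decoded = [e + 0.01 for e in events]
obs, p = shuffle_test(events, decoded)  # strong time-locking: tiny p
```

The circular shift preserves the autocorrelation structure of the decoded times, which makes the null distribution conservative; as the reviewer notes in point iii, a non-significant p under this shuffle still establishes only the absence of detected locking, not independence.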

(2) The main claim of the work is also based on Figure 3, where the authors show that SWR-unmodulated mPFC neurons have higher spatial tuning and higher directional selectivity scores, and that a higher percentage of these neurons show theta skipping. This is used to support the claim that SWR-unmodulated cells encode spatial information. However, it must be noted that in this kind of task, it is not possible to disentangle the encoding of space from task variables involving separate cognitive processes, such as decision-making, attention, and motor control, which always happen at specific locations of the maze. Therefore, the results shown in Figure 3 may relate to other specific processes rather than encoding of space, and it cannot be unequivocally claimed that mPFC neurons "encode spatial information". This limitation is presented by Mashoori et al (2018), an article that appears to be a major inspiration for this work. Can the authors provide a control analysis/experiment that supports their claim? Otherwise, this claim should be tempered. Also, the authors say that Jadhav et al. (2016) showed that mPFC neurons unmodulated by SWRs are less tuned to space. How do they reconcile it with their results?

The reviewer is right to urge caution when talking about claims such as spatial tuning where other factors may also be involved. Although we agree that there may be some other factors influencing what we are seeing as spatial tuning, it is very important to note that the behavioral task is executed on a symmetrical 4-armed maze, where two of the arms are always used for the start of the trajectory, and the other two arms (North and South) function as the goal (reward) arms. Therefore, if the PFC were encoding cognitive processes such as task phases related to decision-making and reward, we would not be able to differentiate between the two start arms and the two goal arms, as these represent the same task phases. Note also that the North and South arms are illuminated in a pseudo-random order between trials, and during cue-based rule learning this is a direct indication of where the reward will be found. Even in this phase of the task, the PFC encodes where the animal will turn on a trial-to-trial basis (meaning the North and South arms are still differentiated correctly on each trial even though the illumination and associated reward are changing).

      Secondly, importantly, the reviewer mentions that we claimed that Jadhav et al. (2016) showed that mPFC neurons unmodulated by SWRs are less tuned to space, but this is incorrect. Jadhav et al. (2016) showed that SWR-unmodulated neurons had lower spatial coverage, meaning that they are more spatially selective (congruent with our results). We have rephrased this in the text to be clearer.

      (3) My reading is that the rest of the paper mainly consists of replications or incremental observations of already known phenomena with some not necessarily surprising new observations:

      (a) Figure 2 shows that a subset of mPFC neurons is modulated by HPC SWRs and theta (already known), that vmPFC neurons are more strongly modulated by SWRs (not surprising given anatomy), and that theta phase preference is different between vmPFC and dmPFC (not surprising given the fact that theta is a travelling wave).

      The finding that vmPFC neurons are more strongly modulated by SWRs than dmPFC indeed matches what we know from anatomy, but that does not make it a trivial finding. A lot remains unknown about the mPFC subregions and their interactions with the hippocampus, and not every finding will be directly linked to the anatomy. Therefore, in our view this is a significant finding which has not been studied before due to the technical complexity of large-scale recordings along the dorsal-ventral axis of the mPFC.

      Similarly, theta being a traveling wave (which in itself is still under debate), does not mean we should assume that the dorsal and ventral mPFC should follow this signature and be modulated by different phases of the theta cycle. Again, in our view this is not at all trivial, but an important finding which brings us closer to understanding the intricate interactions between the hippocampus and PFC in spatial learning and decision-making.

      (b) Figure 4 shows that non-local representations in mPFC are predictive of the animal's choice. This is mostly an increment to the work of Mashoori et al (2018). My understanding is that in addition to what had already been shown by Mashoori et al here it is shown how the upcoming choice can be predicted. The author may want to emphasize this novel aspect.

      In our view our manuscript focuses on a completely different aspect of learning and memory than the paper the reviewer is referring to (Mashoori et al. 2018). Importantly, the Mashoori et al. paper looked at choice evaluation at reward sites and shows that disappointing reinforcements are associated with reactivations in the ACC of the unselected target. This points to the role of the ACC in error detection and evaluation. Although this is an interesting result, it is in essence unrelated to what we are focusing on here, which is decision making and prediction of upcoming choices. The fact that the turning direction of the animal can be predicted on a trial-to-trial basis, and even precedes the behavioral change over the course of learning, sheds light on the role of the PFC in these important predictive cognitive processes (as opposed to post-choice reflective processes).

      (c) Figure 6 shows that prospective activity in the HPC is linked to SWRs and theta oscillations. This has been described in various forms since at least the works of Johnson and Redish in 2007, Pastalkova et al 2008, and Dragoi and Tonegawa (2011 and 2013), as well as in earlier literature on splitter cells. These foundational papers on this topic are not even cited in the current manuscript.

      We have added these citations to the introduction (line 37).

      Although some previous work is cited, the current narrative of the results section may lead the reader to think that these results are new, which I think is unfair. Previous evidence of the same phenomena should be cited all along the results and what is new and/or different from previous results should be clearly stated and discussed. Pure replications of previous works may actually just be supplementary figures. It is not fair that the titles of paragraphs and main figures correspond to notions that are well established in the literature (e.g., Figure 2, 2nd paragraph of results, etc.).

      We have changed the title of paragraph 2 and Figure 2 to highlight more clearly the novel result (the difference between the dorsal and ventral mPFC), and have improved clarity of the text throughout to highlight the novelty of our results better.

      (d) My opinion is that, overall, the paper gives the impression of being somewhat rushed and lacking attention to detail. Many figure panels are difficult to understand due to incomplete legends and visualizations with tiny, indistinguishable details. Moreover, some previous works are not correctly cited. I tried to make a list of everything I spotted below.

      We have addressed all the comments in the Recommendations for Authors.

      Reviewer #1 (Recommendations for the authors):

      (1) Expanding on the points above, one of the strengths of the study is expanding the previous result that SWR-unmodulated neurons are more spatially selective (Jadhav et al., 2016), across prefrontal sub-regions, and showing that these neurons are more directionally selective and show more theta cycle skipping. Theta cycle skipping is related to theta sequence representations and previous studies have established PFC theta sequences in parallel to hippocampal theta sequences (Tang et al., 2021; Hasz and Redish, 2020; Wang et al., 2024), and the theta cycle skipping result suggests that SWR-unmodulated neurons should show stronger participation than SWR-modulated neurons in PFC theta sequences that decode to upcoming or alternative location, which can be tested in this high-density PFC physiology data. This is still unlikely to make a categorical distinction that only SWR-unmodulated neurons participate in theta sequence decoding, but will be useful to examine.

      We thank the reviewer for their suggestion and have now included results based on separate decoding models that only use SWR-modulated or SWR-unmodulated mPFC neurons. From this analysis we see that indeed SWR-unmodulated neurons are not the only group contributing to theta sequence decoding, but they do distinguish more strongly between the upcoming and alternative arms at the choice point (see new Fig 4d).

      (2) Non-local decoding in 50ms windows on a theta timescale is a valid analysis, but ignoring potential variability in the internal state during running vs. immobility, and as indicated by LFPs by the presence of SWRs or theta oscillations, is incorrect especially when conclusions are being made about decoding during SWRs and theta oscillation phase, and in light of previous evidence that these are distinct states during behavior. There are multiple papers on PFC theta sequences (Tang et al., 2021; Hasz and Redish, 2020; Wang et al., 2024), and on PFC reactivation during SWRs (Shin et al., 2019; Kaefer et al., 2020; Jarovi et al., 2023), and this dataset of high-density prefrontal recordings using Neuropixels 1.0 provides an opportunity to investigate these phenomena in detail. Here, it should be noted that although Kaefer et al. reported independent prefrontal reactivation from hippocampal reactivation, these PFC reactivation events still occurred during hippocampal SWRs in their data, and were linked to memory performance.

      From our data we see that the time segments that represent upcoming or alternative choice in the prefrontal cortex are in fact not time-locked to hippocampal SWRs (updated Fig 5a, where we look only at the closest SWR in time and compare this to shuffled data). In addition, these segments do not overlap much with the decoded segments in the hippocampus (see updated Fig 4e, where we added a shuffling procedure to assess the likelihood of the overlap with hippocampal decoded segments). Importantly, we are not ignoring the variability during running and immobility, as theta segments were selected based on a running speed of more than 5 cm/s and the absence of SWRs in the hippocampus (see Methods), ensuring that the theta and SWR analyses were done on the two different behavioral states, respectively. We have clarified this in the main text.

      (3) The majority of rodent studies make the distinction between ACC, PrL, and IL, although as the authors noted, there are arguments that rodent mPFC is a continuum (Howland et al., 2022), or even that rodent mPFC is a unitary cingulate cortical region (van Heukelum et al., 2020). The authors choose to present the results as dorsal (ACC + dorsal PrL) vs. ventral mPFC (ventral PrL + IL), however, in my opinion, it will be more useful to the field to see results separately for ACC, PrL, and IL, given the vast literature on connectivity and functional differences in these regions.

      We appreciate the reviewer’s suggestion. Initially, we did perform all analyses separately for the ACC, PLC and ILC subregions. However, we observed that the differences between subregions (strength of SWR-modulation and the phase locking to theta) varied uniformly along the dorsal-ventral axis, i.e., the PLC showed a profile of SWR-modulation and theta phase locking that fell in between that of the ACC and the ILC. This is also highlighted in paragraph 3 of the introduction (lines 52-56). For that reason, and for the sake of reducing the number of variables, increasing statistical power, and improving readability, we focused on the dorsal-ventral distinction instead, as this is where the main differences were seen.

      (4) I suggest that the authors refrain from making categorical distinctions as in their title and abstract, such as "neurons that are involved in predicting upcoming choice are not the neurons that are modulated by hippocampal sharp-wave ripples" when the evidence presented can only support gradation of participation of the two neuronal sub-populations, not an absolute distinction. The division of SWR-modulated and SWR-unmodulated neurons itself is determined by the statistic chosen to divide the neurons into one or two sub-classes and will vary with the statistical threshold employed. Further, previous studies have suggested that SWR-excited and SWR-inhibited neurons comprise distinct functional sub-populations based on their activity properties (Jadhav et al., 2016; Tang et al., 2017), but it is not clear to what degree is SWR-modulated neurons a distinct and singular functional sub-population. In the absence of connectivity information and cross-correlation measures within and across sub-populations, it is prudent to be conservative about this interpretation of SWR-unmodulated neurons.

      We agree with the reviewer that the distinction is not categorical and have changed the wording in the title and abstract. We also do not intend to claim that the SWR-modulated neurons are a distinct and singular functional sub-population, and for that reason the firing rates from the SWR-excited and SWR-inhibited groups are reported separately throughout the paper.

      Reviewer #2 (Recommendations for the authors):

      Minor detailed remarks:

      (1) The authors should provide a statistical test, perhaps against shuffled data, for Figures 5a,c and 6a,c.

      We thank the reviewer for their suggestion and have added statistical tests in Figures 5a, 5c, 6a and 6c.

      (2) The behavioral task is explained only in the legend of Figure 1c, and the explanation is quite vague. In this type of article format, readers need to have a clear understanding of the task without having to refer to the methods section. A clear understanding of the task is crucial for interpreting all subsequent analyses. In my opinion, the word 'trial' in the figure is misleading, as these are sessions composed of many trials.

      We have added a more thorough description of the behavioral task, both in the main text and the Figure legend.

      (3) Figure 1d, legend of markers missing.

      We have added a legend for the markers.

      (4) When there are multiple bars and a single p-value is presented, it is unclear which group comparisons the p-value pertains to. For instance, Figures 2c-f and 3b, d, f (right parts), and 5b...

      For all p-values we have added lines to the figures that indicate the groups that were compared and have added descriptions of the statistical test to the figure legends to indicate what each p-value represents.

      (5) In Figure 3c, the legend does not explain what the colored lines represent, and the lines themselves are very small and almost indistinguishable.

      We have changed the colored lines to quadrants on the maze to clarify what each direction represents.

      (6) Figure 4a is too small, and the elements are so tiny that it is impossible to distinguish them and their respective colors. The term 'segment' has not been unequivocally explained in the text. All the different elements of the panel should be explicitly explained in the legend to make it easily understandable. What do the pictograms of the maze on the left represent? What does the dashed vertical line indicate?

      We have added the definition of a segment in the text (lines 283-286) and have improved the clarity and readability of Figure 4a.

      (7) In Figure 5, what do the red dots on the right part relate to? The legend should explicitly explain what is shown in the left and right parts, respectively. What comparisons do the p-values relate to?

      We have adjusted the legend to explain the left and right parts of the figure and we have added the statistical test that was used to get to the p-value (in addition to the text which already explained this).

      (8) Panels b of Figures 5 and 6 should have the same y-axis scale for comparison. The position of the p-values should also be consistent. With the current arrangement in Figure 6, it is unclear what the p-values relate to.

      We have adjusted the y-scale to be the same for Figures 5 and 6, and we have added a description of the statistical test to the legend.

      (9) Multiple studies have previously shown that mPFC activity contains spatial information (e.g., refs 24-27). It is important that, throughout the paper, the authors frame their results in relation to previous findings, highlighting what is novel in this work.

      We thank the reviewer for this valuable suggestion. In the revised manuscript, we have indicated more clearly which results replicate previous findings and highlighted novel results.

      (10) Please note that Peyrache et al. (2009) do not show trajectory replay, nor do they decode location. I am not familiar with all the cited literature, but this makes me think that the authors may want to double-check their citations to ensure they assign the correct claims to each past work.

      We have adjusted the reference to this work to exclude the word ‘trajectory’ and double-checked our other citations.

      (11) The authors perform theta-skipping analysis, first described by Kay et al., but do not cite the original paper until the discussion.

      Thank you for pointing out this oversight. We have now included this citation earlier in the paper (line 231).

      (12) Additionally, some parts of the text are difficult to grasp, and there are English vocabulary and syntax errors. I am happy to provide comments on the next version of the text, but please include page and line numbers in the PDF. The authors may also consider using AI to correct English mistakes and improve the fluency and readability of their text.

      We have carefully gone through the text to correct any errors.  We have now also included page and line numbers and we will be happy to address any specific issues the reviewer may spot in the revised manuscript.

    1. Author response:

      The following is the authors’ response to the original reviews

      Public Reviews: 

      Reviewer #1 (Public review): 

      This study presents evidence that remote memory in the APP/PS1 mouse model of Alzheimer's disease (AD) is associated with PV interneuron hyperexcitability and increased inhibition of cortical engram cells. Its strength lies in the fact that it explores a neglected aspect of memory research - remote memory impairments related to AD (for which the primary research focus is usually on recent memory impairments) -which has received minimal attention to date. While the findings are intriguing, the weakness of the paper hovers around purely correlational types of evidence and superficial data analyses, which require substantial revisions as outlined below. 

      We thank the reviewer for their feedback, and we appreciate the recognition of the study’s novelty in addressing remote memory impairments in AD. We acknowledge the reviewer’s concerns and have implemented revisions to strengthen the manuscript.

      Major concerns: 

      (1) In light of previous work, including that by the authors themselves, the data in Figure 1 should be implemented by measurements of recent memory recall in order to assess whether remote memories are exclusively impaired or whether remote memory recall merely represents a continuation of recent memory impairments.

      We agree with the reviewer that this is an important point. In line with their suggestion in minor comment 1, we have now omitted the statement on recent memory in the results (previously on lines 109-111 and 117). Nonetheless, previous independent experiments from our group have repeatedly shown recent memory deficits in APP/PS1 mice at 12 weeks of age, including a recent article published in 2023. We refer the reviewer to figure 2c in Végh et al. (2014) and figure 2i in Kater et al. (2023). We have added a reference to the latter paper to our discussion section (line 458-459). Therefore, we are confident that the recent memory deficit at 12 weeks of age is a stable phenotype in our APP/PS1 mice.

      With these data in mind, we argue that the remote memory recall impairment is not a continuation of recent memory impairments. Recent memory deficits emerge already at 12 weeks of age, and when remote memory is assessed at 16 weeks (4 weeks after training at 12 weeks of age), APP/PS1 mice are still capable of forming and retrieving a remote memory. This suggests that remote memory retrieval can occur even when recent memory is compromised, arguing against the idea that the remote memory deficit observed at 20 weeks is a continuation of earlier recent memory impairments. We have clarified this point in the revised manuscript by adding the following sentence to the discussion section (line 462-465): 

      ‘This suggests that a remote memory can be formed even when recent memory expression is already compromised, indicating that the remote memory deficit in 20-week-old APP/PS1 mice is not a continuation of earlier recent memory impairments.’

      (2) Figure 2 shows electrophysiological properties of PV cells in the mPFC that correlate with the behavior shown in Figure 1. However, the mice used in Figure 2 are different than the mice used in Figure 1. Thus, the data are correlative at best, and the authors need to confirm that behavioral impairments in the APP/PS1 mice crossed to PV-Cre (and SST-Cre mice) used in Figure 2 are similar to those of the APP/PS1 mice used in Figure 1. Without that, no conclusions between behavioral impairments and electrophysiological as well as engram reactivation properties can be made, and the central claims of the paper cannot be upheld. 

      We thank the reviewer for raising this concern. Indeed, the remote memory impairment and PV hyperexcitability are correlative data, and therefore we do not make causal claims based on these data. However, please note that most of our key findings, including behavioural impairments, characterization of the engram ensemble and reactivation thereof, as well as inhibitory input measurements, were acquired using the same mouse line (APP/PS1), strengthening the coherence of our conclusions. Also, our electrophysiological findings in APP/PS1 (enhanced sIPSC frequency) and APP/PS1-PV-Cre-tdTomato (enhanced PV cell excitability) mice align well. Direct comparisons between the transgenic mouse lines APP/PS1 and APP/PS1 Parv-Cre were performed in our previous studies, confirming that these lines are similar in terms of behaviour and pathology. Specifically, we demonstrated that APP/PS1 mice display spatial memory impairments at 16 weeks of age (Fig. 4a-d), consistent with the deficits observed in APP/PS1 Parv-Cre mice at 16 weeks of age (Fig. 5a-c) (Hijazi et al., 2020a). Additionally, Hijazi et al. (2020a) showed that soluble and insoluble Aβ levels do not differ between APP/PS1 Parv-Cre and APP/PS1 mice (sFig. 1), indicating comparable levels of pathology between these lines. While we do not have a similar characterization of the APP/PS1 SST-Cre line, we should mention that we also did not observe excitability differences in SST cells. We now acknowledge this limitation in the revised discussion section (line 480-487), and stress that our electrophysiology and behavioural findings are correlative in nature:

      ‘Although the excitability measurements were performed in APP/PS1-PV-Cre-tdTomato mice, and not in the APP/PS1 parental line, we previously found that these transgenic mouse lines exhibit comparable amyloid pathology (both soluble and insoluble amyloid beta levels) as well as similar spatial memory deficits (Hijazi et al., 2020a; Kater et al., 2023). Thus, our observations indicate that the APP/PS1 PV-Cre-tdTomato and APP/PS1 lines are similar in terms of pathology and behaviour. Nonetheless, further work is needed to identify a causal link between PV cell hyperexcitability and remote memory impairment.’ 

      (3) The reactivation data starting in Figure 3 should be analysed in much more depth: 

      a) The authors restrict their analysis to intra-animal comparisons, but additional ones should be performed, such as inter-animal (WT vs APP/PS1) as well as inter-age (12-16w vs 16-20w). In doing so, reactivation data should be normalized to chance levels per animal, to account for differences in labelling efficiency - this is standard in the field (see original Tonegawa papers and for a reference). This could highlight differences in total reactivation that are already apparent, such as for instance in WT vs APP/PS1 at 20w (Figure 3o) and highlight a decrease in reactivation in AD mice at this age, contrary to what is stated in lines 213-214. 

      We would like to thank the reviewer for the valuable input on the reactivation data in Figure 3. 

      We agree with the reviewer and now depict the data as normalized to chance levels (Figure 3). The original figures are now supplemental (sFig. 5). The reactivation data normalized to chance are similar to the original results, i.e. no difference was observed in the reactivation of the mPFC engram ensemble between genotypes. The reviewer may have overlooked that we did perform inter-animal (WT vs. APP/PS1) comparisons; however, these were not significantly different. We have made this clearer in the main text, lines 277, 288-289, 294-295 and 303-304. Moreover, the reviewer recommended including inter-age group comparisons, which have now been added to the supplemental figures (sFig. 6). No genotype-dependent differences were observed. While a main effect of age group did emerge, indicating a potential increase in overlap between Fos+ and mCherry+ cells in animals aged 16-20 weeks, we caution against overinterpreting this finding. These experimental groups were processed in separate cohorts, with viral injection and 4TM-induced tagging performed at different moments in time, which may have contributed to the observed differences in overlap. We have addressed this point in the revised discussion (line 612-617):

      ‘Furthermore, we also observed an increase in the amount of overlap between Fos+ and mCherry+ engram cells when comparing the 12-16w and 16-20w age groups. This finding should be interpreted with caution, as the experimental groups were processed in separate cohorts, with viral injections and 4TM-induced tagging performed at different moments in time. This may have contributed to the observed differences between ages.’

      b) Comparing the proportion of mcherry+ cells in PV- and PV+ is problematic, considering that the PV- population is not "pure" like the PV+, but rather likely to represent a mix of different pyramidal neurons (probably from several layers), other inhibitory neurons like SST and maybe even glial cells. Considering this, the statement on line 218 is misleading in saying that PVs are overrepresented. If anything, the same populations should be compared across ages or groups.  

      We thank the reviewer for their insightful comment and agree that the PV- population of cells is likely more heterogeneous than the PV+ population. However, we would like to clarify that all quantified cells were selected based on Nissl immunoreactivity, and to exclude non-neuronal cells, stringent thresholding was applied in the script used to identify Nissl+ cells. The threshold information has now been added to the methods section (line 758-760). Thus, although heterogeneous, the analysed PV- population reflects a neuronal subset. In response to the reviewer’s suggestion, we have now included overlap measurements relative to chance levels (Figure 3). These analyses did not reveal differences from the original analyses, i.e., there are no genotype-specific differences. We have also incorporated the suggested inter-age group comparisons (sFig. 6) and found no differences between age groups. In light of the raised concerns, we have removed the statement that PV cells were overrepresented in the engram ensemble.

      c) A similar concern applies to the mcherry- population in Figure 4, which could represent different types of neurons that were never active, compared to the relatively homogeneous engram mcherry+ population. This could be elegantly fixed by restricting the comparison to mCherry+Fos+ vs mCherry+Fos- ensembles and could indicate engram reactivation-specific differences in perisomatic inhibition by PV cells. 

      The comparison the reviewer suggests, comparing mCherry+Fos+ to mCherry+Fos-, is indeed conceptually interesting and could provide more insight into engram reactivation and PV input. However, there are practical limitations to performing this analysis, as neurons in close proximity need to be compared in a pairwise manner to account for local variability in staining intensity. As shown in Figure 3c+k and Figure 4a+b, d+e, PV immunostaining intensity varies to a certain extent within a given image. While pairwise comparisons of neighbouring neurons were feasible when analysing mCherry+ and mCherry- cells, they are unfortunately not feasible for the mCherry+Fos+ vs. mCherry+Fos- comparison. The occurrence of spatially adjacent mCherry+Fos+ and mCherry+Fos- neurons is too sparse for a pairwise comparison. This analysis would therefore result in substantial under-sampling and limit the reliability of the analysis. Nonetheless, we agree with the reviewer that the mCherry- population may be more heterogeneous than the mCherry+ population, despite the fact that PV+ neurons and non-neuronal cells were excluded from both populations in the analyses. We therefore added a statement to the discussion to acknowledge this limitation (line 536-539):

      ‘Although PV+ cells were not included in this analysis and we excluded non-neuronal cells based on the area of the Nissl stain, the mCherry- population was potentially more heterogeneous than the mCherry+ population, which may have contributed to the differences we observed.’

      (4) At several instances, there are some doubts about the statistical measures having been employed: 

      a) In Figure 4f, it is unclear why a repeated measurement ANOVA was used as opposed to a regular ANOVA. 

      b) In Supplementary Figure 2b, a Mann-Whitney test was used, supposedly because the data were not normally distributed. However, when looking at the individual data points, the data does seem to be normally distributed. Thus, the authors need to provide the test details as to how they measured the normalcy of distribution. 

      a) Based on the pairwise comparison of neighbouring neurons within animals, the data in Figure 4f was analysed with a repeated measure ANOVA. 

      b) We thank the reviewer for their comment on Supplementary Figure 2b. The data is indeed normally distributed, and we have analysed it using a D’Agostino & Pearson test. We have corrected this in the supplemental figure.

      Minor concerns: 

      (1) Line 117: The authors cite a recent memory impairment here, as shown by another paper. However, given the notorious difficulty in replicating behavioral findings, in particular in APP/PS1 mice (number of backcrossings, housing conditions, etc., might differ between laboratories), such a statement cannot be made. The authors should either show in their own hands that recent memory is indeed affected at 12 weeks of age, or they should omit this statement. 

      We thank the reviewer for this thoughtful comment. As noted in our response to major concern (1), we have addressed this concern by providing additional information and clarification in the discussion (line 462-465) regarding the possibility that remote memory impairments are a continuation of recent memory impairments. As mentioned in our response, we have added a reference to a more recent study from our lab (Kater et al., 2023). These findings are consistent with the earlier report from our lab (Végh et al., 2014), underscoring the reproducibility of this phenotype across independent cohorts and time. Notably, the experiments in the 2023 study and the present study were performed using the same housing and experimental conditions. Nevertheless, in light of the reviewer’s suggestion, and to avoid overstatement or speculation, we have now omitted the sentence referring to recent memory impairments at 12 weeks of age from the results section.

      (2) Pertaining to Figure 3, low-resolution images of the mPFC should be provided to assess the spread of injection and the overall degree of double-positive cells.  

      We agree with the reviewer and have added images of the mPFC as a supplemental figure (sFig. 3) that show the spread of the injection. Unfortunately, it is not possible to visualize the overall degree of double-positive cells at a lower magnification (or low-resolution). Representative examples of colocalization are presented in Figure 3.

      Reviewer #2 (Public review): 

      This study presents a comprehensive investigation of remote memory deficits in the APP/PS1 mouse model of Alzheimer's disease. The authors convincingly show that these deficits emerge progressively and are paralleled by selective hyperexcitability of PV interneurons in the mPFC. Using viral-TRAP labeling and patch-clamp electrophysiology, they demonstrate that inhibitory input onto labeled engram cells is selectively increased in APP/PS1 mice, despite unaltered engram size or reactivation. These findings support the idea that alterations in inhibitory microcircuits may contribute to cognitive decline in AD. 

      However, several aspects of the study merit further clarification. Most critically, the central paradox, i.e., increased inhibitory input without an apparent change in engram reactivation, remains unresolved. The authors propose possible mechanisms involving altered synchrony or impaired output of engram cells, but these hypotheses require further empirical support. Additionally, the study employs multiple crossed transgenic lines without reporting the progression of amyloid pathology in the mPFC, which is important for interpreting the relationship between circuit dysfunction and disease stage. Finally, the potential contribution of broader network dysfunction, such as spontaneous epileptiform activity reported in APP/PS1 mice, is also not addressed. 

      We thank the reviewer for their evaluation and appreciate the positive assessment of our study’s contributing to understanding remote memory deficits and the dysfunction of inhibitory microcircuits in AD. We also acknowledge the relevant points raised and have revised the manuscript to clarify our interpretations. 

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors): 

      (1) Line 68: What are "APP23xPS45" mice? This is most likely a typo.

      This is a previously reported double-transgenic amyloid-beta mouse model obtained by crossing APP23 mice (overexpressing human amyloid precursor protein with the Swedish double mutation at position 670/671) with PS45 mice (carrying a transgene for mutant Presenilin 1, G384A mutation) (Busche et al., 2008; Grienberger et al., 2012).

      (2) Line 148: The authors should also briefly describe in the main text that APP/PS1 x SST-Cre mice were generated and used here.  

      We thank the reviewer for their comment and have added their suggestion to the main text (line 166-168):

      ‘To do this, APP/PS1 mice were crossed with SST-Cre mice to generate APP/PS1 SST-Cre mice. Following microinjection of AAV-hSyn::DIO-mCherry into the mPFC, recordings were obtained from SST neurons.’

      (3) The discussion should be condensed because of redundancies on several occasions. For example, memory allocation is discussed starting on line 371, then again on line 392. This should be combined. Likewise, how the correlative nature of the findings about PV interneurons could be further functionally addressed is discussed on lines 413 and 454, and should be condensed into one paragraph. 

      We thank the reviewer for this suggestion and have revised the discussion to remove the redundancies as proposed.  

      Reviewer #2 (Recommendations for the authors): 

      To strengthen the manuscript, the following points should be addressed: 

      (1) Quantify amyloid pathology: It is essential to assess amyloid-β levels (soluble and insoluble) in the mPFC of APP/PS1-PV-Cre-tdTomato mice at the studied ages. This would help determine whether the observed circuit-level changes track with disease progression as seen in canonical APP/PS1 models.

      We thank the reviewer for this valuable suggestion and agree that assessing Aβ levels in the mPFC is important to determine whether the observed circuit-level alterations in APP/PS1 mice coincide with the progression of amyloid pathology. Therefore, we assessed the amyloid plaque load in the mPFC of APP/PS1 mice at 16 and 20 weeks of age (new supplemental figure sFig. 1) and observed no difference in plaque load between these two time points. This suggests that the increased excitability in the mPFC cannot be attributed to differences in plaque load (insoluble amyloid beta).

      In line with this, we previously studied both soluble and insoluble Aβ levels in the CA1 and reported that there are no differences between 12 and 16 weeks of age (Kater et al., 2023), while PV cell hyperexcitability is present at 16 weeks of age (Hijazi et al., 2020a). From 24 weeks onwards, the level of amyloid beta increases. Similarly, Végh et al. (2014) showed using immunoblotting that monomeric and low molecular weight oligomeric forms of soluble Aβ are already present as early as 6 weeks of age and become more prominent at 24 weeks of age. Although the soluble Aβ measurements were performed in the hippocampus, we think these findings can be extrapolated to cortical regions, as the APP and PS1 mutations in APP/PS1 mice are driven by a prion promoter, which should induce consistent expression across brain regions. Data from other research groups support this hypothesis (Kim et al., 2015; Zhang et al., 2011). Thus, large regional differences in soluble Aβ are not expected. This temporal progression suggests that increasing levels of soluble amyloid beta might contribute to the emergence of PV cell hyperexcitability. We have added this point to the manuscript (line 585-591):

      ‘Since amyloid beta plaque load in the mPFC remains comparable between 16- and 20-week-old APP/PS1 mice, the observed increased excitability is unlikely the result of changes in insoluble amyloid beta levels. Previous data from our lab show that soluble amyloid beta is already present as early as 6 weeks of age and becomes more prominent at 24 weeks of age (Kater et al., 2023; Végh et al., 2014). The progressive increase in soluble amyloid beta levels may contribute to the emergence of PV cell hyperexcitability.’

      Finally, we previously compared soluble and insoluble amyloid beta levels in APP/PS1 and APP/PS1 Parv Cre mice and showed that these are similar (Hijazi et al., 2020a). While our current study shows the progression of amyloid beta accumulation in APP/PS1 mice, these mice also exhibit altered microcircuitry (enhanced sIPSC frequency on engram cells) at 20 weeks of age, the same age at which we observed PV cell hyperexcitability in APP/PS1 Parv Cre tdTomato mice. This further supports the generalizability of our findings across genotypes, between APP/PS1 and APP/PS1 Parv Cre tdTomato mice. 

      (2) Examine later disease stages: Since the current effects are modest, assessing memory performance, PV cell excitability, and engram inhibition at more advanced stages could clarify whether these alterations become more pronounced with disease progression. 

      We thank the reviewer for this thoughtful suggestion. Investigating advanced disease stages could indeed provide valuable insights into whether the observed alterations in memory performance, PV cell hyperexcitability and engram inhibition become more pronounced over time. Our previous work has shown that changes in pyramidal cell excitability emerge at a later stage than in PV cells, supporting the idea of progressive circuit dysfunction (Hijazi et al., 2020a). However, at these more advanced stages, additional pathological processes, such as increased gliosis (Janota, Brites, Lemere, & Brito, 2015; Kater et al., 2023) and synaptic loss (Alonso-Nanclares, Merino-Serrais, Gonzalez, & DeFelipe, 2013; Bittner et al., 2012), will likely contribute to both electrophysiological and behavioural measurements. Furthermore, we would like to point out that the current changes observed in memory performance, PV hyperexcitability and increased inhibitory input on engram cells at 16-20 weeks of age are not modest, but already quite substantial. Our focus on these early time points in APP/PS1 mice was intentional, as it helps us understand the initial changes in Alzheimer’s disease at a circuit level and to identify therapeutic targets for early intervention. What happens at later stages is certainly of interest, but beyond the scope of this study and should therefore be addressed in future studies. We have incorporated a discussion related to this point into the revised manuscript (line 602-606):

      ‘Moreover, it is relevant to investigate whether changes in PV and PYR cell excitability, as well as input onto engram cells in the mPFC, become more pronounced at later disease stages. Nonetheless, by focussing on early disease timepoints in the present study, we aimed to understand the initial circuit-level changes in AD and identify targets for early therapeutic intervention.’

      (3) Address network hyperexcitability: Spontaneous epileptiform activity has been reported in APP/PS1 mice from 4 months of age (Reyes-Marin & Nuñez, 2017). Including EEG data or discussing this point in relation to your findings would help contextualize the observed inhibitory remodeling within broader network dysfunction. 

      We thank the reviewer for this valuable input and for highlighting the study by Reyes-Marin and Nuñez (2017). In line with this, we recently reported longitudinal local field potential (LFP) recordings in freely behaving APP/PS1 Parv-Cre mice and wild-type control animals between the ages of 3 to 12 months (van Heusden et al., 2023). Weekly recordings were performed in the home cage under awake mobile conditions. These data showed no indications of epileptiform activity during wakefulness, consistent with previous findings that epileptic discharges in APP/PS1 mice predominantly occur during sleep (Gureviciene et al., 2019). Recordings were obtained from the prefrontal cortex (PFC), parietal cortex and the hippocampus. In contrast, the study by Reyes-Marin and Nuñez (2017) recorded from the somatosensory cortex in anesthetized animals. Here, during spontaneous recordings, no differences were observed in delta, theta or alpha frequency bands between APP/PS1 and WT mice. Interestingly, we observed an early increase in absolute power, particularly in the hippocampus and parietal cortex, from 12 to 24 weeks of age in APP/PS1 mice. In the PFC we found a shift in relative power from lower to higher frequencies and a reduction in theta power. Connectivity analyses revealed a progressive, age-dependent decline in theta/alpha coherence between the PFC and both the parietal cortex and hippocampus. Given the well-established role of PV interneurons in network synchrony and in coordinating theta and gamma oscillations critical for cognitive function (Sohal, Zhang, Yizhar, & Deisseroth, 2009; Xia et al., 2017), these findings support the idea of early circuit dysfunction in APP/PS1 mice. Our findings, i.e. hyperexcitability of PV cells, align with these LFP-based network-level observations. These data suggest an early shift in the E/I balance, contributing to altered oscillatory dynamics and impaired inter-regional connectivity, possibly leading to alterations in memory. 
However, whether the observed PV hyperexcitability in our study directly contributes to alterations in power and synchrony remains to be elucidated. Furthermore, it would be interesting to determine the individual contribution of PV cell hyperexcitability in the hippocampus versus the mPFC to network changes and concurrent memory deficits. We have added a statement on network hyperexcitability to the discussion (line 561-565). 

      ‘Interestingly, we recently found a progressive disruption of oscillatory network synchrony between the mPFC and hippocampus in APP/PS1 Parv-Cre mice (van Heusden et al., 2023). However, whether the observed PV cell hyperexcitability directly contributes to changes in inter-regional synchrony, and whether this leads to alterations at a network level, i.e. increased inhibitory input on engram cells, and consequently to memory deficits, remains to be elucidated in future studies.’ 

      (4) Mechanisms responsible for PV hyperexcitability: Related to the previous point, a discussion of the possible underlying mechanisms, e.g., direct effects of amyloid-β, inflammatory processes, or compensatory mechanisms, would strengthen the discussion. 

      We agree with the reviewer that this will strengthen the discussion. We have now added a comprehensive discussion in the revised manuscript to address potential mechanisms responsible for PV cell hyperexcitability (line 579-594):

      ‘Prior studies have shown that neurons in the vicinity of amyloid beta plaques show increased excitability (Busche et al., 2008). We demonstrated that PV neurons in the CA1 are hyperexcitable and that treatment with a BACE1 inhibitor, i.e. reducing amyloid beta levels, rescues PV excitability (Hijazi et al., 2020a). In line with this, we also reported that addition of amyloid beta to hippocampal slices increases PV excitability, without altering pyramidal cell excitability (Hijazi et al., 2020a). Finally, applying amyloid beta to an induced mouse model of PV hyperexcitability further impairs PV function (Hijazi et al., 2020b). Since amyloid beta plaque load in the mPFC remains comparable between 16- and 20-week-old APP/PS1 mice, the observed increased excitability is unlikely the result of changes in insoluble amyloid beta levels. Previous data from our lab show that soluble amyloid beta is already present as early as 6 weeks of age and becomes more prominent at 24 weeks of age (Kater et al., 2023; Végh et al., 2014). The progressive increase in soluble amyloid beta levels may contribute to the emergence of PV cell hyperexcitability. We hypothesize that the hyperexcitability induced by amyloid beta may result from disrupted ion channel function, as PV neuron dysfunction can result from altered potassium (Olah et al., 2022) and sodium channel activity (Verret et al., 2012).’

      (5) Excitatory-inhibitory balance: While the main focus is on increased inhibition onto engram cells, the reported increase in sEPSC frequency (Figure 5g) across genotypes suggests the presence of excitatory remodelling as well. A brief discussion of how this may interact with increased inhibition would be valuable.  

      We thank the reviewer for this comment regarding the interaction between excitatory and inhibitory remodelling. We have now incorporated this discussion point into the revised manuscript (line 528-534):

      ‘Interestingly, both WT and APP/PS1 mice showed an increase in sEPSC frequency onto engram cells, suggesting that increased excitatory input is a consequence of memory retrieval and not affected by genotype. However, only in APP/PS1 mice, the augmented excitatory input coincided with an elevation of inhibitory input onto engram cells. The resulting imbalance between excitation and inhibition could therefore potentially disrupt the precise control of engram reactivation and contribute to the observed remote memory impairment.’

      References

      Alonso-Nanclares, L., Merino-Serrais, P., Gonzalez, S., & DeFelipe, J. (2013). Synaptic changes in the dentate gyrus of APP/PS1 transgenic mice revealed by electron microscopy. J Neuropathol Exp Neurol, 72(5), 386-395. doi:10.1097/NEN.0b013e31828d41ec

      Bittner, T., Burgold, S., Dorostkar, M. M., Fuhrmann, M., Wegenast-Braun, B. M., Schmidt, B., . . . Herms, J. (2012). Amyloid plaque formation precedes dendritic spine loss. Acta Neuropathologica, 124(6), 797-807. doi:10.1007/s00401-012-1047-8

      Busche, M. A., Eichhoff, G., Adelsberger, H., Abramowski, D., Wiederhold, K. H., Haass, C., . . . Garaschuk, O. (2008). Clusters of hyperactive neurons near amyloid plaques in a mouse model of Alzheimer's disease. Science, 321(5896), 1686-1689. doi:10.1126/science.1162844

      Grienberger, C., Rochefort, N. L., Adelsberger, H., Henning, H. A., Hill, D. N., Reichwald, J., . . . Konnerth, A. (2012). Staged decline of neuronal function in vivo in an animal model of Alzheimer's disease. Nat Commun, 3, 774. doi:10.1038/ncomms1783

      Gureviciene, I., Ishchenko, I., Ziyatdinova, S., Jin, N., Lipponen, A., Gurevicius, K., & Tanila, H. (2019). Characterization of Epileptic Spiking Associated With Brain Amyloidosis in APP/PS1 Mice. Front Neurol, 10, 1151. doi:10.3389/fneur.2019.01151

      Hijazi, S., Heistek, T. S., Scheltens, P., Neumann, U., Shimshek, D. R., Mansvelder, H. D., . . . van Kesteren, R. E. (2020a). Early restoration of parvalbumin interneuron activity prevents memory loss and network hyperexcitability in a mouse model of Alzheimer's disease. Mol Psychiatry, 25(12), 3380-3398. doi:10.1038/s41380-019-0483-4

      Hijazi, S., Heistek, T. S., van der Loo, R., Mansvelder, H. D., Smit, A. B., & van Kesteren, R. E. (2020b). Hyperexcitable Parvalbumin Interneurons Render Hippocampal Circuitry Vulnerable to Amyloid Beta. iScience, 23(7), 101271. doi:10.1016/j.isci.2020.101271

      Janota, C. S., Brites, D., Lemere, C. A., & Brito, M. A. (2015). Glio-vascular changes during ageing in wild-type and Alzheimer's disease-like APP/PS1 mice. Brain Res, 1620, 153-168. doi:10.1016/j.brainres.2015.04.056

      Kater, M. S. J., Huffels, C. F. M., Oshima, T., Renckens, N. S., Middeldorp, J., Boddeke, E., . . . Verheijen, M. H. G. (2023). Prevention of microgliosis halts early memory loss in a mouse model of Alzheimer's disease. Brain Behav Immun, 107, 225-241. doi:10.1016/j.bbi.2022.10.009

      Kim, H. Y., Kim, H. V., Jo, S., Lee, C. J., Choi, S. Y., Kim, D. J., & Kim, Y. (2015). EPPS rescues hippocampus-dependent cognitive deficits in APP/PS1 mice by disaggregation of amyloid-β oligomers and plaques. Nature Communications, 6(1), 8997. doi:10.1038/ncomms9997

      Olah, V. J., Goettemoeller, A. M., Rayaprolu, S., Dammer, E. B., Seyfried, N. T., Rangaraju, S., . . . Rowan, M. J. M. (2022). Biophysical Kv3 channel alterations dampen excitability of cortical PV interneurons and contribute to network hyperexcitability in early Alzheimer’s. Elife, 11, e75316. doi:10.7554/eLife.75316

      Reyes-Marin, K. E., & Nuñez, A. (2017). Seizure susceptibility in the APP/PS1 mouse model of Alzheimer's disease and relationship with amyloid β plaques. Brain Res, 1677, 93-100. doi:10.1016/j.brainres.2017.09.026

      Sohal, V. S., Zhang, F., Yizhar, O., & Deisseroth, K. (2009). Parvalbumin neurons and gamma rhythms enhance cortical circuit performance. Nature, 459(7247), 698-702. doi:10.1038/nature07991

      van Heusden, F. C., van Nifterick, A. M., Souza, B. C., França, A. S. C., Nauta, I. M., Stam, C. J., . . . van Kesteren, R. E. (2023). Neurophysiological alterations in mice and humans carrying mutations in APP and PSEN1 genes. Alzheimers Res Ther, 15(1), 142. doi:10.1186/s13195-023-01287-6

      Végh, M. J., Heldring, C. M., Kamphuis, W., Hijazi, S., Timmerman, A. J., Li, K. W., . . . van Kesteren, R. E. (2014). Reducing hippocampal extracellular matrix reverses early memory deficits in a mouse model of Alzheimer's disease. Acta Neuropathol Commun, 2, 76. doi:10.1186/s40478-014-0076-z

      Verret, L., Mann, E. O., Hang, G. B., Barth, A. M., Cobos, I., Ho, K., . . . Palop, J. J. (2012). Inhibitory interneuron deficit links altered network activity and cognitive dysfunction in Alzheimer model. Cell, 149(3), 708-721. doi:10.1016/j.cell.2012.02.046

      Xia, F., Richards, B. A., Tran, M. M., Josselyn, S. A., Takehara-Nishiuchi, K., & Frankland, P. W. (2017). Parvalbumin-positive interneurons mediate neocortical-hippocampal interactions that are necessary for memory consolidation. Elife, 6. doi:10.7554/eLife.27868

      Zhang, W., Hao, J., Liu, R., Zhang, Z., Lei, G., Su, C., . . . Li, Z. (2011). Soluble Aβ levels correlate with cognitive deficits in the 12-month-old APPswe/PS1dE9 mouse model of Alzheimer's disease. Behavioural Brain Research, 222(2), 342-350. doi:10.1016/j.bbr.2011.03.072

    1. With all music viewed as spiritual and thus medicinal within Indigenous cultures, it is often difficult for non-Indigenous people to see the spiritual significance of our music.

      I feel this is true because people usually do not understand what someone else is going through until they feel the same or something similar. So of course they can't understand how music or any other therapy could possibly help. That is normal even now. But back then it meant that an entire culture of people would be judged, criticized, and worse, just because they could not be understood.

    2. The answer that appears to have eluded this question is that songs do not sing of remedies because, to the Indigenous, the music is medicine.

      This line touched me because for me it's the same. When I listen to music it is amazing and it does help me heal in many ways. But when I sing, create, or connect with people because of the music, that right there is the magic. That is the medicine for me. I know many people who feel the same, and it's not even about being a good musician or having skill. It really does something for the soul when we sing or create music, even if it's a random hum or a silly sound.

    3. Western medical practices that medicine when respected, can cure, but when disrespected, can harm.

      I believe this applies to many things in life. When it comes to music, I think it is similar; I agree that too much of anything is bad. When we feel a strong emotion and we listen to something that matches our mood, that feeling is reinforced, almost as if the music amplifies whatever we are feeling.

    1. A second weakness was to focus a chapter on internal stakeholders. This was since in all of my clinical work, the executives leaped to the conclusion that they had internal stakeholders as well. There is some usefulness here, but it is also misleading from the standpoint of making businesses more sensitive to the external stakeholders.

      The term “internal stakeholders” is somewhat vague and could confuse readers about the scope of stakeholder theory. Freeman admits it may be misleading, which highlights the need for precise definitions. Defining “internal” versus “external” stakeholders and explaining why the focus may mislead would make the argument clearer.

    2. In many of the chapters in the book, I would put the ideas a bit differently today but that is the price of a book written nearly 40 years ago, However, SMASA also has many weaknesses, I think that the most glaring one is the imposition of a strategic planning framework on the book.

      Freeman acknowledges weaknesses in SMASA while also defending its enduring relevance. While generally consistent, there is minor tension between admitting flaws and asserting the book’s continued authority. Clarifying how the book’s strengths outweigh its weaknesses would improve internal consistency and reader trust.

    3. basic logic of the book is that if we use the vocabulary of 'stakeholders', we will have a better understanding of how businesses actually work and create value, and how they can work better.

      Freeman’s premise that using the stakeholder vocabulary improves understanding of business is reasonable but not universally provable. While stakeholder theory has influenced management, it may not always lead to better outcomes in every context. The claim is sound as a general observation, but acknowledging limitations or exceptions would strengthen credibility.

    1. A thesis is not your paper’s topic, but rather your interpretation of the question or subject. For whatever topic your professor gives you, you must ask yourself, “What do I want to write about it?” Asking and then answering this question is vital to forming a thesis that is precise, forceful, and confident.

      The thesis presents your answer to the question, which the rest of your essay then supports.

    1. need using data and technology, and to offer them only products, services, and advice that are targeted to meet those desires and requirements.

      The authors present a logically structured definition that connects understanding customer needs to offering tailored solutions. The reasoning is valid because it links cause, which is data-driven understanding, to effect, which is personalized offerings. However, the argument assumes that more data automatically leads to deeper understanding without considering issues of interpretation or bias. The logical structure is sound, but it rests on a strong assumption that data equals understanding.

    1. China and the US have had a largely symbiotic relationship since the 1980s regarding labor, manufacturing, and technology. In the early 1980s, the governments of the two countries agreed that US corporations could contract with Chinese organizations to move their manufacturing operations to China to utilize the competent work and low wages of Chinese employees. As a result, approximately one sixth of US manufacturing jobs were lost to Chinese factories.

      The author claims that the relationship between China and the U.S. has been "largely symbiotic," but this statement may not be logically supported by the evidence provided. A truly symbiotic relationship implies mutual benefit, yet the example focuses mainly on U.S. job losses. The argument might be valid in structure but lacks balance in demonstrating how both sides benefited. More evidence on what the U.S. gained (e.g., lower consumer prices or corporate profits) would strengthen the validity.

    1. On Wednesday, Victoria's Secret announced that its fashion show will have an all-female musical lineup, starring Missy Elliott, Madison Beer, KAROL G and TWICE.

      Announcing the performers at the show in an article about Angel Reese joining the runway is an interesting choice. It may be a way to broaden the audience: some people may not know Angel Reese, but some may be fans of Missy Elliott or TWICE.

    1. Marx got much of the problem right; he got the solution wrong. Many of those in business ethics are trying by piecemeal change to help society improve by helping corporations improve. All too often even those engaged in political philosophy deal only with governmental change or individualistic approaches to human rights, ignoring the very real and often dominating influence of the modern global corporation. Those in business ethics focus on business and see it not only as one of the causes of the ills that Marx described but as one of the key players in the amelioration of those ills. While Rorty has been battling academic philosophy in its analytic incarnation, through a quiet revolution in philosophy departments those in applied and business ethics have been pursuing what he seems to agree is important. Where he and they may disagree is on whether Marx was indeed correct that practice needs to be informed by theory. Those in business ethics believe that it does

      In this part of the article, De George supports the idea that small, gradual improvements in business ethics are more realistic and effective than trying to completely change the entire economic system. This argument makes sense because practical, step-by-step changes are often easier to achieve and sustain within existing business structures. However, the strength of this claim depends on how effective those small reforms actually are—sometimes they only create the appearance of progress without fixing deeper issues. To make his argument stronger, De George could acknowledge these trade-offs and explain when gradual reform is enough and when bigger structural changes are necessary, using real examples of companies or industries where ethical improvements led to meaningful change.

    2. Rorty ends by expressing a need for something to replace Marx's communist utopia-even though Marx was trained as a philosopher and wrote no novels or stories. Rorty's observations about "jungle capitalism" are not likely to move anyone to action or even to serious thought. Rorty concludes, "Perhaps the business ethics community will provide an environment in which such dreams are encouraged" (381). The business ethics community, of which those trained in philosophy form a large part and had a large hand in bringing about, is attempting and has in part succeeded in providing such an environment. Marx got much of the problem right; he got the solution wrong. Many of those in business ethics are trying by piecemeal change to help society improve by helping corporations improve. All too often even those engaged in political philosophy deal only with governmental change or individualistic approaches to human rights, ignoring the very real and often dominating influence of the modern global corporation.

      In this section, De George praises philosophers who take a practical approach by working directly with real-world ethical issues, but he also emphasizes the influence of major thinkers like Kant and Mill on business ethics. This creates a bit of tension: if the strength of applied ethics is its focus on context and practicality, then relying too much on broad, abstract theories could seem inconsistent. De George does hint that both can work together, using traditional theory as guidance while solving concrete problems, but he doesn't fully explain how they connect. To make his argument clearer, he could show exactly how universal theories can be applied in specific business contexts without losing their practical value.

    3. to add unless it can be considered enlarging society's moral imagination. My suggestion is that even if he chooses to use that criterion, he will find that the field of business ethics, as developed by philosophers, measures up to it, providing he does not use an arbitrarily narrow definition of moral imagination. The third strand of business ethics is the business ethics movement. That movement can be dated from the mid-1980s. It is not entirely irrelevant that it came after the development of the field, and has been influenced by the field. Not all businesses, not all business persons, not all professors of business have embraced the field or the movement. But the movement clearly is part of the existing social scene. The field has influenced ethics in business and has influenced the business ethics movement, and each of them has influenced the others

      In this part of the article, De George disagrees with Rorty’s idea that philosophy should be judged by whether it improves people’s character. Instead, he suggests that philosophy’s value lies in helping expand moral imagination and guide institutions. This argument makes logical sense because changing the standard for evaluation naturally changes how we see philosophy’s relevance. Still, De George never clearly defines what “moral imagination” means, which makes his point feel a bit vague and open to interpretation. To make his argument stronger, he could give specific examples of what an expanded moral imagination looks like in practice, like when companies start recognizing more stakeholder interests or include human-rights principles in their values.

    4. Included in the movement has been the development of codes of conduct, compliance programs, corporate ethics officer positions, ethics training programs, and other trappings of ethics. More importantly, as public consciousness has been raised about sweatshops and environmental protection, so corporate policies have at least to some extent changed. Whether corporations talk of the triple bottom line, or of corporate social responsibility, or of ethics and whether one sees their actions as merely public relations reactions to criticism or as more nobly inspired-corporations are slowly changing their behavior and taking into account not only shareholders but what have become known as other stakeholders

      In this section, De George suggests that the work of philosophers has influenced corporate behavior and the way companies talk about ethics. The argument makes sense because it follows a clear logic—philosophers create moral frameworks, those ideas spread through education and public discussion, and eventually, companies adopt new ethical policies or language. However, De George doesn’t fully prove that philosophers were the main cause of these changes, since factors like laws, market demands, media pressure, or social movements could also explain them. This makes his claim seem a bit overstated. To make it stronger, he could mention clear examples of philosophers directly advising businesses or policymakers, showing how their ideas actually shaped corporate reforms.

    5. The same is true of our philosophy departments. Taking a course-or several courses-in ethics is no guarantee that one's character will be improved. But there is some likelihood that one's moral sensibility-or one's moral imagination will be expanded and, even using Rorty's criteria, that may help students be better persons, if they are so inclined or motivated

      In this part of the article, De George argues that moral imagination by itself is not enough and that ethical theory is needed to make sense of conflicting intuitions and to justify actions. The argument follows a logical path because if intuitions often clash, then using structured frameworks can help organize and clarify them. However, De George presents theory as the only real way to create coherence, without acknowledging that there are other valid ethical methods like narrative ethics, casuistry, or reflective equilibrium. This creates a small false dichotomy because he makes it seem like we must choose between imagination and theory when in reality both can work together. To make his point stronger, he could include examples of how theory and intuition complement each other in ethical reasoning.

    1. Prior research on AI limitations indicates that, although modern AI systems with advanced deep learning capabilities are very impressive, humans are still more advanced in such qualities as creativity, imagination, and emotions in general (Braga and Logan 2017) and creativity and emotional and social intentions in design and art specifically (Hertzmann 2018, Mazzone and Elgammal 2019). Recent advances in generative adversarial networks (i.e., so-called creative adversarial networks) suggest that algorithms can be trained to use the same distribution of styles used by human artists but at the same time to maximize the differences between a new algorithmically generated art and all prior works, thus making the AI-generated art as novel as possible

      This shows that the best human workers still outperform AI in these fields. Therefore, AI is not really a threat to the industry as a whole, only to workers who are less dedicated. This could benefit both the industry and its workers, as it would push workers to do greater things.

    2. AI system. To understand whether and how designers change their design submissions in response to the threat from AI, we measure the designs' emotional content and complexity because those variables have been shown (by prior research in psychology and marketing) to affect esthetic perception of art and design images as described later. We then examine whether and how the emotional content and complexity of design submissions affect the likelihood of winning a contest and how these differ for successful and unsuccessful designers before and after the introduction of the AI system.

      The area of creative design, which extends into marketing as a whole, once felt safe from the advent of technology. But what was once believed to be untouchable due to its reliance on human intellect is now being threatened by AI. People's newfound reliance on AI is creating a dangerous situation for workers in these fields.

    1. “There is no excuse for this administration to abandon families who rely on SNAP, or food stamps, as a lifeline.”

      This is an informed opinion from the state attorney general, Letitia James. She is knowledgeable about legal policy issues, but it still reflects her personal judgment.

    1. “This life, as thou livest it at present, and hast lived it, thou must live it once more, and also innumerable times; and there will be nothing new in it, but every pain and every joy and every thought and every sigh, and all the unspeakably small and great in thy life must come to thee again,

      If we were to live the same life over and over again, we would experience the same joy and pain. Nothing new would happen or change. Every big or small thing would come back to us, and we would relive each moment as if it were the first time.

  5. drive.google.com
    1. The deconstruction of the counterfactual premises of neoclassical economics lets us appreciate that there is, in fact, a considerable room for ethics not only outside but also inside the economy (Brodbeck, 2000).

      In proposing "humanistic management" as an answer, Dierksmeier identifies moral rationality as the wellspring of better business decisions. Ethically persuasive, this could be supported further with empirical or policy-based evidence, such as the integration of behavioral economics or stakeholder theory, in order to show how moral and functional incentives converge. This would transform his proposal from a philosophical argument into a pragmatically testable approach, making it more useful for management educators.

    2. Instead of continuing to hold fast to the elusive ideal of value-free science, we should rather concentrate on the values intrinsic to any and all scientific research (Douglas, 2009).

      The writer contends that economics should not be "value free" but should come out explicitly with its normative foundations. Earlier in the chapter, however, he criticizes utilitarianism as being too relativistic and subjective about values. There is a slight inconsistency here: he wants economics to be normatively sensitive and yet not subjectivist. To get out of this contradiction, he would have to be more precise that his ideal of normativity is humanistic universalism rather than utilitarian relativism. This clarification would make his general ethical stance more coherent.

    3. All responsibility presupposes freedom, and corporate responsibility makes no exception; the demand for (more and better) corporate responsibility requires managerial freedom, e.g., to deviate from maxims of sheer profit-maximization in favor of more comprehensive managerial objectives.

      The author argues that freedom is required for all responsibility, and that corporate responsibility therefore requires managerial discretion. This argument is philosophically sound in moral theory but dubious in applied economics. It rests on the presumption that freedom and constraint are mutually exclusive, ignoring the fact that much moral action takes place under constraints. The argument would be much stronger if Dierksmeier allowed for degrees of freedom, or situational limits to moral responsibility, rather than an either/or solution.

    4. I argue that neglect of the idea of freedom in economic theory has led to an inadequate conceptualization of the ethical responsibilities of corporations within management theory.

      Dierksmeier argues that the failure to include freedom in economic theory leads to flawed models of corporate responsibility. The structure of his argument follows a logical path: he connects the absence of "freedom" in theory to ethical failure in practice, using the case of business school education. His conclusion, however, rests on the large assumption that managerial ethics is directly affected by theoretical assumptions. That causal relationship, while plausible, is asserted rather than established with sound empirical evidence. A stronger version would include such evidence, for example a comparison of ethics outcomes at humanistic versus non-humanistic business schools.

    1. The intention of this article was to illustrate the seven building blocks of the dark side of social media, in great part to motivate further research that tries to untangle the underlying mechanisms in new ways. Existing theories cannot necessarily be transferred to the social media sphere (Naylor, Lamberton, & West, 2012). New theories, or combinations of existing theories, might better suit the inherent characteristics of social media, akin for example, to Scheiner, Krämer, and Baccarella (2016) who base their theoretical framework to explain unethical behavior on social media by entrepreneurs on the concept of moral disengagement and regulatory focus theory. We believe that our dark side honeycomb framework can help to motivate and guide the combination of lenses from different disciplines in order to develop novel theories, models, and classification frameworks that shed light on the dark side of social media.4.2. Use adequate methodologies for online and dark contexts!There is a significant opportunity for future research studies using contemporary methodologies that suit the characteristics of social media. For instance, a recent and effective development for understanding online behavior might be netnography (a portmanteau of Internet and ethnography), which allows researchers to study social interaction in modern digital communication contexts. However, a lot has happened since its introduction by Kozinets in 1998: smart phones with high-definition cameras, ubiquitous data networks, and social media networks that did not exist at all. The activities in the sharing building block of the dark side honeycomb, for instance, certainly were not the same before the widespread adoption of these tools, and neither were likely any of the other building blocks. These technological developments and their pervasiveness in our society certainly warrant the advancement of digital data collection and analysis methodologies. 
Especially in light of recent advancements (e.g., artificial intelligence-powered social media content analysis tools included in IBM Watson), we hope that fellow researchers will develop and test new ways in which we can study the dark side of social media.

      The authors’ call to action is clear but focuses mainly on future research rather than immediate solutions. They could improve the argument by connecting each research goal to a specific action that platforms or organizations can take now. For example, linking the framework to privacy guidelines or anti-harassment measures would make the recommendations feel more urgent. Expanding these ideas would create a stronger bridge between academic theory and everyday application.

    2. the more outrageous the posting, the quicker it tends to spread and harm reputations.

      This observation is believable, but the authors do not provide enough evidence to fully support it. It assumes that all viral posts are outrageous, which may not always be true. Adding research about how emotional or shocking content affects engagement would make this claim more credible. Without that support, the statement feels more like an assumption than a verified fact.

    3. The dark side of this functionality is that the location and availability of users are known and can be tracked without their awareness or consent.

      This example effectively shows the privacy risks connected to location tracking on social media. However, the authors could improve this section by suggesting realistic solutions such as opt-in tracking or time-limited data storage. These ideas would not only highlight the problem but also guide readers toward potential policy changes. Offering a concrete path forward would make the paper more practical and relevant for real-world use.

    4. The “shallowing hypothesis,” for instance, suggests that certain types of social media activity (e.g., sharing and conversing) lead to a decline in ordinary daily reflective thinking and instead promote quick and superficial thoughts that can result in cognitive and moral triviality.

      The idea of “shallowing” is interesting but not clearly defined in this context. The authors should explain how reflective thinking is measured and what evidence supports this decline. Without defining the term, the statement feels vague and difficult to evaluate critically. Providing a short summary of a related study or explaining the criteria for “superficial thinking” would make this claim more concrete and understandable.

    5. Many studies have touted the advantages that social media would bring to individuals and firms (e.g., Kumar, Bezawada, Rishika, Janakiraman, & Kannan, 2016; Sabate, Berbegal-Mirabent, Cañabate, & Lebherz, 2014; Wagner, 2017). They highlight the “bright side of social media” and how engagement between firms and consumers is being democratized (Kietzmann, Hermkens, McCarthy, & Silvestre, 2011). For firms, this means social media would improve marketing, public relations, customer service, product development, personnel decision-making, and other business activities that rely on information exchanges and engagement with consumers and employees. Many of these advantages have materialized, thus leading almost 50% of all EU firms to use at least one form of social media in 2017 (Eurostat, 2017b). These firms use social media to not only broadcast company content but also track sentiment worldwide by analyzing user-generated content (Paniagua, Korzynski, & Mas-Tur, 2017), consumer-generated intellectual property (Berthon, Pitt, Kietzmann, & McCarthy, 2015), and interactions on social networking sites (Wagner, Baccarella, & Voigt, 2017), to adjust their business and marketing strategies appropriately.Regardless of the numerous opportunities social media offer, an increasing number of incidents demonstrate that there is undoubtedly a “dark side” to social media.

      The authors make a logically valid point by contrasting the benefits of social media with its negative consequences. The reasoning makes sense because the same features that allow communication and connection can also spread harm. Still, the paragraph would be stronger if the authors connected specific social media features directly to the harms they cause. This would make the relationship between benefits and drawbacks more precise instead of relying on general contrast.

    1. The third component of the love triangle is commitment/decision (Sternberg, 1986, 1988). This component refers to the decision to love someone and the commitment to maintain that love. Because commitment is based on cognition and decision making, Sternberg referred to it as the “cool” or “cold” component. Of the three components of the love triangle, commitment is most stable over time with commitment typically building gradually and then stabilizing (Acker & Davis, 1992). Commitment is a stronger predictor of relationship satisfaction and longevity than either intimacy or passion (Acker & Davis, 1992; S. S. Hendrick, Hendrick, & Adler, 1988). In a study by Fehr (1988), college-aged students rated how closely various words or phrases, such as affection and missing each other when apart, relate to love. Of the 68 words and phrases Fehr listed, the word trust was rated as most central to love. Commitment ranked 8th overall, suggesting that it is also critical in people’s conceptualizations of love. The other two components of the triangular theory of love were also important, although less central, with intimacy ranking 19th and sexual passion rating 40th. Fehr (1988) also had college-aged students rate words and phrases describing the concept of commitment. Loyalty, responsibility, living up to one’s word, faithfulness, and trust were the top five descriptors of commitment, suggesting that commitment involves being there for someone over the long haul. Yet commitment alone is not enough to keep a relationship happy. Fatuous love is rooted in commitment and passion without intimacy. This type of love is relatively rare in modern times. Relationships that exemplify fatuous love are committed but are based on sex rather than intimacy. Historically these included mistress relationships where there was an arrangement for long-term support for sex without emotional intimacy. 
Some modern-day friends-with-benefits relationships also fit this description to some extent, especially when two people are long-term hookup buddies but do not have the type of emotional connection that romantic couples have (see Chapters 9 and 10 for more on friends-with-benefits relationships). Most hookup buddies, however, have little commitment. In general, these relationships are less satisfying than those characterized by consummate or romantic love. Fatuous love: A type of love characterized by commitment and passion without intimacy. The least satisfying relationships are characterized by empty love, which means they have commitment but relatively low levels of intimacy and passion. Some long-term relationships fall into this category. For instance, if partners no longer feel attached to each other but stay together for religious reasons or because of the children, their love might be characterized as empty. In other cases, empty love characterizes the beginning of a relationship. For example, spouses in arranged marriages may begin their relationships with empty love. Intimacy and passion may, or may not, emerge later.

      This passage explains how commitment functions as the rational or “cool” part of Sternberg’s triangular theory of love, emphasizing decision making and long term stability. I found it interesting that commitment is described as a stronger predictor of relationship satisfaction than intimacy or passion. It shows that emotional or physical connection alone isn’t enough to sustain love. The mention of trust being rated as most central to love makes sense because trust reinforces reliability and loyalty which are essential for long term relationships. What stands out is that even though commitment is crucial, the passage reminds us that it can’t exist in isolation. Without intimacy, relationships can become mechanical or unfulfilling, like the “fatuous love” described. This makes me think about how modern relationships often emphasize passion early on but may struggle to build the consistent trust and loyalty that real commitment requires.

  6. drive.google.com
    1. It does not propose to identify the risk level of AI systems but rather focuses on its outcome to regulate.

      If the UK’s plan doesn’t identify risk levels and only focuses on outcomes, they wouldn't know which systems are dangerous. It feels like they’re assuming outcome-based regulation will work just as well without proving it. I would say that shows hasty generalization because skipping risk evaluation could mean missing red flags early on. The author should explain why this approach is still reliable or give an example of it actually working.

    2. s of May and June 2020, the Global Privacy Assembly found that 68 per cent of the 38 members surveyed did not have laws or guidelines specific to accountability in the use of AI.

      It was shocking to me that most of the countries surveyed did not have specific laws for AI accountability in 2020. This shows that even though people have been talking about regulating AI for a while, actual rules haven't caught up. The paragraph implies that having laws automatically makes AI safer, but it doesn't explain how these rules would actually be enforced. I feel there is some false cause here, because it assumes that passing laws will simply make AI safer. It would also help to see which countries created good regulations, or cases where the lack of laws caused real problems.

    1. But what does it mean to be a compelling character? Some distinguish between round and flat characters. A round character is a complex, often conflicted character with a deep internal life who usually undergoes some kind of change over the course of the story. A flat character lacks that complexity, does not change at all over the course of the story, and is usually there only to help the more round characters on their journeys.

      Round characters feel real because they grow and wrestle with conflict, while flat characters are more like support beams holding up the main story.

    2. Act three, which is usually about the same length as act one, is all about our protagonist rallying to overcome that last obstacle leading to a climactic showdown and a resolution to their story. Usually that means they reach the goal defined in act one. But sometimes the journey clarifies a new goal, or they realize they always had what they were searching for and just needed to see it in themselves (insert eye roll here). But you get the idea, act three brings some kind of resolution.

      I've always thought about this process and the usual timing of events in movies. Seeing that even literature follows something similar is very interesting.

    1. First, there was the Codex Arundel. It is, writes Jonathan Jones at The Guardian, “the living record of a universal mind”—but also, specifically, the mind of a “technophile.”

      Seeing a tiny bit of Codex Atlanticus when it came to DC was so fun. Awesome seeing this wide ranging set of ideas online now! So cool. What a mind.

  7. drive.google.com
    1. There have been more unsolved bombings of Negro homes and churches in Birmingham than in any other city in the nation.

      Unsolved... who knows who could've committed these crimes but it seems that the police are indifferent. To many, indifference is viewed as complicity.

    1. Acetate sunglasses frames deliver punchy style and a comfort factor you can’t ignore, but metal frames shine when you want lightweight durability and modern minimalism

      Punchy as in

    1. a user will want to move their passkeys to the Credential Manager of a different vendor or platform. This is currently challenging to do, but FIDO and vendors are actively working to address this issue and we wait to see support for this take hold across the market.

      Good list of issues in this article. This issue of the Credential Exchange Protocol / Format is so key to me, and so timely for this article, since the initial 1.0 was done a year ago. AFAIK there aren't implementations yet; passkeys are locked on a device.

    1. Acquiring good taste comes through using various things, discarding the ones you don’t like and keeping the ones you do. if you never try various things, you will not acquire good taste.

      This is the core point I was hoping for. I don't think it's super well supported or well argued here. But it shows up.

      We don't know who we are unless we try new things, unless we put ourselves through processes of discovery. Taste comes from trying, and the tinkerers are down to FAFO.

    1. Most complex software ships with a few bugs. Obviously, we want to avoid them, but the more complex a feature is, the harder it is to cover all the use-cases. As we get closer to our RC date, do we feel confident that what we're shipping has as few blocking bugs as possible? I would like to say we're close, but the truth is I have no idea. It feels like we'll have to keep trying the features for a bit until we don't run into anything - but we have less than 3 weeks before the RC ships. Here's a few surprising bugs that need to get fixed before I would feel comfortable shipping node12 in stable.
    1. Why 2025 Signals the End of the Traditional Event LoopWith multishot receives and accepts, an io_uring-based server can initialize its I/O intents once and then simply process completions in a loop. The “event loop” becomes a completion-processing loop,

      The section header says 2025, but 6.0 was released in October 2022. Still, an exciting shift, and phase shifts tend to happen slower.

    1. Whether in old times, beyond the reach of history or tradition, and in some remote period of civilization, when, as some imagine, the arts may have flourished to a degree unknown to those whom we term the Ancients, there existed an intercourse between the opposite shores of the Atlantic; whether the Egyptian legend, narrated by Plato, respecting the island of Atalantis, was indeed no fable, but the obscure tradition of some vast country, engulphed by one of those mighty convulsions of our globe, which have left traces of the ocean on the summits of lofty mountains, must ever remain matters of vague and visionary speculation.

      Observation : The author is talking about how in ancient times there may have been an island called Atlantis

      Interpretation : Since there were many unexplored and unfound places in ancient times, the author is saying that there was an island that was called Atlantis that was never discovered

      Connection : This connects to how many places in ancient times were discovered and that there were many unknown places at the time.

      Context : This sets up what is coming, because the author is about to discuss the life of Christopher Columbus and highlight his achievements.

    1. The topic of one-way doors fascinated me. These are decisions that can’t be undone. They require a mix of experience, intuition, optimism, and authority.

      Yet in software, I think engineering culture has adopted an eagerness not for adaptability and remakeability, but a penchant for simple-as-can-be directness. We want to build only enough, even if that means not supporting broader possible futures.

    1. . If there are more than three, each one shall pay five shillings. 3. Those who shall have been present with boats shall be sentenced to three shillings

      This law may have encouraged kidnappers or rapists to work in groups, since the penalty would be less if they worked together. But they also had a law enforcing guilt by association. Would this be added onto the original charge of kidnapping?

    1. An example I often cite to drive home AI’s importance to our careers is that of Microsoft Office. Back in the 90s, MS Office did not seem like a mandatory tool to familiarize oneself with. It became one over time. Today, you are expected to know how to use Word, PowerPoint, or Excel. That, however, does not mean that you’ve had to learn how to build these applications, only to understand what they can and cannot do and to leverage them. That is the trajectory AI is on.

      This is an important point that shows what the future of AI might be. Looking back, a lot of things that are now a major part of how work gets done were all new at one point and people were against them. This is the way a lot of people view change and they don't believe it will ever work out. AI is new and people are worried about the use of AI but in the future, it might be very involved with most things.

      My original thinking going into this class was that in Medieval times people tried to live their everyday lives nicely in their villages, always outside, surrounded only by people, nature, and their village, where things seem calm. But they were entirely unfree; I imagine that people didn't really have a say at all, where one wrong move or anything displeasing to the king or his guards meant off with your head. So on page 207, when the friend says he cannot lend his oxen because it won't do its job otherwise and he will be severely punished, it does support my original thoughts. I wonder if this will change.

    1. 'Her face card ate and her walk was good for her first ever show.' Another added, 'She did so good, I wish they gave her bigger wings.'

      This is an opinion, since the sentence is a subjective statement by a viewer of the show. "Good" and "bad" depend on someone's own judgment of what they believe is good or bad. Someone might have liked her wings the way they were, but this viewer wanted her wings to be bigger (a subjective opinion).

    1. First, AI‐relatedethics, fairness and privacy are probably the most important topicsthat have not had the prominence they deserve in our three fields ofstudy.

      This statement is partly subjective and partly factual, and the author dives straight into the claim that the most important topic we should be focusing on is AI. I agree that attention to AI is needed, but I think it is misleading to suggest it is the only thing worth focusing on, rather than other issues that are impacting us too. Focusing only on certain aspects can mislead readers and cause them to fall off task.

    1. Some participants argued that moredata is necessary to capture the full range of cultural expressions, while others contended thatthe focus should be on developing thicker development pipelines that incorporate expertiseand context. They discussed the limitations of current models, which often operate on crudemetrics and may not adequately represent the richness of cultural data.

      This more vs thicker data debate is a good one, but it is also begging the question - do we want to model all cultural variation? e.g. https://aclanthology.org/2025.naacl-long.273.pdf

    2. Institutional aspects and interdisciplinarity play a significant role in the cultures of AIproduction. There is a need for alternative imaginaries of technology that go beyond thecorporate inclusion of data.

      This feels like a real throughline regardless - how do we do technology that is not adapted noblesse oblige, but by the stakeholders who use it?

    1. So, are we like Plato’s Cave dwellers — not just in infancy, but throughout our adult lives? It seems we are, at least in one important way: I refer to the unsettling fact that we too are haunted by things we do not know we do not know; and we cannot imagine how drastically those unknowns would alter our lives and our view of the world.

      We are all ignorant in some way or form, whether in book smarts or street smarts: communication, knowledge, money management, or other areas. No one can know everything.

    2. Human beings tend to prefer cognitive comfort, the reinforcement of the familiar, to an encounter with the unknown.

      YES! I've been saying that for years! In different wording, but still!

    3. Utter ignorance, however, for which the dictionary offers the term ignoration, is yet more profound: The prisoners in Plato’s Cave do not know what they do not know; they do not even know that they do not know. They dwell in ignorance, but cannot recognize it. Ignoration is thus a predicament, a trap — one that is not comprehended by those who are caught in it and dwell there. In a sense, they are not in a place at all: Theirs is rather a placelessness in which one doesn’t even know one is lost.Fortunately, this trap, like a Chinese finger puzzle, has a simple solution: learning. And yet, it is remarkable that an escape occurs — how does one come to learn what one does not know one does not know? After all, the prisoners have no ability to free themselves; more to the point, they have no motivation to escape, since even that desire would presuppose a sense of possibility they lack. Their bondage seems natural to them; it is their form of life; nothing better calls to them. They cannot see their ignorance as ignorance. As the influential Muslim philosopher Al-Ghazzali put it: “Heedlessness is an illness which the afflicted person cannot cure himself.”

      It's the biggest trap in human existence, one that has cost millions of lives throughout history: following blindly.

    4. Suppose, however, that you had never heard of the Singer automobile. Despite your familiarity with antique automobile manufacturers and models, you might be surprised to learn of a make or model that had escaped your notice. Or, imagine that you, somewhat less expert, only knew the names of a few sports car manufacturers. In either case, you would have some sense of what acquiring such new knowledge would be like; you could specify its parameters beforehand. You would grasp in a general way what learning about an unfamiliar automaker would entail; and given that possibility, you could identify what it is you do not know — albeit with less precision than in the first case. Such factual ignorance can be delineated in this way because you possess other general, relevant knowledge (in this case, knowledge about cars, their manufacturers, the meaning of “roadster,” and so on). In these ordinary situations, it is the knowledge we possess that serves to awaken and focus our sense of our own ignorance.

      "Awaken and focus our sense of our own ignorance," we can specialize in a broad topic. For example, an animator is experienced in 3D animation but not so much in 2D animation.