    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review): 

      Summary: 

      This study presents convincing findings that oligodendrocytes play a regulatory role in spontaneous neural activity synchronisation during early postnatal development, with implications for adult brain function. Utilising targeted genetic approaches, the authors demonstrate how oligodendrocyte depletion impacts Purkinje cell activity and behaviours dependent on cerebellar function. Delayed myelination during critical developmental windows is linked to persistent alterations in neural circuit function, underscoring the lasting impact of oligodendrocyte activity. 

      Strengths: 

      (1) The research leverages the anatomically distinct olivocerebellar circuit, a well-characterized system with known developmental timelines and inputs, strengthening the link between oligodendrocyte function and neural synchronization. 

      (2) Functional assessments, supported by behavioral tests, validate the findings of in vivo calcium imaging, enhancing the study's credibility. 

      (3) Extending the study to assess the long-term effects of early-life myelination disruptions adds depth to the implications for both circuit function and behavior.

      We appreciate this positive evaluation.

      Weaknesses: 

      (1) The study would benefit from a closer analysis of myelination during the periods when synchrony is recorded. Direct correlations between myelination and synchronized activity would substantiate the mechanistic link and clarify if observed behavioral deficits stem from altered myelination timing. 

      We appreciate the reviewer’s thoughtful suggestion and have expanded the manuscript to clarify how oligodendrocyte maturation relates to the development of Purkinje-cell synchrony. The developmental trajectory of Purkinje-cell synchrony has already been comprehensively characterized by Good et al. (2017, Cell Reports 21: 2066–2073): synchrony drops from a high level at P3–P5 to adult-like values by P8. We found that myelination in the cerebellum first appears at P5–P7 (Figure S1A, B), indicating that the timing of Purkinje-cell desynchronization coincides with the initial appearance of oligodendrocytes and myelin in the cerebellum. To determine whether myelin growth could nevertheless modulate this process, we quantified ASPA-positive oligodendrocyte density and MBP-positive bundle thickness and area at P10, P14, P21, and adulthood (Fig. 1J, K, Fig. S1E). Both metrics increase monotonically and clearly lag behind the rapid drop in synchrony, indicating that myelination is unlikely to be the primary trigger for desynchronization. When oligodendrocytes were ablated during the second postnatal week, however, synchrony was reduced (new Fig. 2). Thus, once myelination is underway, oligodendrocytes become critical for maintaining synchrony, acting not as initiators but as stabilizers and refiners of the mature network state.

      We have added a new subsection to the Discussion (lines 451–467) in which we propose a two-phase model. Phase I (P3–P8): high early synchrony is generated by non-myelin mechanisms (e.g., transient gap junctions, shared climbing-fiber input). Phase II (P8 onward): as oligodendrocytes proliferate and ensheath axons, they fine-tune conduction velocity and stabilize the mature, low-synchrony network state.

      We believe these additions fully address the reviewer’s concerns.

      (2) Although the study focuses on Purkinje cells in the cerebellum, neural synchrony typically involves cross-regional interactions. Expanding the discussion on how localized Purkinje synchrony affects broader behaviors - such as anxiety, motor function, and sociality - would enhance the findings' functional significance.

      We appreciate the reviewer’s helpful suggestion and have expanded the Discussion (lines 543–564) to clarify how localized Purkinje-cell synchrony can influence broader behavioral domains. In the revised text we note that changes in PC synchrony propagate to thalamic, prefrontal, limbic, and parietal targets, thereby impacting distributed networks involved in motor coordination, affect, and social interaction. Our optogenetic rescue experiments further support this framework, as transient resynchronization of PCs normalized sociability and motor coordination while leaving anxiety-like behavior impaired. This dissociation highlights that different behavioral domains rely to varying degrees on precise cerebellar synchrony and underscores how even localized perturbations in Purkinje-cell timing can acquire system-level significance.

      (3) The authors discuss the possibility of oligodendrocyte-mediated synapse elimination as a possible mechanism behind their findings, drawing from relevant recent literature on oligodendrocyte precursor cells. However, there are no data presented supporting this assumption. The authors should explain why they think the mechanism behind their observation extends beyond the contribution of myelination or remove this point from the discussion entirely.

      We thank the reviewer for pointing out that our original discussion of oligodendrocyte-mediated synapse elimination was not directly supported by data in the present manuscript. Because we are actively analyzing this question in a separate, follow-up study, we have deleted the speculative passage to keep the current paper focused on the demonstrated, myelination-dependent effects. We believe this change sharpens the mechanistic narrative and fully addresses the reviewer’s concern.

      (4) It would be valuable to investigate the secondary effects of oligodendrocyte depletion on other glial cells, particularly astrocytes or microglia, which could influence long-term behavioral outcomes. Identifying whether the lasting effects stem from developmental oligodendrocyte function alone or also involve myelination could deepen the study's insights. 

      We thank the reviewer for raising this point and have performed the requested analyses. Using IBA1 immunostaining for microglia and S100β for Bergmann glia, we quantified cell density and marker signal intensity at P14 and P21. Neither microglial nor Bergmann-glial measures differed between control and oligodendrocyte-ablated mice at either time point (new Figure S2). These results indicate that the behavioral phenotypes we report are unlikely to arise from secondary activation or loss of other glial populations.

      We have now added these results (lines 275–286) and also discuss myelination and other oligodendrocyte functions (lines 443–450). It remains difficult to disentangle conduction-related effects from myelination-independent trophic roles of oligodendrocytes. We therefore note explicitly that future work employing stage-specific genetic tools or acute metabolic manipulations will be required to parse these contributions more definitively.

      (5) The authors should explore the use of different methods to disturb myelin production for a longer time, in order to further determine if the observed effects are transient or if they could have longer-lasting effects.

      We agree that distinguishing transient from enduring effects is critical. Importantly, our original submission already included data demonstrating a persistent deficit in PC population synchrony (Fig. 4, previous Fig. 3): (i) at P14—the early age after oligodendrocyte ablation—population synchrony is reduced, and (ii) the same deficit is still present in adults (P60–P70) despite full recovery of ASPA-positive cell density and MBP area and thickness (Fig. 2H-K, Fig. S1E, and Fig. 4). We also ablated oligodendrocytes after the third postnatal week. Despite a similar acute drop in ASPA-positive cells, neither population synchrony nor anxiety-like, motor, or social behaviors differed from littermate controls. Thus, extending myelin disruption beyond the developmental window does not exacerbate or prolong the phenotype, whereas a short perturbation within that window leaves a permanent timing defect. These findings strengthen our conclusion that it is the developmental oligodendrocyte/myelination program itself—rather than ongoing adult myelin production—that is essential for establishing stable network synchrony. We now highlight this point explicitly in the revised Discussion (lines 507–522).

      (6) Throughout the paper, there are concerns about statistical analyses, particularly on the use of the Mann-Whitney test or using fields of view as biological replicates.

      We appreciate the reviewer’s guidance on appropriate statistical treatment. To address these concerns, we have re-analyzed all datasets that contained multiple measurements per animal (e.g., fields of view, lobules, or trials) using nested statistics with animal as the higher-order unit. Specifically, we applied a two-level nested ANOVA when more than two groups were compared and a nested t-test when two conditions were present. The re-analysis confirmed all original conclusions. Because the nested models yielded effect sizes comparable to those from the Mann–Whitney tests, we have retained the mean ± SEM for ease of comparison with prior literature, but we now also report all values for each mouse in Table 1. In cases where a single measurement per mouse was compared between two groups, we used the Mann–Whitney test and present the results in the graphs as median values.
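
      As an illustration only, the snippet below is a minimal sketch of this kind of nested treatment in Python. It is not the analysis code used for the manuscript; the input file and column names (mouse, group, synchrony) are hypothetical placeholders. A mixed-effects model with a random intercept per animal approximates the nested comparison, and averaging within each mouse before a Mann–Whitney U test corresponds to the single-value-per-animal case described above.

      ```python
      # Minimal sketch of a nested analysis with animal as the higher-order unit.
      # File name and column names (mouse, group, synchrony) are hypothetical.
      import pandas as pd
      from scipy import stats
      import statsmodels.formula.api as smf

      df = pd.read_csv("synchrony_per_fov.csv")  # one row per field of view

      # Nested (mixed-effects) comparison: fixed effect of group, with a
      # random intercept for each mouse to absorb within-animal correlation.
      mixed = smf.mixedlm("synchrony ~ group", data=df, groups=df["mouse"]).fit()
      print(mixed.summary())

      # Single value per mouse: average fields of view within each animal,
      # then compare the two groups with a Mann-Whitney U test.
      per_mouse = df.groupby(["mouse", "group"], as_index=False)["synchrony"].mean()
      ctrl = per_mouse.loc[per_mouse["group"] == "control", "synchrony"]
      dta = per_mouse.loc[per_mouse["group"] == "DTA", "synchrony"]
      print(stats.mannwhitneyu(ctrl, dta, alternative="two-sided"))
      ```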

      Major

      (1) The authors present compelling evidence that early loss of myelination disrupts synchronous firing prematurely. However, synchronous neuronal firing does not equate to circuit synchronization. It is improbable that myelination directly generates synchronous firing in Purkinje cells (PCs). For instance, Foran et al. (1992) identified that cerebellar myelination begins around postnatal day 6 (P6), while Good et al. (2017) recorded a developmental decline in PC activity correlation from P5-P11. To clarify myelin's role, we recommend detailed myelin imaging through light microscopy (MBP staining at higher magnification) to assess the extent of myelin removal accurately. Myelin sheaths, as shown by Snaidero et al. (2020), can persist after oligodendrocyte (OL) death, particularly following DTA induction (Pohl et al. 2011). Quantification of MBP+ area, rather than mean MBP intensity, is necessary to accurately measure myelin coverage.

      We appreciate the reviewer’s concern that residual sheaths might remain after oligodendrocyte ablation and have therefore re-examined myelin at higher spatial resolution. Two independent metrics were extracted: MBP⁺ area fraction in the white matter and MBP⁺ bundle thickness (new Figure 1J, K, and Fig. S1E). We confirm a robust, transient loss of myelin at P10 and P14, as shown by the reduction of MBP⁺ area and MBP⁺ bundle thickness. Both parameters recovered to control values by P21 and adulthood, indicating effective remyelination. These data demonstrate that, in our paradigm, oligodendrocyte ablation is accompanied by substantial sheath loss rather than the persistent myelin reported after acute toxin exposure. We have added these data to the Results (lines 266–271).

      The results reinforce the view that myelin removal and/or loss of trophic support during a narrow developmental window drives the long-term hyposynchrony and behavioral phenotypes we report. We have added a new subsection to the Discussion (lines 443–450) in which we propose a two-phase model. Phase I (P3–P8): high early synchrony is generated by non-myelin mechanisms (e.g., transient gap junctions, shared climbing-fiber input). Phase II (P8 onward): as oligodendrocytes proliferate and ensheath axons, they fine-tune conduction velocity and stabilize the mature, low-synchrony network state. We believe these additions fully address the reviewer’s concerns.

      (2) Surprisingly, the authors speculate about oligodendrocyte-mediated synaptic pruning without supportive data, shifting the focus away from the potential impact of myelination. Even if OLs perform synaptic pruning, OL depletion would likely maintain synchrony, yet the opposite was observed. Further characterisation of the model and the potential source of the effect is needed. 

      We thank the reviewer for pointing out that our original discussion of oligodendrocyte-mediated synapse elimination was not directly supported by data in the present manuscript. Because we are actively analyzing this question in a separate, follow-up study, we have deleted the speculative passage to keep the current paper focused on the demonstrated, myelination-dependent effects. We believe this change sharpens the mechanistic narrative and fully addresses the reviewer’s concern.

      (3) Improved characterization of the DTA model would add clarity. Although almost all infected cells are reported as OLs, quantification of infected OL-lineage cells (e.g., via Olig2 staining) would verify this. It remains possible that observed activity changes are driven by OL-independent demyelination effects. We suggest cross-staining with Iba1 and GFAP to rule out inflammation or gliosis. 

      We thank the reviewer for this important suggestion and have expanded our histological characterization accordingly. First, to verify that DTA expression is confined to mature oligodendrocytes, we co-stained cerebellar sections collected 7 days after AAV-hMAG-mCherry injection with Olig2 (pan-OL lineage) and ASPA (mature OL marker), as shown in Figure S1C-D. Quantitative analysis revealed that 100% of mCherry⁺ cells were Olig2⁺/ASPA⁺, whereas mCherry signal was virtually absent from Olig2⁺/ASPA⁻ immature oligodendrocytes. These data confirm that our DTA manipulation targets mature myelinating OLs rather than earlier lineage stages. We have added these data to the Results (lines 260–262).

      Second, to examine indirect effects mediated by other glia, we performed cross-staining with IBA1 (microglia) and S100β (Bergmann glia). Cell density and fluorescence intensity for each marker were indistinguishable between control and DTA groups at P14 and P21 (Figure S2A-H). Thus, neither inflammation nor astro-/microgliosis accompanies OL ablation. We have added these data to the Results (lines 275–286).

      Collectively, these results demonstrate that the observed desynchronization and behavioral phenotypes arise from specific loss of mature oligodendrocytes and their myelin, rather than from off-target viral expression or secondary glial responses.

      (4) The use of an independent model of myelin loss, such as an inducible Myrf knockout mouse with a MAG promoter, would be advantageous for assessing whether oligodendrocyte loss causes temporary or sustained impacts; an extended knockout model such as a Myrf cKO combined with MAG-Cre viral methods could be employed.

      We agree that distinguishing transient from enduring effects is critical. Importantly, our original submission already included data demonstrating a persistent deficit in PC population synchrony (Fig. 4, previous Fig. 3): (i) at P13–15—the early age after oligodendrocyte ablation—population synchrony is reduced, and (ii) the same deficit is still present in adults (P60–P70) despite full recovery of ASPA-positive cell density and MBP area and thickness (Fig. 2H-K, Fig. S1E, and Fig. 4). We also ablated oligodendrocytes after the third postnatal week. Despite a similar acute drop in ASPA-positive cells, neither population synchrony nor anxiety-like, motor, or social behaviors differed from littermate controls. Thus, extending myelin disruption beyond the developmental window does not exacerbate or prolong the phenotype, whereas a short perturbation within that window leaves a permanent timing defect. These findings strengthen our conclusion that it is the developmental oligodendrocyte/myelination program itself—rather than ongoing adult myelin production—that is essential for establishing stable network synchrony. We now highlight this point explicitly in the revised Discussion (lines 507–522).

      (5) For statistical robustness, the use of non-parametric tests (Mann-Whitney) necessitates reporting the median instead of the mean as the authors do. Furthermore, as repeated measurements within the same animal are not independent, the authors should ideally use nested ANOVA (or nested t-test comparing two conditions) to validate their findings (Aarts et al., Nat. Neuroscience 2014). Alternatively use one-way ANOVA with each animal as a biological replicate, although this means that the distribution in the data sets per animal is lost.

      We appreciate the reviewer’s guidance on appropriate statistical treatment. To address these concerns, we have re-analyzed all datasets that contained multiple measurements per animal (e.g., fields of view, lobules, or trials) using nested statistics with animal as the higher-order unit. Specifically, we applied a two-level nested ANOVA when more than two groups were compared and a nested t-test when two conditions were present. The re-analysis confirmed all original conclusions. Because the nested models yielded effect sizes comparable to those from the Mann–Whitney tests, we have retained the mean ± SEM for ease of comparison with prior literature, but we now also report all values for each mouse in Table 1. In cases where a single measurement per mouse was compared between two groups, we used the Mann–Whitney test and present the results in the graphs as median values.

      Minor Points 

      (1) In all figures, please specify the ages at which each procedure was conducted, as demonstrated in Figure 2A.

      All main and supplementary figures now specify the exact postnatal age.

      (2) Clarify the selection criteria for regions of interest (ROI) in calcium imaging, and provide representative ROIs.

      We appreciate the reviewer’s guidance. We have clarified that our ROI detection followed the protocol reported in our previous paper (Tanigawa et al., 2024, Communications Biology) (lines 177–178), and representative Purkinje-cell ROIs are now shown in Fig. 2B.

      (3) Include data on the proportion of climbing fiber or inferior olive neurons expressing Kir and the total number of neurons transfected, which would help contextualize the observed effects on PC synchronization and its broader implications for cerebellar circuit function.

      We appreciate the reviewer’s guidance. New Fig. 7C summarizes the efficiency of AAV-GFP and AAV-Kir2.1-GFP injections into the inferior olive. Across 4 mice, PCs with GFP-labeled CFs accounted for 19.3 ± 11.9% (mean ± S.D.) of PCs in the control group and 26.2 ± 11.8% in the Kir2.1 group. These numbers are reported in the Results (lines 373–375).

      (4) Higher magnification images in Figures 1 and S3 would improve visual clarity. 

      We have addressed the request for higher-magnification images in two ways. First, all panels in Figure S3 were placed on a larger canvas. Second, in Figure 1 we adjusted panel sizes to emphasize fine structure: panel 1C already represents an enlargement of the RFP-positive cells shown in 1B, and panels 1H and 1J now occupy a wider span so that every ASPA-positive cell body can be distinguished. Should the reviewer still require an even closer view, we have additional high-magnification images ready for upload.

      (5) Consider language editing to enhance overall clarity and readability.

      The entire manuscript was edited to improve flow, consistency, and readability.

      (6) Refine the discussion to align with the presented data.

      We have refined the discussion.

      Thank you once again for your constructive suggestions and comments. We believe these changes have improved the clarity and readability of our manuscript.

      Reviewer #2 (Public review):

      We appreciate Reviewer #2’s positive evaluation of our work and thank them for the constructive suggestions and comments. We followed these suggestions and comments and have conducted additional experiments. We have rewritten the manuscript and revised the figures accordingly. Our point-by-point responses to the comments are as follows.

      Summary:

      In this manuscript, the authors use genetic tools to ablate oligodendrocytes in the cerebellum during postnatal development. They show that the oligodendrocyte numbers return to normal post-weaning. Yet, the loss of oligodendrocytes during development seems to result in decreased synchrony of calcium transients in Purkinje neurons across the cerebellum. Further, there were deficits in social behaviors and motor coordination. Finally, they suppress activity in a subset of climbing fibers to show that it results in similar phenotypes in the calcium signaling and behavioral assays. They conclude that the behavioral deficits in the oligodendrocyte ablation experiments must result from loss of synchrony. 

      Strengths:

      Use of genetic tools to induce perturbations in a spatiotemporally specific manner.

      We appreciate this positive evaluation.

      Weaknesses: 

      The main weakness in this manuscript is the lack of a cohesive causal connection between the experimental manipulation performed and the phenotypes observed. Though they have taken great care to induce oligodendrocyte loss specifically in the cerebellum and at specific time windows, the subsequent experiments do not address specific questions regarding the effect of this manipulation.

      Calcium transients in Purkinje neurons are caused to a large extent by climbing fibers, but there is evidence for simple spikes to also underlie the dF/F signatures (Ramirez and Stell, Cell Reports, 2016).

      We thank the reviewer for drawing attention to the work of Ramirez & Stell (2016), which showed that simple-spike bursts can elicit Ca²⁺ rises, but only in the soma and proximal dendrites of adult Purkinje cells. In our study, regions of interest (ROIs) were restricted to the dendritic arbor, where SS-evoked signals are essentially undetectable (Ramirez & Stell, 2016), whereas climbing-fiber complex spikes generate large, all-or-none transients (Good et al., 2017). Accordingly, even if a rare SS-driven event reached threshold, it would likely fall outside our ROIs.

      In addition, we directly imaged CF population activity by expressing GCaMP7f in inferior-olive neurons. Correlation analysis of CF boutons revealed that DTA ablation lowers CF–CF synchrony at P14 (new Fig. 3A–D). Because CF synchrony is a principal driver of Purkinje-cell co-activation, this reduction provides a mechanistic link between oligodendrocyte loss and the hyposynchrony we observe among Purkinje cells. Consistent with this interpretation, electrophysiological recordings showed that parallel-fiber EPSCs and inhibitory synaptic inputs onto Purkinje cells were unchanged by DTA treatment (Fig. 3E-H), which makes it less likely that the reduced synchrony simply reflects changes in other excitatory or inhibitory synaptic drive.
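
      As a point of reference for how a population synchrony index of this kind can be computed from imaging data, the snippet below is a minimal, hypothetical sketch in Python, not our published analysis: it treats each ROI (CF bouton or PC dendrite) as one row of a ΔF/F matrix and uses the mean pairwise Pearson correlation as the synchrony measure. The actual pipeline may differ (e.g., event-based or time-binned measures).

      ```python
      # Minimal sketch of a pairwise-correlation synchrony index for ΔF/F traces.
      # `traces` is a hypothetical (n_rois, n_frames) array.
      import numpy as np

      def population_synchrony(traces: np.ndarray) -> float:
          """Mean pairwise Pearson correlation across ROIs."""
          corr = np.corrcoef(traces)                     # (n_rois, n_rois) matrix
          upper = corr[np.triu_indices_from(corr, k=1)]  # unique ROI pairs only
          return float(np.nanmean(upper))

      # Example with simulated data: shared events raise the synchrony index.
      rng = np.random.default_rng(0)
      shared = rng.poisson(0.05, size=1000)                   # common, CF-like events
      traces = shared + rng.normal(0, 0.5, size=(20, 1000))   # 20 ROIs plus noise
      print(population_synchrony(traces))
      ```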

      That said, SS-dependent somatic Ca²⁺ signals could still influence downstream plasticity and long-term cerebellar function. In future work we therefore plan to combine somatic imaging with stage-specific oligodendrocyte manipulations to test whether SS-evoked Ca²⁺ dynamics are themselves modulated by oligodendrocyte support. We have added these descriptions in the Results (lines 288–294) and Discussion (lines 423–434).

      Also, it is erroneous to categorize these calcium signals as signatures of "spontaneous activity" of Purkinje neurons as they can have dual origins.

      Thank you for pointing out the potential ambiguity. In the revised manuscript we have clarified how we use the term “spontaneous activity” in the context of our measurements (lines 289-290). Our calcium imaging was restricted to the dendritic arbor of Purkinje cells, where calcium transients are dominated by climbing-fiber (CF) inputs (Ramirez & Stell, 2016; Good et al., 2017). Thus, the synchrony values reported here primarily reflect CF-driven complex spikes rather than mixed signals of dual origin. We have revised the Results section accordingly (lines 289–293) to make this measurement-specific limitation explicit.

      Further, the effect of developmental oligodendrocyte ablation on the cerebellum has been previously reported by Mathis et al., Development, 2003. They report very severe effects such as the loss of molecular layer interneurons, stunted Purkinje neuron dendritic arbors, abnormal foliations, etc. In this context, it is hardly surprising that one would observe a reduction of synchrony in Purkinje neurons (perhaps due to loss of synaptic contacts, not only from CFs but also from granule cells).

      We appreciate the reviewer’s comparison to Mathis et al. (2003). Mathis et al. used MBP–HSV-TK transgenic mice in which systemic FIAU treatment eliminates oligodendrocytes. When ablation began at P1, they observed severe dysmorphology—loss of molecular-layer interneurons, Purkinje-cell (PC) dendritic stunting, and abnormal foliation. Crucially, however, the same study reports that starting the ablation later (FIAU from P6–P20) left cerebellar cytoarchitecture entirely normal.

      Our AAV MAG-DTA paradigm resembles this later window. Our temporally restricted DTA protocol produces the same ‘late-onset’ profile—robust yet reversible hypomyelination with no loss of Purkinje cells or interneurons and no reduction in dendritic length or synaptic input (new Fig. S1–S2, Fig. 3E-H). The enduring hyposynchrony we report therefore cannot be attributed to the dramatic anatomical defects seen after earlier-onset ablation, but instead reveals a specific requirement for early-postnatal myelin in stabilizing PC synchrony, with CF–CF synchrony especially affected.

      This clarification shows that we have carefully considered the Mathis model and that our findings not only replicate but also extend the earlier work. We have added these descriptions to the Results (lines 273–286).

      The last experiment with the expression of Kir2.1 in the inferior olive is hardly convincing.

      We appreciate the reviewer’s concern and have reinforced the causal link between Purkinje-cell synchrony and behavior. To test whether restoring PC synchrony is sufficient to rescue behavior, we introduced a red-shifted opsin (AAV-L7-rsChrimine) into PCs of DTA mice raised to adulthood. During testing we delivered 590-nm light pulses (10 ms, 1 Hz) to the vermis, driving brief, population-wide spiking (new Fig. 8). This periodic re-synchronization left anxiety measures unchanged (open-field center time remained low) but rescued both motor coordination (rotarod latency normalized to control levels) and sociability (time spent with a novel mouse restored). The dissociation implies that distinct behavioral domains differ in their sensitivity to PC timing precision and confirms that reduced synchrony—not cell loss or gross circuit damage (Fig. S1F, S2)—is the primary driver of the motor and social deficits. Together, the optogenetic rescue establishes a bidirectional, mechanistic link between PC synchrony and behavior, addressing the reviewer’s reservations about the original experiment. We have added these descriptions to the Results (lines 394–415).

      In summary, while the authors used a specific tool to probe the role of developmental oligodendrocytes in cerebellar physiology and function, they failed to answer specific questions regarding this role, which they could have done with more fine-grained experimental analysis.

      Thank you once again for your constructive suggestions and comments. We believe these changes have improved the clarity and readability of our manuscript.

      Recommendations for the authors:

      Reviewer #2 (Recommendations for the authors):

      (1) Show that ODC loss is specific to the cerebellum.

      We thank the reviewer for requesting additional evidence. To verify that oligodendrocyte ablation was confined to the cerebellum, we injected an AAV carrying mCherry under the human MAG promoter (AAV-hMAG-mCherry) into the cerebellum and screened the whole brain one week later. As shown in the new Figure 1E–G, mCherry-positive cells were present throughout the injected cerebellar cortex (Fig. 1E), but no fluorescent cells were detected in extracerebellar regions, including the cerebral cortex, medulla, pons, and midbrain. These data demonstrate that our viral approach is specific to the cerebellum, ruling out off-target demyelination elsewhere in the CNS as a contributor to the behavioral and synchrony phenotypes. We have added these descriptions to the Results (lines 262–264).

      (2) Characterize the gross morphology of the cerebellum at different developmental stages. Are major cell types all present? Major pathways preserved? 

      We thank the reviewer for requesting additional evidence. To ensure that the developmental loss of oligodendrocytes did not globally disturb cerebellar architecture, we performed a comprehensive histological and electrophysiological survey during development. New data are presented (new Fig. S1–S2, Fig. 3E-H).

      (1) Overall morphology. Low-magnification parvalbumin counterstaining revealed similar cerebellar area in DTA versus control mice at every age (Fig. S1F, G).

      (2) Major neuronal classes. Quantification of parvalbumin-positive Purkinje cells and interneurons showed no differences in density between control and DTA mice (Fig. S2E, H, M, N, P). Purkinje dendritic arbors also did not differ between control and DTA mice (Fig. S2G, O).

      (3) Excitatory and inhibitory synaptic inputs. Miniature IPSCs and parallel-fiber EPSCs onto Purkinje cells were quantified; neither differed between groups (Fig. 3E-G).

      (4) Glial populations. IBA1-positive microglia and S100β-positive astrocytes exhibited normal density and marker intensity (Fig. S2).

      Taken together, these analyses show that all major cell types are present at normal density, synaptic inputs from excitatory and inhibitory neurons are preserved, and gross cerebellar morphology is intact after DTA-mediated oligodendrocyte ablation.

      (3) Recording of PNs to see whether the lack of synchrony is due to CFs or simple spikes.

      We thank the reviewer for drawing attention to the work of Ramirez & Stell (2016), which showed that simple-spike bursts can elicit Ca²⁺ rises, but only in the soma and proximal dendrites of adult Purkinje cells. In our study, regions of interest (ROIs) were restricted to the dendritic arbor, where SS-evoked signals are essentially undetectable (Ramirez & Stell, 2016), whereas climbing-fiber complex spikes generate large, all-or-none transients (Good et al., 2017). Accordingly, even if a rare SS-driven event reached threshold, it would likely fall outside our ROIs.

      In addition, we directly imaged CF population activity by expressing GCaMP7f in inferior-olive neurons. Correlation analysis of CF boutons revealed that DTA ablation lowers CF–CF synchrony at P14 (new Fig. 3A–D). Because CF synchrony is a principal driver of Purkinje-cell co-activation, this reduction provides a mechanistic link between oligodendrocyte loss and the hyposynchrony we observe among Purkinje cells. Consistent with this interpretation, electrophysiological recordings showed that parallel-fiber EPSCs and inhibitory synaptic inputs onto Purkinje cells were unchanged by DTA treatment (Fig. 3E-H), which makes it less likely that the reduced synchrony simply reflects changes in other excitatory or inhibitory synaptic drive.

      That said, SS-dependent somatic Ca²⁺ signals could still influence downstream plasticity and long-term cerebellar function. In future work we therefore plan to combine somatic imaging with stage-specific oligodendrocyte manipulations to test whether SS-evoked Ca²⁺ dynamics are themselves modulated by oligodendrocyte support. We have added these descriptions in the Results (lines 301–312) and Discussion (lines 423–434).

      (4) Is CF synapse elimination altered? Test using evoked EPSCs or staining methods.

      We agree that directly testing whether oligodendrocyte loss disturbs climbing-fiber synapse elimination would provide a full mechanistic picture. We are already quantifying climbing fiber terminal number with vGluT2 immunostaining and recording evoked CF-EPSCs in the same DTA model; these data, together with an analysis of how population synchrony is involved in synapse elimination, will form the basis of a separate manuscript now in preparation. To keep the present paper focused on the phenomena we have rigorously documented—transient oligodendrocyte loss and the resulting long-lasting hyposynchrony and abnormal behaviors—we have removed the speculative sentence on oligodendrocyte-mediated synapse elimination. We believe this revision meets the reviewer’s request without over-extending the current dataset.

      Thank you once again for your constructive suggestions and comments. We believe these changes have improved the clarity and readability of our manuscript.

    1. This binary informs almost all scholarly writing on games and online play in the context of bodies

      Source? Notice we can't just focus on all the intersectionalities during an analysis. I for sure would love to only recommend Open Source games made by minoritised people through a local research citizen science exchange, in paid working labour conditions, without stolen content, no washing marketing campaigns, with accessibility features, with a proven social impact, and made using devices without rare exploitative materials... but this ain't possible.

      We pick our fights, for me it's biases, because they influence most of our daily acts, but activism has many other sides. I just don't think jumping into activism without awareness of bias is a safe avenue, as it can lead to radical violence as a means of change.

    2. radical separation of the body and the mind. This mythical separation, beginning from a Cartesian framework

      Yes, but don't synonymise Plato's world of ideas to the Web's Internet of things. What I mean by this is that both are erroneous dichotomies, but they are different dichotomies. Believing in free will and a soul doesn't mean you separate the influences of the virtual-online, and the day-to-day physical space. They may be both real, but this conceptualisation can be a useful communicative tool to put into perspective that before globalisation you couldn't simply receive an email in 1 second from someone 10k miles away.

    3. Logged in as “Dead_in_Iraq,” DeLappe types the names of soldiers killed in Iraq, and the date of their death, into the game’s text messaging system, such that the information scrolls across the screen for all users to see. DeLappe’s goal is simple: He plans to memorialize the name of every service member killed in Iraq.

      I hope it's not just American soldiers... and wydm just soldiers? If This War of Mine showed us something, it's that soldiers are not the only victims of war.


    1. only death settle the score

      Lamar compares the civil war between two African ethnic groups (Zulu and Xhosa) to the street fighting between the Crips and the Bloods. Indeed, street fighting between local gangs is read by the singer as a form of civil war since it involves people that live in the same area. A bitter ending awaits those who kill each other ("only death settle(s) the score").

    2. Remember this, every race start from the block

      Lamar refuses to accept that Blacks are "doomed from the start": making use of a sport metaphor, he speaks in terms of a "race", in which everyone begins from the same starting point.

    3. another slave in my head

      Double consciousness is a key concept to interpret this line: Lamar feels like a prisoner in his own head, enchained by his own thoughts. This occurs because he has internalized a way of perceiving and judging reality which pertains to the oppressor (in this case, whites).

    4. it's evident that I'm irrelevant to society / That's what you're telling me

      These lines pose a critique towards societal impositions: Lamar feels irrelevant and deprived of any importance in American society. However, this feeling entirely depends on what white people have been telling him. Once again, double consciousness dominates the self.

    5. You're fuckin' evil

      As you may have gathered by now, Lamar's song has no filters: although he acknowledges the hierarchy that forces his community to remain "at the bottom of mankind", he does not feel inferior. On the contrary, he is proud of his identity and his African ancestry, so much so that he does not hesitate to judge the oppressors.

    6. Came from the bottom of mankind

      Lamar's viewpoint is crystal clear: not only is there a social hierarchy in America, but he also identifies Black people as the ones "at the bottom". There is no possible equality in this scenario.

    7. you made me

      This sentence functions as an explanation of the previous one: Lamar claims that he may be experiencing life in a schizophrenic way but blames whites (the ideal interlocutors in this scenario) for it.

    8. There’s diamonds in the sidewalk, the gutters lined in song / Dear I hear that beer flows through the faucets all night long

      Again, these sentences make a parallel with "Gold comes rushing out the rivers straight into your hands": they all evoke the illusory hopes and dreams of immigrants entering a new land and abandoning their own.

    1. eLife Assessment

      By investigating spine nanostructure and dynamics across multiple genetic mouse models for neurodevelopmental disorders, this important study has the potential to uncover convergent or divergent synaptic phenotypes that may be specifically associated with autism versus schizophrenia risk. While the imaging and breadth are impressive, there are potential methodological concerns, especially around statistical analyses, which render the evidence incomplete and should be addressed. The purely in vitro nature of the study also slightly limits the generalisability of the findings.

    2. Reviewer #1 (Public review):

      Summary:

      Kashiwagi et al. undertook a population analysis of dendritic spine nanostructure applied to the objective grouping of 8 mouse models of neuropsychiatric disorders. They report that spine morphology in cultured hippocampal neurons shows a higher similarity among schizophrenia mouse models (compared with autism spectrum disorder (ASD) mouse models), and identify an effect of Ecrg4 (encoding small secretory peptides) on spine dynamics and shape in these models.

      Strengths:

      The study developed a method for objectively comparing spine properties in primary hippocampal neuron cultures from 8 mouse models of psychiatric disorders at the population level using high-resolution structured illumination microscopy (SIM) imaging. This novel technique identified two distinct groups of mouse models according to the population-level spine properties: those with ASD-related gene mutations and those with schizophrenia-related gene mutations. Functional studies, including gene knockdown and overexpression experiments, identified an effect of Ecrg4 on the spine phenotype of the schizophrenia model mice.

      Weaknesses:

      The main weakness is that the study is wholly in vitro, using cultured hippocampal neurons. The authors present this as an advantage, however, arguing that spine morphology as measured in a reduced culture system can demonstrate direct effects of gene mutations on neuronal phenotypes in the absence of indirect influences from nonneuronal cells or specific environments.

      Another weakness is that CaMKIIαK42R/K42R mutant mice are presented as a schizophrenia model, the authors justifying this by saying that "CaMKII-related signaling pathway disruption has been implicated in the working memory deficits found in schizophrenia patients". Since mutations in CAMK2A cause autosomal dominant intellectual developmental disorder-53 (OMIM 617798) and autosomal recessive intellectual developmental disorder-63 (OMIM 618095), and mice carrying the CAMK2A E183V mutation exhibit ASD-related synaptic and behavioral phenotypes (PMID: 28130356), I think it's stretching credibility to refer to the CaMKIIαK42R/K42R mice as a schizophrenia model.

      Although the manuscript is largely well written, there are some instances of ambiguous/unspecific language. This extends to the title (Decoding Spine Nanostructure in Mental Disorders Reveals a Schizophrenia-Linked Role for Ecrg4), which gives no indication that the work was in vitro on cultured neurons derived from mouse models.

    3. Reviewer #2 (Public review):

      Okabe and colleagues build on a super-resolution-based technique that they have previously developed in cultured hippocampal neurons, improving the pipeline and using it to analyze spine nanostructure differences across 8 different mouse lines with mutations in autism or schizophrenia (Sz) risk genes/pathways. It is a worthy goal to try to use multiple models to examine potential convergent (or not) phenotypes, and the authors have made a good selection of models. They identify some key differences between the autism versus the Sz risk gene models, primarily that dendritic spines are smaller in Sz models and (mostly) larger in autism risk gene models. They then focus on three models (2 Sz - 22q11.2 deletion, Setd1a; 1 ASD - Nlgn3) for time-lapse imaging of spine dynamics, and together with computational modelling provide a mechanistic rationale for the smaller spines in Sz risk models. Bulk RNA sequencing of all 8 model cultures identifies several differentially expressed genes, which they go on to test in cultures, finding that Ecrg4 is upregulated in several Sz models and its misexpression recapitulates spine dynamics changes seen in the Sz mutants, while knockdown rescues spine dynamics changes in the Sz mutants. Overall, these have the potential to be very interesting findings and useful for the field. However, I do have a number of major concerns.

      (1) The main finding of spine nanostructure changes is done by carrying out a PCA on various structural parameters, creating spine density plots across PC1 and PC2, and then subtracting the WT density plot from the mutant. Then, spines in the areas with obvious differences only are analyzed, from which they derive the finding that, for example, spine sizes are smaller. However, this seems a circular approach. It is like first identifying where there might be a difference in the data, then only analyzing that part of the data. I welcome input from a statistician, but to me, this is at best unconventional and potentially misleading. I assume the overall means are not different (although this should be included), but could they look at the distribution of sizes and see if these are shifted?

      (2) Despite extracting 64 parameters describing spine structure, only 5 of these seemed to be used for the PCA. It should be possible to use all parameters and show the same results. More information on PC1 and PC2 would be helpful, given that the rest of the paper is based on these - what features are they related to? These specific features could then be analyzed in the full dataset, without doing the cherry picking above. It would also be helpful to demonstrate whether PC1 and 2 differ across groups - for example, the authors could break their WT data into 2 subsets and repeat the analysis.

      (3) Throughout the paper, the 'n' used for statistical analysis is often spine, which is not appropriate. At a minimum, cell should be used, but ideally a nested mixed model, which would take into account factors like cell, culture, and animal, would be preferable. Also, all of these factors should be listed, with sufficient independent cultures.

      (4) The authors should confirm that all mutants are also on the C57BL/6J background, and clarify whether control cultures are from littermates (this would be important). Also, are control versus mutant cultures done simultaneously? There can be significant batch effects with cultures.

      (5) The spine analysis uses cultures from 18-22 DIV - this is quite a large range. It would be worth checking whether age is a confounder or correlated with any parameters / principal components.

      (6) The computational modelling is interesting, but again, I am concerned about some circularity. Parameter optimization was used to identify the best fit model that replicated the spine turnover rates, so it is somewhat circular to say that this matched the observations when one of these is the turnover rate. It is more convincing for spine density and size, but why not go back and test whether parameter differences are actually seen - for example, it would be possible to extract the probability of nascent spine loss, etc. More compelling would be to repeat the experiments and see if the model still fits the data. In the interpretation (line 314-318) it is stated that '... reduced spine maturation rate can account for the three key properties of schizophrenia-related spines...', which is interesting if true, but it has just been stated that the probability of spine destabilization is also higher in mutants (line 303) - the authors should test whether if the latter is set to be the same as controls whether all the findings are replicated.

      (7) No validation for overexpression or knockdown is shown, although it is mentioned in the methods - please include. Also, for the knockdown, a scrambled shRNA control would be preferable.

      (8) The finding regarding Ecrg4 is interesting, but showing that some Ecrg4 is expressed at boutons and spines and some in DCVs is not enough evidence to suggest that it is actively involved in the regulation of synapse formation and maturation (line 356).

      (9) The same caveats that apply to the analysis also apply to the Ecrg4 rescue. In addition, while for 22q the control shRNA mutant vs WT looks vaguely like Figure 2, Setd1a looks completely different. And if rescued, surely shRNA in the mutant should now resemble control in WT, so there shouldn't be big differences, but in fact, there are just as many differences as comparing mutant vs wildtype? Plus, for spine features, they only compare mutant rescue with mutant control, but this is not ideal - something more like a 2-way ANOVA is really needed. Maybe input from a statistician might be useful here?

      (10) Although this is a study entirely focused on spine changes in mouse models for Sz, there is no discussion (or citation) of the various studies that have examined this in the literature. For example, for Setd1a, smaller spines or reduced spine densities have been described in various papers (Mukai et al, Neuron 2019; Chen et al, Sci Adv 2022; Nagahama et al, Cell Rep 2020).

      (11) There is a conceptual problem with the models if being used to differentiate autism risk from Sz risk genes. It is difficult to find good mouse models for Sz, so the choice of 22q11.2del and Setd1a haploinsufficiency is completely reasonable. However, these are both syndromic. 22qdel syndrome involves multiple issues, including hearing loss, delayed development, and learning disabilities, and is associated with autism (20% have autism, as compared to 25% with Sz). Similarly, Setd1a is also strongly associated with autism as well as Sz (and also involves global developmental delay and intellectual disability). While I think this is still the best we can do, and it is reasonable to say that these models show biased risk for these developmental disorders, it definitely can't be used as an explanation for the higher variability seen in the autism risk models.

      (12) I am not convinced that using dissociated cultures is 'more likely to reflect the direct impact of schizophrenia-related gene mutations on synaptic properties' - first, cultures do have non-neuronal cells, although here glial proliferation was arrested at 2 days, glia will be present with the protocol used (or if not, this needs demonstrating). Second, activity levels will affect spine size, and activity patterns are very abnormal in dissociated cultures, so it is very possible that spine changes may not translate into in vivo scenarios. Overall, it is a weakness that the dissociated culture system has been used, which is not to say that it is not useful, and from a technical and practical perspective, there are good justifications.

      (13) As a minor comment, the spine time-lapse imaging is a strength of the paper. I wonder about the interpretation of Figure 5. For example, the results in Figure 5G and J look as if they may be more that the spines grow to a smaller size and start from a smaller size, rather than necessarily the rate of growth.

    4. Author response:

      Reviewer #1

      (1) The main weakness is that the study is wholly in vitro, using cultured hippocampal neurons.

      We appreciate this reviewer's concern about the limitation of cultured hippocampal neurons in extracting disease-related spine phenotypes. While we fully recognize this limitation, we consider that this in vitro system has several advantages that contribute to translational research on mental disorders.

      First, our culture system has been shown to support the development of spine morphology similar to that of the hippocampal CA1 excitatory synapse in vivo. High-resolution imaging techniques confirmed that in vitro spine structure is well preserved relative to in vivo preparations (Kashiwagi et al., Nature Communications, 2019). The present study used the same culture system and SIM imaging. Therefore, the differences we detected in samples derived from disease models are likely to reflect impairment of the molecular mechanisms underlying native structural development in vivo.

      Second, super-resolution imaging of thousands of spines in tissue preparations under precisely controlled conditions is not practically feasible with currently available techniques. The advantage of our imaging and analytical pipeline is its reproducibility, which enabled us to compare the spine population data from eight different mouse models without normalization.

      Third, a reduced culture system can demonstrate the direct effects of gene mutations on synapse phenotypes, independent of environmental influences. This property is highly advantageous for screening chemical compounds that rescue spine phenotypes. Neuronal firing patterns and receptor functions can also be easily controlled in a culture system. The difference in spine structure between ASD and schizophrenia mouse models is valuable information to establish a drug screening system.

      Fourth, establishing an in vitro system for evaluating synapse phenotypes could reduce the need for animal experiments. Researchers should be aware of the 3Rs principles. In the future, combined with differentiation techniques for human iPS cells, our in vitro approach will enable the evaluation of disease-related spine phenotypes without the need for animal experiments. The effort to establish a reliable culture system should not be eliminated.

      (2) Another weakness is that CaMKIIαK42R/K42R mutant mice are presented as a schizophrenia model.

      We agree with this reviewer that CAMK2A mutations in humans are linked to multiple mental disorders, including developmental disorders, ASD, and schizophrenia. The association of gene mutations with categories of mental disorders is not straightforward, as the symptoms of these disorders overlap with one another. For the CaMKIIα K42R/K42R mutant, we considered the following points in its characterization as a model of mental disorder. Analysis of CaMKIIα +/- mice in Dr. Tsuyoshi Miyakawa's lab has provided evidence linking reduced CaMKIIα to schizophrenia-related phenotypes (Yamasaki et al., Mol Brain 2008; Frankland et al., Mol Brain Editorial 2008). It is also known that the CaMKIIα R8H mutation in the kinase domain is linked to schizophrenia (Brown et al., 2021). Both the CaMKIIα R8H and CaMKIIα K42R mutations are located in the N-terminal domain and eliminate kinase activity. On the other hand, the representative CaMKIIα E183V mutation identified in ASD patients exhibits unique characteristics, including reduced kinase activity, decreased protein stability and expression levels, and disrupted interactions with ASD-associated proteins such as Shank3 (Stephenson et al., 2017). Importantly, the reduced dendritic spine density in neurons expressing CaMKIIα E183V is opposite to the phenotype of the CaMKIIα K42R/K42R mutant, which shows increased spine density (Koeberle et al. 2017).

      Different CAMK2A mutations likely cause distinct phenotypes observed in the broad spectrum of mental disorders. In the revised manuscript, we will include a discussion of the relevant literature to categorize this mouse model appropriately.

      References related to this discussion.

      (1) Yamasaki et al., Mol Brain. 2008 DOI: 10.1186/1756-6606-1-6

      (2) Frankland et al. Mol Brain. 2008 DOI: 10.1186/1756-6606-1-5

      (3) Stephenson et al., J Neurosci. 2017 DOI: 10.1523/JNEUROSCI.2068-16.2017

      (4) Koeberle et al. Sci Rep. 2017 DOI: 10.1038/s41598-017-13728-y

      (5) Brown et al., iScience. 2021 DOI: 10.1016/j.isci.2021.103184

      Reviewer #2

      We recognize the reviewer's comments as important for improving our manuscript. We outline our general approach to addressing major concerns. Detailed responses to each point, along with additional data, will be provided in a formal revised manuscript.

      (1) Demonstrating the robustness of statistical analyses

      We appreciate this reviewer's concern about our strategies for the quantitative analysis of the large spine population. For the PCA analysis (Point 2), our preliminary results indicated that including all parameters, rather than the selected five, did not substantially change the relative placement of spines with specific morphologies in the feature space defined by the principal components. This point will be discussed in the revised manuscript. The potential problem of selecting a particular region within the feature space for spine-shape analysis (Point 1) can be addressed with simulation-based approaches such as bootstrap or permutation tests; these analyses will be included in the revised manuscript. The choice of sample unit in statistical analyses should align with the purpose of each analysis (Point 3). When analyzing the distribution of samples in the feature space, it is necessary to use spine numbers for statistical assessment. We will recheck the statistical methods and apply the appropriate method for each analysis. The spine population data in Figures 2 and 8 cannot be directly compared, as the spine visualization methods differ (Figure 2 with membrane DiI labeling; Figure 8 with cytoplasmic GFP labeling) (Point 9); spine populations of the same size are thus inevitably plotted in different feature spaces. This point will be discussed more clearly in the revised manuscript.
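
      As an illustration of the kind of simulation-based check mentioned above, the sketch below (hypothetical, not our actual pipeline) applies a label-permutation test to a spine-level summary statistic, here the difference in mean PC1 scores between wild-type and mutant spine populations; the same scheme can be applied to occupancy of any pre-defined region of the feature space, and permutation could also be performed at the cell level.

      ```python
      # Minimal permutation-test sketch for comparing spine populations in a
      # PCA feature space. `pc1_wt` and `pc1_mut` are hypothetical arrays of
      # per-spine PC1 scores; a real analysis could permute at the cell level.
      import numpy as np

      def permutation_test(a: np.ndarray, b: np.ndarray, n_perm: int = 10000,
                           seed: int = 0) -> float:
          """Two-sided permutation p-value for the difference in means."""
          rng = np.random.default_rng(seed)
          observed = a.mean() - b.mean()
          pooled = np.concatenate([a, b])
          count = 0
          for _ in range(n_perm):
              perm = rng.permutation(pooled)
              diff = perm[: a.size].mean() - perm[a.size:].mean()
              if abs(diff) >= abs(observed):
                  count += 1
          return (count + 1) / (n_perm + 1)

      # Example with simulated PC1 scores (mutant shifted slightly downward).
      rng = np.random.default_rng(1)
      pc1_wt = rng.normal(0.0, 1.0, 800)
      pc1_mut = rng.normal(-0.2, 1.0, 800)
      print(permutation_test(pc1_wt, pc1_mut))
      ```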

      (2) Clarification of experimental conditions and data reliability

      Per this reviewer's suggestion, we will provide more information on the genetic background of mice and the differences in spine structure from DIV 18-22 (Points 4 and 5). We will also provide additional validation data for the functional analyses using knockdown and overexpression methods, for which we already have preliminary data (Point 7). Concerns about the interpretation of data obtained from in vitro culture (Point 12), raised by this reviewer, are also noted by reviewer #1. As explained in the response to reviewer #1, we intentionally selected an in vitro culture system to analyze multiple samples derived from mouse models of mental disorders for several reasons. Nevertheless, we will revise the discussion and incorporate the points this reviewer raised regarding the disadvantages of in vitro systems.

      (3) Validation of biological mechanisms and interpretation

      In the computational modeling (Point 6), we started from the spine-turnover data (excluding the data on spine volume increases/decreases), fitted the model to these data, and found that the best-fit model reproduced three features of the schizophrenia mouse models: fast spine turnover, lower spine density, and smaller transient spines. As the reviewer noted, information about spine turnover is already present in the input data. However, the other two properties are generated independently of the input data, indicating the value of this model. We plan to add additional confirmatory analyses to this model in the revised manuscript.

      In response to Point 8, we will provide supporting data on the functional role of Ecgr4 in synapse regulation. We will also refine our discussion on the ASD and Schizophrenia phenotypes based on the suggested literature (Points 10 and 11). Quantification of the initial growth of spines is technically demanding, as it requires higher imaging frequency and longer time-lapse recordings to capture rare events. It is difficult to conclude which of the two possibilities, slow spine growth or initial size differences, is correct, based on our available data. This point will be discussed in the revised manuscript (Point 13).

    1. eLife Assessment

      This useful study provides a systematic and solid comparison of sex-biased enteroendocrine peptide expression, including AstC and Tk, to show that these peptides contribute to female-biased fat storage. The major research question of this study is based on the authors' previous papers, and therefore, the presented results are incremental. This study serves as a foundation for future investigation of regulatory mechanisms for the sex-biased fat content by AstC and Tk.

    2. Reviewer #1 (Public review):

      Summary of goals:

      The authors' stated goal (line 226) was to compare gene expression levels for gut hormones between males and females. As female flies contain more fat than males, they also sought to identify hormones that control this sex difference. Finally, they attempted to place their findings in the broader context of what is already known about established underlying mechanisms.

      Strengths:

      (1) The core research question of this work is interesting. The authors provide a reasonable hypothesis (neuro/entero-peptides may be involved) and well-designed experiments to address it.

      (2) Some of the data are compelling, especially positive results that clearly implicate enteropeptides in sex-biased fat contents (Figures 1 and 3).

      Weaknesses:

      (1) The greatest weakness of this work is that it falls short of providing a clear mechanism for the regulation of sex-biased fat content by AstC and Tk. By and large, feminization of neurons or enteroendocrine cells with UAS-traF did not increase fat in males (Figure 2). The authors mention that ecdysone, juvenile hormone or Sex-lethal may instead play a role (lines 258-270), but this is speculative, making this study incomplete.

      (2) Related to the above point, the cellular mechanisms by which AstC and Tk regulate fat content in males and females are only partially characterized. For example, knockdown of TkR99D in insulin-producing neurons (Figure 4E) but not pan-neuronally (Figure 4B) increases fat in males, but Tk itself only shows a tendency (Figure 3B). In females, the situation is even less clear: again, Tk only shows a tendency (Figure 3B), and pan-neuronal, but not IPC-specific knockdown of TkR99D decreases fat.

      (3) The text sometimes misrepresents or contradicts the Results shown in the figures. UAS-traF expression in neurons or enteroendocrine cells did sometimes alter fat contents (Figure 2H, S), but the authors report that sex differences were unaffected (lines 164-166). On the other hand, although knockdown of Tk in enteroendocrine cells caused no significant effect (Figure 3B), the authors report this as a trend towards reduction (lines 182-183). This biased representation raises concerns about the interpretation of the data and the authors' conclusions.

      (4) The authors find that in males, neuropeptide expression in the head is higher (Figure 1F-J). This may also play an important role in maintaining lower levels of fat in males, but this finding is not explored in the manuscript.

      Appraisal of goal achievement & conclusions:

      The authors were successful in identifying hormones that show sex bias in their expression and also control the male vs. female difference in fat content. However, elucidation of the relevant cellular pathways is incomplete. Additionally, some of their conclusions are not supported by the data (see Weaknesses, point 3).

      Impact:

      It is difficult to evaluate the impact of this study. This is in great part because the authors do not attempt to systematically place their findings about AstC/Tk in the broader context of their previous studies, which investigated the same phenomenon (Wat et al., 2021, eLife and Biswas et al., 2025, Cell Reports). As the underlying mechanisms are complex and likely redundant, it is necessary to generate a visual model to explain the pathways which regulate fat content in males and females.

    3. Reviewer #2 (Public review):

      Summary:

      This manuscript by Biswas and Rideout investigates sex differences in the expression and function of hormones derived from Drosophila enteroendocrine cells (EE). The authors report that while whole-body and head expression of several EE hormones (AstA, AstC, Tk, NPF, Dh31) is male-biased, gut-specific expression of AstC, Tk, and NPF is female-biased. Intriguingly, this sex-specific effect is not dependent on Tra - a surprising and important result. The authors then used an RNAi-based approach to demonstrate that gut-derived AstC and Tk promote fat storage specifically in females. Similar effects are observed when their receptors are knocked down in neurons. In addition, the authors were able to demonstrate that Tk promotes female body fat via the insulin-producing cells. Together, these findings suggest that EE cell-derived hormones contribute to sex-specific fat storage regulation.

      Strengths:

      Overall, I find the paper quite interesting. While the findings are brief, they reveal novel aspects of the sex-specific lipid storage program that I believe are important. As noted by the authors in the discussion, there are many open questions, including how these neuronal effects translate into systemic sex-specific regulation of lipid storage. Regardless, I find the results to be convincing - this paper will serve as the launching point of many future studies.

      Weaknesses:

      My main criticisms are focused on two points:

      (1) If the sex-specific differences are not eliminated by tra overexpression, what else might be responsible? As the authors note, the differences in 20E titers might be responsible. I would encourage the authors to simply feed adult flies with food containing 20E and determine if this alters sex-specific EE hormone expression.

      (2) I'm quite intrigued by the discovery that Tra does not eliminate the sex-specific differences. There are quite a few recent studies demonstrating that fruitless influences sex-specific neuronal function - here too, I would encourage the authors to examine whether this aspect of the sex-determination pathway is involved in the lipid accumulation phenotype.

    1. When using boolean expressions, you should remember that as far as the computer is concerned, there is nothing special about boolean values

      what does this mean?
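
      A minimal Java sketch of what the textbook likely means (the class and variable names are made up for illustration): a boolean expression already produces a value like any other, so it can be stored in a variable, returned from a method, or used directly as a condition, and comparisons such as `passed == true` are redundant.

      ```java
      public class BooleanDemo {

          public static void main(String[] args) {
              int score = 72;

              // A comparison is itself a boolean value; it can be stored like any other value.
              boolean passed = score >= 60;

              // No need to write "if (passed == true)"; the variable already is the condition.
              if (passed) {
                  System.out.println("Passed");
              }

              // Boolean values can be combined and printed like any other value.
              System.out.println(isEven(score) && passed);   // prints "true"
          }

          // Returns the value of the boolean expression directly; no if/else is needed.
          static boolean isEven(int n) {
              return n % 2 == 0;
          }
      }
      ```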

    2. at the beginning of the format specifier, before the field width; for example: %,12.3f. If you want the output to be left-justified instead of right-justified, add a minus sign to the beginning of the format specifier: for example, %-20s.

      Above, they included the % when they defined the format specifier. But here, they did not add the - to the "beginning of the format specifier" (i.e., before the %).
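
      A small Java sketch of both specifiers from the quoted passage (the values are made up). In Java's printf/format syntax the flags (the comma and the minus sign) come immediately after the %, i.e. at the start of the specifier's options but never before the % itself, which seems to be the source of the confusion above.

      ```java
      public class FormatDemo {

          public static void main(String[] args) {
              double amount = 12345.6789;
              String name = "Widget";

              // %,12.3f : grouping commas, minimum field width 12, 3 digits after the decimal point.
              // The number is right-justified within the 12-character field.
              System.out.printf("[%,12.3f]%n", amount);   // [  12,345.679]

              // %-20s : the minus flag left-justifies the string in a 20-character field.
              System.out.printf("[%-20s]%n", name);       // [Widget              ]

              // Without the minus flag, the same string is right-justified.
              System.out.printf("[%20s]%n", name);        // [              Widget]
          }
      }
      ```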

    3. By convention, enum values are given names that are made up of upper case letters, but that is a style guideline and not a syntax rule. An enum value is a constant; that is, it represents a fixed value that cannot be changed. The possible values of an enum type are usually referred to as enum constants.

      Note that these classes are special because: 1. instead of storing variables, they store constants 2. there are no static subroutines 3. the constants are stored into variables of type Season. Therefore, the static constants behave like objects. We can conclude that classes are not limited to storing variables; they can also store constants. The definition of an enum must integrate the constants with subroutines to create objects.
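
      A minimal Java sketch of the Season example discussed in the quoted passage (the main method and variable names are my own illustration):

      ```java
      public class EnumDemo {

          // An enum type: its only possible values are the four named constants below.
          // Upper-case names are a style convention, not a syntax rule.
          enum Season { SPRING, SUMMER, FALL, WINTER }

          public static void main(String[] args) {
              // A variable of type Season can only hold one of the enum constants (or null).
              Season vacation = Season.SUMMER;

              // The constants behave like objects: they have methods such as name() and ordinal().
              System.out.println(vacation.name());     // SUMMER
              System.out.println(vacation.ordinal());  // 1 (its position in the declaration)

              // Enum constants can be compared directly with ==.
              if (vacation == Season.SUMMER) {
                  System.out.println("Time for a break.");
              }
          }
      }
      ```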

    Annotators

    1. Using the Compose command line tool you can create and start one or more containers for each dependency with a single command (docker compose up).

      Using docker compose is essentially starting multiple containers at once from a YAML file, but beyond that it also solves the dependency problem (container startup order).

    1. It would not be inaccurate to call Moldburg’s variety of NRx the most vivid example of “controlled opposition” ever seen on the alt right, certainly in effect and likely in intent.

      Wow this comment read 10 years later really has me thinking.

      What did "Moldbug was Controlled Opposition" mean in 2015

      These days we just call him by his real name "Curtis Yarvin" and he had a dating advice column called Uncle Yarv

    2. but have noticed his occasional tongue in cheek comments on Twitter regarding Jews.

      If they only knew how bad Twitter would become, dam 2015 is spiritually so far away. That was before the Culture Wars took hold. Twitter was "based" in 2015. Wow

    3. But neoreaction conflicts with White Nationalism in a way similar to other race realists (see American Renaissance) in that neoreactionaries refuse to give the Jewish question serious consideration.

      Well, it's been a decade since this article came out; the JQ is quite popular via Nick Fuentes and on Elon's X (formerly Twitter). I wonder why this author mentions it here void of context.

    4. To describe this book as a guide is a bit of a misnomer. While it is fairly easy to navigate because of its brevity, it would be more useful as a guide if the chapters were broken down into subsections with bold headings and if an index were provided.

      This is something mememaps.net ought to be able to help with

    5. A spiritual critique of democracy is completely lacking here.

      Hmmm what would this consist of?

      The Leviathan and its Enemies is a pretty damn scary book, spiritually.

      The Oppression under a stupid King and defective Court would also be similarly depressing.

      I think the core of the issue is how these Democratic-Monarchy cycles (cyclones) occur in history and how we are now aware of them.

    6. In a democracy this doesn’t happen because people with greater capital have more influence over whether or not policies such as free trade and mass immigration are implemented, which may be detrimental to the nation but are good for those who prefer profit over cultural values.

      The way the "Capitalist Caste" are easily able to subvert the democracy.

      If only Democracy's participants were limited to people who can write essays, using pen and paper in a single sitting, stating what they believe in to be published for the world to see

    7. There are many differences between monarchy and fascism. The first is that fascism implies a totalitarian state, monarchy does not. Fascism implies no clear separation between the governing party and the governed, monarchy does. Fascism is socialist, monarchy is not. Fascism aggressively presents an overall vision of what society should be, imposed from the top down, monarchy does not. Fascism forbids “unearned income” on paper, meaning any revenue from investment whatsoever, monarchy does not. Fascism has a preoccupation with militarism and “society as barracks,” monarchy does not. Fascism has a leader that represents himself as carrying out the people’s will, monarchy does not. Fascism is about meritocracy independent of social background, monarchy is about heredity and ancestry. Fascism implies a government in control of much of the economy, monarchy implies a government that spends less than 20 percent of the GDP.

      If someone randomly asked me what the difference between Monarchy and Fascism is, I would invent an answer on the spot. It's nice to finally have an answer I can use as a reference in the future.

    8. He also addresses the assumption made by many that unequal distribution of wealth is inherently unjust. In reality, a healthy nation must have some form of wealth inequality. He cautions that pointing to inequality as if it is a problem that must be solved is a tactic frequently used by politicians who seek to exploit the populace in a democracy by appealing to their most debased instinct—jealousy.

      I agree that Inequality is not itself a problem, but the Inflation we are experiencing in the mid 2020's is bullshit. Defective Aristocrats are getting a free ride. Index funds for index funds sake create weird market conditions

    9. However, other factors are involved in the disparity between the worst and the best of authoritarian governments, the most prominent correlation being the average IQ of the citizenry. Additionally, there is a wider degree of variance between leadership styles in different countries:

      Yes but "Liberal Democracies" have this "Elite Overproduction" problem producing over educated faggots.

      Bloom's 2 Sigma Problem shows that proper aristocratic education can produce proper Geniuses.

    10. Anissimov’s primary source for this discussion is Hans-Hermann Hoppe’s Democracy: The God That Failed, and largely consists of contrasting the low time preference incentives of monarchy with the high time preference incentives of democracy.
    11. According to Anissimov, a study of European history reveals “that de facto nation states form along ethnic and cultural lines and that the United States is in fact composed of several such states.”

      I guess we will see how this plays out as we reach further into the 21st century

    12. Pointing out that liberal democracies prefer to focus on the Greeks rather than Indo-Europeans highlights a pervading theme in the book, that the bias toward democracy has led to lazy thinking and out-of-hand refusal to consider the merits of a more authoritarian style of government such as was found among the Indo-Europeans.

      I mean the "Liberal Institutions" are incentivized to indoctrinate "Liberal Values" into the "Dumb Youths" right?!?!?

    13. Citing Ricardo Duchesne’s The Uniqueness of Western Civilization, he makes the assertion that the founders of Western civilization were not Greek but Aryan:

      I wonder what they mean by "Aryan" here

    14. Anissimov traces the inevitability of hierarchy in society to evolutionary strategies, which can be deduced from observations of non-human primate behavior as well as archeological evidence. The implication is that there will always be leaders and followers and some are better suited for leadership. When this reality is accepted, society can move beyond the inhibiting belief that every individual deserves a vote.

      The experiment, "Let people make their own life decisions" seems seems to be turning in some results

    15. Anissimov’s A Critique of Democracy is short and simple, drawing primarily from a few scholarly sources to make the point that democracy ruins civilization.

      "Gatekeeping is based" and "Democracy" "Rule by LCD(Lowest Common Denominator)" can't maintain the gates.

    16. Neoreaction is inegalitarian, against democracy, and in favor of monarchy. The stereotype of neoreactionaries is that they are computer geeks who are interested in serious (but geeky) ethical issues surrounding technological innovation, as well as more banal and boyish pastimes like video games and Japanese animation.

      So "inegalitarian" and "monarchy".

      I thought "neoreacitonary" meant new-reactionary or as I like to think of it as "meta reactionary". Neoreactionaries interpret Hegel's Dialectic as "Fake and Gay".

      To get out of the "modernist" frame imagine being judged by your great great great grandparents.

      To really get out of the "Overton Window", ask what kind of civilization built these Ancient Megalithic Structures and why they went extinct.


    1. There was one thing to be done before I left, an awkward, unpleasant thing that perhaps had better have been let alone. But I wanted to leave things in order and not just trust that obliging and indifferent sea to sweep my refuse away. I saw Jordan Baker and talked over and around what had happened to us together, and what had happened afterward to me, and she lay perfectly still, listening, in a big chair. She was dressed to play golf, and I remember thinking she looked like a good illustration, her chin raised a little jauntily, her hair the colour of an autumn leaf, her face the same brown tint as the fingerless glove on her knee. When I had finished she told me without comment that she was engaged to another man. I doubted that, though there were several she could have married at a nod of her head, but I pretended to be surprised. For just a minute I wondered if I wasn’t making a mistake, then I thought it all over again quickly and got up to say goodbye.

      wow, the breakup trauma; they didn't end well

    2. om,” I inquired, “what did you say to Wilson that afternoon?” He stared at me without a word, and I knew I had guessed right about those missing hours. I started to turn away, but he took a step after me and grabbed my arm. “I told him the truth,” he said. “He came to the door while we were getting ready to leave, and when I sent down word that we weren’t in he tried to force his way upstairs. He was crazy enough to kill me if I hadn’t told him who owned the car. His hand was on a revolver in his pocket every minute he was in the house—” He broke off defiantly. “What if I did tell him? That fellow had it coming to him. He threw dust into your eyes just like he did in Daisy’s, but he was a tough one. He ran over Myrtle like you’d run over a dog and never even stopped his car.”

      wow, Tom told the truth to George. Poor guy, but George doesn't seem to accept the truth.

    3. He murdered her.” “It was an accident, George.” Wilson shook his head. His eyes narrowed and his mouth widened slightly with the ghost of a superior “Hm!” “I know,” he said definitely. “I’m one of these trusting fellas and I don’t think any harm to nobody, but when I get to know a thing I know it. It was the man in that car. She ran out to speak to him and he wouldn’t stop.” Michaelis had seen this too, but it hadn’t occurred to him that there was any special significance in it. He believed that Mrs. Wilson had been running away from her husband, rather than trying to stop any particular car.

      George starts to accuse Gatsby of being the murderer of his wife. I think he's a poor guy; everyone has been hiding secrets from him.

    4. I wanted to get somebody for him. I wanted to go into the room where he lay and reassure him: “I’ll get somebody for you, Gatsby. Don’t worry. Just trust me and I’ll get somebody for you—”

      Nick cares about Gatsby. He wants to make sure he is not alone. It also shows how abandoned Gatsby is at the end.

    5. After the armistice he tried frantically to get home, but some complication or misunderstanding sent him to Oxford instead. He was worried now—there was a quality of nervous despair in Daisy’s letters. She didn’t see why he couldn’t come. She was feeling the pressure of the world outside, and she wanted to see him and feel his presence beside her and be reassured that she was doing the right thing after all.

      Daisy’s decision was shaped more by fear and pressure than by a lack of love. She needed reassurance and stability, but Gatsby couldn’t give her that at the time; this is the reason why she turned to Tom.

    1. As we are on the precipice of a very large wave of lending, I also have to ask myself, is capitalism itself ready for it? More thoughts behind a paywall

      Is this a reference to new bonds being issued to cover future investment, now that costs are growing beyond the ability to be covered with free cash flow from even the biggest players?

    1. What Is the Quadruple Star System? The system contains four celestial bodies grouped in two pairs. One pair is young red dwarf stars, common and relatively bright. The other pair consists of cold brown dwarfs, faint objects about the size of Jupiter. The brown dwarfs orbit the red dwarfs in a hierarchical arrangement. This system is unique because brown dwarfs rarely have companions and are seldom found in multiple-star systems.

      Quadruple Star System is unique.

    1. ipns.publish now accepts key name strings rather than private keys Names previously publishing using an user controlled private key, will need to be explicitly published again by first importing the key into the keychain (await libp2p.keychain.importKey('my-key', key) and then published with ipns.publish('my-key', ...). uses libp2p v3 and updated block/data stores

      This is a critical improvement. It allows publishing permanent pointers to mutable information.

    1. "The wicked understand, acknowledge and value the Wise—they depend on the Wise for their own cynical gain. The simple don’t see the point of wisdom. Those who do not know how to ask don’t even know wisdom is a thing." —The Four Children of the Seder as the Simulacra Levels

      What does Wicked mean in this context?

      The Wicked exist in opposition to the people of lived experience. The Cynic's experience is derivative of those who actually live life and try to do things.

    2. Level 4: Symbols need not pretend to describe reality.

      Okay, I don't understand this level. Is this some postmodern explanation, the idea that there is no objective truth or morals? Can you go another level deeper: symbols are more real than the world we live in?

      Here's an example, Aristocrats deal with a Simulacrum of reality. Their "Reality" is the social pressures via the competing lifestyles of status created by other Elites. The Symbols of status are what is real and valuable to the Aristocrat.

      A Coal Miner deals with the very "real" reality of mining actual energy out of the ground. The Physical Mine is what is real to the coal miner.

      Both the Coal Miner and Aristocrat deal with Money which is some sort of Simulacrum game theory trust relationship contract thing.

    3. By Strawperson:Level 1: “There’s a lion across the river.” = There’s a lion across the river.Level 2: “There’s a lion across the river.” = I don’t want to go (or have other people go) across the river.Level 3: “There’s a lion across the river.” = I’m with the popular kids who are too cool to go across the river.Level 4: “There’s a lion across the river.” = A firm stance against trans-river expansionism focus grouped well with undecided voters in my constituency.

      I never realized a simple statement could be viewed from so many perspectives. I wonder if an AI Prompt Ecology can do a similar level of analysis from various perspectives and then synthesize action items.

    4. One way to test which level someone is on is what would make them say the opposite of what they say now:Level 1: If they see enough evidence in the opposite direction.Level 2: If people begin responding the opposite way to the same statement.Level 3: If your group starts saying the opposite.Level 4: If you benefit more from saying the opposite.

      The idea of saying stuff people will object to in conversation or debate to check if they are engaging is a pretty great strategy.

      Taking strong opinions is required for the Thesis, Antithesis, Synthesis cognitive pattern

    1. Since it is finite-dimensional, the kernel of the substitution homomorphism ε : K[z] −→ K[φ] given by z → φ is a non-zero polynomial ideal.

      The claim is that ker(ε) ≠ {0}, meaning there exists a non-zero polynomial that annihilates φ.
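
      A short sketch of the standard dimension-count behind this claim (here V is assumed to be the finite-dimensional vector space on which φ acts, as suggested by the surrounding text):

      ```latex
      % K[z] is infinite-dimensional over K, since 1, z, z^2, \dots are linearly independent.
      % K[\varphi] sits inside \operatorname{End}_K(V), which is finite-dimensional, say of dimension n^2.
      % Hence the powers \operatorname{id}, \varphi, \varphi^2, \dots, \varphi^{n^2} are linearly dependent:
      \exists\, a_0, \dots, a_{n^2} \in K \ \text{not all zero}: \quad
          a_0 \operatorname{id} + a_1 \varphi + \dots + a_{n^2} \varphi^{n^2} = 0 .
      % The non-zero polynomial p(z) = a_0 + a_1 z + \dots + a_{n^2} z^{n^2} then satisfies
      \varepsilon(p) = p(\varphi) = 0, \qquad \text{so} \qquad 0 \neq p \in \ker(\varepsilon),
      % and \ker(\varepsilon) is therefore a non-zero ideal of K[z].
      ```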

    1. We analyzed wage and rent data for 400 German independent cities and districts from 2014 to 2024. The rent burden compares the median net income (tax class I, single) with the average monthly rent for a typical 50 m² unit. Net income was calculated using a simplified progressive tax model: deduction rates of 30% (under €30,000), 35% (€30,000–€60,000), and 40% (over €60,000) capture income tax and social security contributions typical for employment relationships. Wage data comes from the Federal Employment Agency and shows median gross monthly earnings for full-time employees. For national wage trends, we use Destatis earnings data (Table 81000-0008). Inflation adjustment is done using the Consumer Price Index (2016–2024: 25.58%). Real wages are calculated using geometric linking rather than simple subtraction to avoid overstating the effect over the eight-year period. Rent data is sourced from the empirica real estate price index, based on the VALUE market database—a collection of prepared real estate market data from more than 100 sources. The rents shown are calculated using a hedonic model to factor out qualitative differences (age, amenities, condition) and reveal pure price trends. The database uses a random sample independent of a specific date, with professional data cleaning methods. Rents include a 25% flat surcharge to estimate 'warm' rent (including utilities/heating). All values refer to asking rents for new contracts, not existing rents, which are typically lower due to tenant protection laws. The 30 percent threshold follows common economic guidelines; German law does not prescribe a fixed income-to-rent ratio. For the living space analysis, profession-specific salaries are only available at the state level. Cities like Frankfurt use the Hesse averages, Munich the Bavarian ones; for the city-states of Berlin and Hamburg, exact values are available. The four professions shown (Geriatric Care, Hospitality, IT, and Electrical Engineering) represent the two biggest winners and two biggest losers in wage growth from 2016–2024, thus spanning the spectrum of wage development in Germany. Data Limitations: This simplified model is for comparative analysis, not individual financial planning. Regional tax differences, household compositions, and existing rental agreements may lead to different results.

      We analyzed wage and rent data for 400 German independent cities and districts from 2014 to 2024. The rent burden compares the median net income (tax class I, single) with the average monthly rent for a typical 50 m² unit. Net income was calculated using a simplified progressive tax model: deduction rates of 30% (under €30,000), 35% (€30,000–€60,000), and 40% (over €60,000) capture income tax and social security contributions typical for employment relationships. Wage data comes from the Federal Employment Agency and shows median gross monthly earnings for full-time employees. For national wage trends, we use Destatis earnings data (Table 81000-0008). Inflation adjustment is done using the Consumer Price Index (2016–2024: 25.58%). Real wages are calculated using geometric linking rather than simple subtraction to avoid overstating the effect over the eight-year period. Rent data is sourced from the Empirica real estate price index, based on the VALUE market database—a collection of prepared real estate market data from more than 100 sources. The rents shown are calculated using a hedonic model to factor out qualitative differences (age, amenities, condition) and reveal pure price trends. The database uses a random sample independent of any specific date, with professional data-cleaning methods. Rents include a 25% flat surcharge to estimate 'warm' rent (including utilities/heating). All values refer to asking rents for new contracts, not existing rents, which are typically lower due to tenant protection laws. The 30 percent threshold follows common economic principles; German law does not prescribe a fixed income-to-rent ratio. For the living space analysis, profession-specific salaries are only available at the state level. Cities like Frankfurt use the Hesse averages, Munich the Bavarian ones; for the city-states of Berlin and Hamburg, exact values are available. The four professions shown (Geriatric Care, Hospitality, IT, and Electrical Engineering) represent the two biggest winners and two biggest losers in wage growth from 2016 to 2024, thus spanning the spectrum of wage development in Germany. Data Limitations: This simplified model is for comparative analysis, not individual financial planning. Regional tax differences, household composition, and existing rental agreements may yield different results.

    2. Why these 4 professions? We chose contrasting examples across the spectrum of real purchasing power (2016–2024): Geriatric Care (+24% real) and Hospitality (+14.3% real) represent the biggest winners, while Software Development (+3% real) and Electrical Engineering (−3.3% real) show how even highly skilled professions failed to keep pace with inflation. Apartment size calculated as 30% of net disposable professional income divided by the local rent per m². Net income calculated using a progressive German tax model (deductions of 30%, 35%, and 40% depending on income level). Data Note: Professional salary data is available at the state level. For non-city-states, the average salaries of the respective state were used (e.g., Frankfurt = Hesse, Munich = Bavaria). Rent data is city-specific.

      Why these four professions? We chose contrasting examples across the spectrum of real purchasing power (2016–2024): Geriatric Care (+24%) and Hospitality (+14.3%) represent the biggest winners, while Software Development (+3%) and Electrical Engineering (−3.3%) show how even highly skilled professions failed to keep pace with inflation. Apartment size calculated as 30% of net disposable professional income divided by the local rent per m². Net income calculated using a progressive German tax model (deductions of 30%, 35%, and 40% depending on income level). Data Note: Professional salary data is available at the state level. For non-city-states, the average salaries of the respective state were used (e.g., Frankfurt = Hesse, Munich = Bavaria). Rent data is city-specific.
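
      A minimal Java sketch of the arithmetic described in the two methodology notes above: rent burden, affordable apartment size, and real wage growth via geometric linking. The bracket thresholds, the 25% warm-rent surcharge, and the 30% rule come from the text; the assumption that the thresholds apply to annual gross income, plus all names and example figures, are mine.

      ```java
      public class RentBurdenSketch {

          // Simplified progressive deduction model: 30% below EUR 30,000 annual gross,
          // 35% up to EUR 60,000, 40% above (assumed to apply to annual gross income).
          static double netMonthlyIncome(double grossMonthly) {
              double grossAnnual = grossMonthly * 12;
              double deduction = grossAnnual < 30_000 ? 0.30
                               : grossAnnual <= 60_000 ? 0.35
                               : 0.40;
              return grossMonthly * (1 - deduction);
          }

          // Rent burden: warm rent for a 50 m^2 flat (cold rent + 25% surcharge) as a share of net income.
          static double rentBurden(double grossMonthly, double coldRentPerSqm) {
              double warmRent = coldRentPerSqm * 50 * 1.25;
              return warmRent / netMonthlyIncome(grossMonthly);
          }

          // Affordable apartment size: 30% of net income divided by the local rent per m^2.
          static double affordableSqm(double grossMonthly, double rentPerSqm) {
              return netMonthlyIncome(grossMonthly) * 0.30 / rentPerSqm;
          }

          // Real wage growth via geometric linking rather than simple subtraction.
          static double realWageGrowth(double nominalGrowth, double inflation) {
              return (1 + nominalGrowth) / (1 + inflation) - 1;
          }

          public static void main(String[] args) {
              System.out.printf("Rent burden:      %.1f%%%n", 100 * rentBurden(4_000, 14.0));
              System.out.printf("Affordable size:  %.1f m2%n", affordableSqm(4_000, 17.5));
              // e.g. 27% nominal growth against 25.58% cumulative inflation leaves roughly 1% real growth.
              System.out.printf("Real wage growth: %.2f%%%n", 100 * realWageGrowth(0.27, 0.2558));
          }
      }
      ```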

    3. In 2016, a geriatric caregiver in Berlin could afford a 44-square-meter apartment. Today, the same professional can only afford 38 square meters—a loss of 6 square meters in less than a decade. In comparison: a software developer in Berlin lost as much as 14 square meters (78m² → 64m²). But while Berlin professionals lost space, geriatric caregivers in Dresden actually gained 17 square meters. The same salary now buys completely different standards of living depending on the place of work.

      In 2016, a geriatric caregiver in Berlin could afford a 44-square-meter apartment. Today, the same professional can only afford 38 square meters—a loss of 6 square meters in less than a decade. In comparison, a software developer in Berlin lost as much as 14 square meters (78 m² → 64 m²). But while Berlin professionals lost space, geriatric caregivers in Dresden actually gained 17 square meters. The same salary now buys completely different standards of living depending on the place of work.

    4. Nationwide, the ability to keep up with rent depends, to a certain extent, on wage development and geography. In affordable cities, rising wages in essential professions have actually improved financial stability. In expensive metropolitan regions, even strong wage growth cannot keep up with housing inflation.

      Nationwide, the ability to keep up with rent depends, to some extent, on wage growth and geography. In affordable cities, rising wages in essential professions have actually improved financial stability. In expensive metropolitan regions, even strong wage growth cannot keep up with housing inflation.

    5. Rising rents affect everyone, but not everyone faces the housing market with the same financial stability. A detailed analysis of wage development by profession reveals a surprising pattern and challenges old certainties about who is moving up and who is falling behind.

      Rising rents affect everyone, but not everyone faces the housing market with the same level of financial stability. A detailed analysis of wage development by profession reveals a surprising pattern and challenges old certainties about who is moving up and who is falling behind.

    6. We have compiled all the data in this interactive table. Here you can view the rent burden for each district and independent city, as well as its development over the last ten years.

      We have compiled all the data in this interactive table. Here, you can view the rent burden for each district and independent city, along with its development over the last 10 years.

    7. While these metropolitan effects increase housing costs, other districts, especially in eastern Germany and in industrial centers, remain comparatively affordable. In 2024, the lowest rent burdens are found in regions like Salzgitter, where a single-person household spends only 14.7% of their net income on a 50 m² apartment. Other areas in the lower group include Chemnitz (15.4%), Holzminden (16.0%), and Wolfsburg (16.3%), all well below the 20% threshold. Many of these more affordable regions are former industrial centers like Gelsenkirchen, Hagen, Salzgitter, or Wolfsburg, or rural and semi-rural eastern German districts like Chemnitz, Zwickau, Vogtlandkreis, and Salzlandkreis. These are not classic commuter belts of large cities or places with strong population growth. These regions tend to have slow or negative population growth, limited rental pressure, and only a moderate increase in housing demand. Rents here have remained relatively stable, and even with lower average incomes, households in these districts can maintain a comfortably low rent-to-income ratio—a rare form of financial freedom in today's market.

      While these metropolitan effects raise housing costs, other districts, especially in eastern Germany and industrial centers, remain comparatively affordable. In 2024, the lowest rent burdens are found in regions like Salzgitter, where a single-person household spends only 14.7% of their net income on a 50 m² apartment. Other areas in the lower group include Chemnitz (15.4%), Holzminden (16.0%), and Wolfsburg (16.3%), all well below the 20% threshold. Many of these more affordable regions are former industrial centers like Gelsenkirchen, Hagen, Salzgitter, or Wolfsburg, or rural and semi-rural eastern German districts such as Chemnitz, Zwickau, Vogtlandkreis, and Salzlandkreis. These are not classic commuter belts of large cities or places with strong population growth. These regions tend to have slow or negative population growth, limited rental pressure, and only a moderate increase in housing demand. Rents here have remained relatively stable, and even with lower average incomes, households in these districts can maintain a comfortably low rent-to-income ratio—a rare form of financial freedom in today's market.

    8. It is striking that not only have the inner cities become more expensive, but also the surrounding suburbs. Many people have moved to the outskirts in search of cheaper rents and more space, but the increased demand has also driven up prices there. As a result, commuters in the Munich region now have some of the highest rent-to-income ratios in Germany. At the top of the districts with the highest rent burden in 2024 is Fürstenfeldbruck, where tenants have to spend almost 40% of their net income on rent. The city of Munich follows with 39%, and the surrounding districts of Dachau (38%), Ebersberg (38%), and Miesbach (37%) are only slightly behind—and well above the 30 percent mark.

      It is striking that not only have the inner cities become more expensive, but also the surrounding suburbs. Many people have moved to the outskirts in search of cheaper rents and more space, but the increased demand has also driven up prices there. As a result, commuters in the Munich region now have some of the highest rent-to-income ratios in Germany. At the top of the list of districts with the highest rent burden in 2024 is Fürstenfeldbruck, where tenants spend almost 40% of their net income on rent. The city of Munich follows with 39%, and the surrounding districts of Dachau (38%), Ebersberg (38%), and Miesbach (37%) are only slightly behind and well above the 30 percent mark.

    9. German cities are recording one of the sharpest increases in rent burden. Even significant salary increases in these metropolitan areas are often not enough to keep pace with rising rents. For example, in Berlin: since 2014, rents have risen by 91%, while nominal wages have only increased by 45%. In Munich, the situation is only slightly better: rents climbed by 53%, while wages in the same period only rose by 38%. A similar trend can be seen in Frankfurt and Düsseldorf: rent increases of +42% and +44% respectively are set against wage gains of 32% and 29%. These cities illustrate where the real pressure in the housing market lies: in metropolitan areas with the strongest labor markets, rent inflation is outpacing income growth. Some cities show a more balanced relationship. In Hamburg, rents rose by 38%, while wages increased by 31%. Dresden shows a similar pattern: rents +41%, wages +38%. And then there are cities like Leipzig: still comparatively affordable, but rapidly changing. In Leipzig, rents have risen by 74% in the last ten years, while wages have increased by 49%. The gap is smaller than in Berlin or Munich, but the dynamic is remarkable.

      German cities are recording one of the sharpest increases in rent burden. Even significant salary increases in these metropolitan areas are often not enough to keep pace with rising rents. For example, in Berlin, rents have risen by 91% since 2014, while nominal wages have increased by only 45%. In Munich, the situation is only slightly better: rents climbed by 53%, while wages over the same period rose by only 38%. A similar trend can be seen in Frankfurt and Düsseldorf: rent increases of 42% and 44%, respectively, are set against wage gains of 32% and 29%. These cities illustrate where the real pressure in the housing market lies: in metropolitan areas with the strongest labor markets, where rent inflation is outpacing income growth. Some cities show a more balanced relationship. In Hamburg, rents rose by 38%, while wages increased by 31%. Dresden shows a similar pattern: rents +41%, wages +38%. And then there are cities like Leipzig: still comparatively affordable, but rapidly changing. In Leipzig, rents have risen by 74% in the last ten years, while wages have increased by 49%. The gap is smaller than in Berlin or Munich, but the dynamic is remarkable.

    10. A look at the 30 percent mark—the point at which housing costs begin to undermine financial stability—shows just how much the burden has intensified. In 2014, only 6 districts crossed this critical threshold, all near Munich. Ten years later, this number has more than quadrupled: in 2024, 26 regions are now among the particularly burdened. This increase shows how the housing crisis has spread far beyond Germany's traditional hotspots.

      A look at the 30 percent mark—the point at which housing costs begin to undermine financial stability—shows just how much the burden has intensified. In 2014, only six districts crossed this critical threshold, all near Munich. Ten years later, this number has more than quadrupled: in 2024, 26 regions are now among the particularly burdened. This increase shows how the housing crisis has spread far beyond Germany's traditional hotspots.

    11. One of the most reliable metrics for housing affordability is the rent-to-income ratio, which is the share of net salary spent on rent. A common guideline is: anyone who spends more than 30% of their net income on housing has little financial cushion. When this threshold is exceeded, the scope for savings and unforeseen expenses shrinks—even when nominal wages are rising.

      One of the most reliable metrics for housing affordability is the rent-to-income ratio, which is the share of net salary spent on rent. A standard guideline is: anyone who spends more than 30% of their net income on housing has little financial cushion. When this threshold is exceeded, the scope for savings and unforeseen expenses shrinks—even when nominal wages are rising.

    12. For years, the nominal wage growth of 27 percent was often presented as proof of a strong labor market. But this "pay bump" tells only part of the story. At the same time, prices rose: due to pandemic-related bottlenecks, the energy crisis, and permanently rising living costs. In the end, only about one percent of the wage increase remained in real terms. The following chart shows how much purchasing power in Germany has actually declined since 2016.

      For years, the nominal wage growth of 27 percent was often presented as proof of a strong labor market. But this "pay bump" tells only part of the story. At the same time, prices rose due to pandemic-related bottlenecks, the energy crisis, and permanently rising living costs. In the end, only about one percent of the wage increase remained in real terms. The following chart shows how much Germany's purchasing power has actually declined since 2016.

    13. German wages have risen by 27 percent over the past eight years, but a good 25 percent of these gains have been wiped out by inflation. What remains is a real wage growth of just 1.3 percent. This minimal progress evaporates almost completely because rents in many places are rising even faster than incomes. In Berlin, for example, rents increased by 91 percent, in Leipzig by 74 percent, and in Munich by 53 percent. For comparison: In 2014, only six districts and independent cities in Germany exceeded the critical rent burden threshold of 30 percent. Ten years later, there are already 26. The pressure is no longer limited to major hubs; it has become a nationwide phenomenon.

      German wages have risen by 27 percent over the past eight years, but a good 25 percent of those gains have been wiped out by inflation. What remains is real wage growth of just 1.3 percent. This minimal progress evaporates almost completely because rents in many places are rising even faster than incomes. In Berlin, for example, rents increased by 91 percent, in Leipzig by 74 percent, and in Munich by 53 percent. For comparison: In 2014, only six districts and independent cities in Germany exceeded the critical rent burden threshold of 30 percent. Ten years later, there are already 26. The pressure is no longer limited to major hubs; it has become a nationwide phenomenon.

    14. In 2016, software developers in Berlin earned a median net monthly income of about €2,802 per month and could afford to rent around 78 m². By 2024, the salary for the same position had risen to about €3,956—yet their rental budget stretched to only 64 m². Despite over €1,100 more in income, about 14 m² of living space were lost. This is not an isolated case. In most cities and professions, rising wages are being outpaced by even faster-growing rents.

      In 2016, software developers in Berlin earned a median net monthly income of about €2,802 and could afford to rent around 78 m². By 2024, the salary for the same position had risen to about €3,956—yet their rental budget stretched to only 64 m². Despite over €1,100 more in income, about 14 m² of living space were lost. This is not an isolated case. In most cities and professions, rising wages are being outpaced by even faster-growing rents.

    1. Pierre-François Bouchard’s men discovered the ancient stone slab
      <center>

      Rosetta Stone (RS)

      </center>

      Useful Links

      1. Rosetta Stone - Wikipedia
      2. Explore the Rosetta Stone - British Museum
      3. Rosetta Stone - Britannica
      4. What is Rosetta Stone and why is it important?
      5. Rosetta Stone - Smithsonian

      On July 19, 1799, Pierre-François Bouchard's men discovered an ancient "basalt" slab in Rosetta (local name Rashid), Egypt. It was covered with three types of writing: Demotic, hieroglyphics, and ancient Greek. Scholars traced the origin of the RS to 196 BCE, in Egypt's Ptolemaic era.

      Click map of the Ptolemaic dynasty

      <center>The Rosetta Stone decoded by AI</center> Click this YouTube Link

    2. How were the hieroglyphics decoded?

      • Europeans had been missing a key piece of the puzzle for 2,000 years. They had been trying to figure out how to read hieroglyphics for centuries, but the only instructions on how to do so came from ancient Greek and Roman writers who insisted that they were ideographic, using pictures to indicate concepts. While that was sometimes true, they could also be phonetic, indicating sounds the same way alphabetic languages do. This misunderstanding was inherited all the way to the 1800s.
      • Medieval Muslim researchers tried to crack the code and failed, though two did discover that some of the script lined up with Coptic, a descendant of ancient Egyptian. Later, when Renaissance alchemists attempted to read the texts hoping to learn ancient spells, healing practices, and other wonders, they had even less luck.
      • It wasn't until 1814 that an English polymath named Thomas Young made the first real progress. Young, a medical doctor, scientist, and linguist, at first just busied himself with translating the Demotic section of the Rosetta Stone. However, after a conversation with another researcher (who suggested that the Ptolemies, being Greek, might have written their names phonetically in hieroglyphics) he decided to jump sections. He reasoned that finding the royal name should be easy enough, since it had been suggested that royal names were always enclosed in a ring that we now know as a cartouche, and sure enough he found the name Ptolemy. Upon further study, Young found 80 similarities between the hieroglyphic section of the stone and the Demotic one.
      • Young's work stalled as he incorrectly assumed that hieroglyphics were logographic symbols with each symbol representing a word (like Chinese or Japanese) and that only the Greek names would have phonetic equivalents.

      In comes Jean-François Champollion!

      • Champollion had been attempting to translate hieroglyphics from his knowledge of Coptic and Demotic, believing that they were in fact phonetic. However, being in France, he had to work from print copies of the stone and probably never got to see the actual Rosetta Stone.
      • Champollion used his earlier work on Demotic and his knowledge of Coptic to reconstruct theoretical cartouches of common Egyptian royal names. His hope was that these cartouches, should he find them in inscriptions, would gradually unlock more hieroglyphic characters. This he did while also feuding with rivals and periodically going into exile for his continued support of Napoleon.
      • Then, when Banks (see below about Banks) sent him a print of the inscriptions on his obelisk, Champollion stopped dead. There on the side was his reconstruction of Cleopatra! He went into a feverish blitz of work and began to realize that Egyptian hieroglyphics were a mix of ideographic and phonetic characters.
      • It was in 1822 when it all clicked. He read the name Thutmose from an imported inscription, then checked it against the Rosetta Stone. He then bolted from his desk, ran down the street to his brother's house, and supposedly screamed "I've got it" before collapsing in a dramatic faint.
      • In 1829 he fulfilled his lifelong dream of traveling to Egypt. Once there, he found a vanished world beginning to speak to him. Using his dictionary and grammar system, he read the words of gods and priests off the temple walls. He uncovered kings whose names had not been spoken in millennia, and in the papyrus scrolls preserved in the arid deserts of Upper Egypt he found the words of the common people, even though he'd never laid eyes on it.

      About Banks mentioned above

      John Banks was touring Egypt when he fell in love with a 22-foot-tall, six-ton obelisk and decided that it would look great in his yard, as it also had inscriptions in hieroglyphics and Greek. He hoped it would be a second Rosetta Stone. So he did what anyone would do: he hired an Italian circus strongman to coordinate hauling it back to his estate in England.

    3. <center>

      How the Rosetta Stone was discovered

      </center>

      <div style="background-color: silver; color: black; font-weight: bold;"> Nearly 2 000 years after the Rosetta stone decree was written, the French military engineer Pierre Francois Bouchard was repairing the defenses of an Arab Fort strengthening it against the ottoman Fleet that's expected to arrive within a short time. Only a year ago the French army had invaded Egypt and attempted to set up a colony but that increasingly looked doomed to failure as both the Ottomans and British were mustering to attack the French and the Army has to consistently battle internal revolts. </div>

      Discovery of the Rosetta Stone

      <div style="background-color: DarkCyan; color: white; font-weight: bold;"> During the repair work the engineers find the stone seemingly used as scrap construction material in an old wall. They immediately brought it to Bouchard's attention as they're supposed to do with any artifact. Seeing the script on the broken Stone Bouchard immediately realized the implications of this find as this inscription was in three different languages and it could be the key to finally deciphering Ancient Egyptian Hieroglyphics. so he sends out a message saying he's found a curious artifact near Rosetta the French name for Rasheed. </div>

      How the Rosetta Stone was deciphered

      <div style="background-color: LightCyan; color: black; font-weight: bold;"> Bouchard sent out a message saying he's found a curious artifact near Rosetta, the French name for Rasheed. He then passed the stone to General Mano transferring it to his tent to be cleaned and the Greek to be translated while they dug in hopes of finding its missing pieces but then as the French army fought off an Ottoman landing at Abu kirbei Bouchard accompanied the 1700 pound Stone to the savants headquarters at Cairo. It arrived just in time for Napoleon to see it before he ditched the Expedition and sailed back to France. In doing so he left the savants with a deteriorating military situation alongside a Priceless but extremely heavy artifact that they had no way of getting back to France. </div>

      Note: The French expedition also had an academic component of over 150 so-called savants: scientists, writers, linguists, and other academics who had come along, setting up a research institute in Cairo where they studied everything from local wildlife to ancient artifacts.

      The deciphering of the Rosetta Stone

      Rosetta Stone changes hands from the French to the British:

      It soon became obvious that the mysterious third language was not Syriac, as originally thought, but the Demotic mentioned in classical sources. At first they tried copying it by hand, but it proved too intricate. Then they just smeared ink on the front and pressed it with paper, like a printing press. It worked! All the while, the French army hauled the stone around, even to battlefields, unwilling to leave it unguarded. Prints of the inscription had already reached Paris, which was good, because the Rosetta Stone itself could not be transported there.

      In 1801 General Menou, now in charge of the expedition, signed a surrender agreement with the British and the Ottomans. One of the provisions was that all of the artifacts retrieved during the French expedition to Egypt were now spoils of war and the personal property of King George III, especially the Rosetta Stone. In fact, the British were so pleased with the acquisition that they painted on the side "captured in Egypt by the British Army in 1801".

      (Image: Captured-by-British-Rosetta-Stone)

      A year later, King George donated it to the British Museum.

    1. The new litmus test isn’t “Does it scale?” It’s: “Does it spread? Does it take root? Can it compost and regrow?”

      Very much yes. Scaling is a useless metaphor; spread and evolution are much better ones. Effective behaviour is contagious. The invisible hand of networks / communities [[Of Scaling TV Salons and the Invisible Hand of Networks – Interdependent Thoughts 20250803205329]]

    2. So the real work is mediation. Not purity, not retreat, but balancing these tensions in practice: holding space where native paths can grow without being co-opted or crushed, while at the same time still reaching out to shift the wider terrain.

      Systems convening, social learning landscapes [[Systems convening Wenger Trayner 20230825170358]]

    3. The problem arises when less-native, often externally imposed systems (driven by capitalist or institutional agendas) treat these messy, friction-full spaces as broken or backwards.

      The likelihood of this increases with social distance (no community) and in places where the underlying logic differs (cf. [[Waarheid en kennis kent historische periodes 20250914161603]], Foucault's periods of epistemic assumptions, at a smaller scale), with clashes between differently positioned 'Overton'-type windows of accepted discourse, and with Rorty's point about being forced to word the new in the language of the old. It's a language game underneath.

    4. It’s important to recognise that friction – the mess, the slowness, the need for constant negotiation – is not a flaw in native paths, it’s a virtue. It’s how trust, mutuality, and accountability are sustained over time.

      Again yes. (The same is true for e.g. the EU. What others see as its weaknesses, endless talk and no swift action, is precisely why it endures and has more resilience and robustness than acknowledged.)

    5. This is the norm across many #4opens spaces: a near-total lack of interest in building or maintaining shared paths. It’s a textbook case of right-wing Tragedy of the Commons. Developers show up when it suits them, use the space for their narrow needs, then drift off without contributing to the upkeep. They treat community like free infrastructure – something passive they can extract from – rather than a living, tended path we need.

      By definition, we're not talking about community then. The behaviour mentioned is that of people who do not think they're part of a bigger whole here. Then, by definition, whatever output exists is simply there to use, as there is no social contract involved. Social asymmetry is then a given, and with it a breakdown of the commons.

    1. Reviewer #1 (Public review):

      Summary:

      This study investigates how the brain processes facial expressions across development by analyzing intracranial EEG (iEEG) data from children (ages 5-10) and post-childhood individuals (ages 13-55). The researchers used a short film containing emotional facial expressions and applied AI-based models to decode brain responses to facial emotions. They found that in children, facial emotion information is represented primarily in the posterior superior temporal cortex (pSTC) - a sensory processing area - but not in the dorsolateral prefrontal cortex (DLPFC), which is involved in higher-level social cognition. In contrast, post-childhood individuals showed emotion encoding in both regions. Importantly, the complexity of emotions encoded in the pSTC increased with age, particularly for socially nuanced emotions like embarrassment, guilt, and pride. The authors claim that these findings suggest that emotion recognition matures through increasing involvement of the prefrontal cortex, supporting a developmental trajectory where top-down modulation enhances understanding of complex emotions as children grow older.

      Strengths:

      (1) The inclusion of pediatric iEEG makes this study uniquely positioned to offer high-resolution temporal and spatial insights into neural development compared to non-invasive approaches, e.g., fMRI, scalp EEG, etc.

      (2) Using a naturalistic film paradigm enhances ecological validity compared to static image tasks often used in emotion studies.

      (3) The idea of using state-of-the-art AI models to extract facial emotion features allows for high-dimensional and dynamic emotion labeling in real time.

      Weaknesses:

      (1) The study has notable limitations that constrain the generalizability and depth of its conclusions. The sample size was very small, with only nine children included and just two having sufficient electrode coverage in the posterior superior temporal cortex (pSTC), which weakens the reliability and statistical power of the findings, especially for analyses involving age. Authors pointed out that a similar sample size has been used in previous iEEG studies, but the cited works focus on adults and do not look at the developmental perspectives. Similar work looking at developmental changes in iEEG signals usually includes many more subjects (e.g., n = 101 children from Cross ZR et al., Nature Human Behavior, 2025) to account for inter-subject variabilities.

      (2) Electrode coverage was also uneven across brain regions, with not all participants having electrodes in both the dorsolateral prefrontal cortex (DLPFC) and pSTC, making the conclusion regarding the different developmental changes between DLPFC and pSTC hard to interpret (related to point 3 below). It is understood that it is rare to have such iEEG data collected in this age group, and the electrode location is only determined by clinical needs. However, the scientific rigor should not be compromised by the limited data access. It's the authors' decision whether such an approach is valid and appropriate to address the scientific questions, here the developmental changes in the brain, given all the advantages and constraints of the data modality.

      (3) The developmental differences observed were based on cross-sectional comparisons rather than longitudinal data, reducing the ability to draw causal conclusions about developmental trajectories. Also, see comments in point 2.

      (4) Moreover, the analysis focused narrowly on DLPFC, neglecting other relevant prefrontal areas such as the orbitofrontal cortex (OFC) and anterior cingulate cortex (ACC), which play key roles in emotion and social processing. Agree that this might be beyond the scope of this paper, but a discussion section might be insightful.

      (5) Although the use of a naturalistic film stimulus enhances ecological validity, it comes at the cost of experimental control, with no behavioral confirmation of the emotions perceived by participants and uncertain model validity for complex emotional expressions in children. A non-facial music block that could have served as a control was available but not analyzed. The validation of AI model's emotional output needs to be tested. It is understood that we cannot collect these behavioral data retrospectively within the recorded subjects. Maybe potential post-hoc experiments and analyses could be done, e.g., collect behavioral, emotional perception data from age-matched healthy subjects.

      (6) Generalizability is further limited by the fact that all participants were neurosurgical patients, potentially with neurological conditions such as epilepsy that may influence brain responses. At least some behavioral measures between the patient population and the healthy groups should be done to ensure the perception of emotions is similar.

      (7) Additionally, the high temporal resolution of intracranial EEG was not fully utilized, as data were downsampled and averaged in 500-ms windows. It seems like the authors are trying to compromise the iEEG data analyses to match up with the AI's output resolution, which is 2Hz. It is not clear then why not directly use fMRI, which is non-invasive and seems to meet the needs here already. The advantages of using iEEG in this study are missing here.

      (8) Finally, the absence of behavioral measures or eye-tracking data makes it difficult to directly link neural activity to emotional understanding or determine which facial features participants attended to. Related to point 5 as well.

      Comments on revisions:

      A behavioral measurement will help address a lot of these questions. If the data continues collecting, additional subjects with iEEG recording and also behavioral measurements would be valuable.

    2. Reviewer #2 (Public review):

      Summary:

      In this paper, Fan et al. aim to characterize how neural representations of facial emotions evolve from childhood to adulthood. Using intracranial EEG recordings from participants aged 5 to 55, the authors assess the encoding of emotional content in high-level cortical regions. They report that while both the posterior superior temporal cortex (pSTC) and dorsolateral prefrontal cortex (DLPFC) are involved in representing facial emotions in older individuals, only the pSTC shows significant encoding in children. Moreover, the encoding of complex emotions in the pSTC appears to strengthen with age. These findings lead the authors to suggest that young children rely more on low-level sensory areas and propose a developmental shift from reliance on lower-level sensory areas in early childhood to increased top-down modulation by the prefrontal cortex as individuals mature.

      Strengths:

      (1) Rare and valuable dataset: The use of intracranial EEG recordings in a developmental sample is highly unusual and provides a unique opportunity to investigate neural dynamics with both high spatial and temporal resolution.

      (2) Developmentally relevant design: The broad age range and cross-sectional design are well-suited to explore age-related changes in neural representations.

      (3) Ecological validity: The use of naturalistic stimuli (movie clips) increases the ecological relevance of the findings.

      (4) Feature-based analysis: The authors employ AI-based tools to extract emotion-related features from naturalistic stimuli, which enables a data-driven approach to decoding neural representations of emotional content. This method allows for a more fine-grained analysis of emotion processing beyond traditional categorical labels.

      Weaknesses:

      (1) While the authors leverage Hume AI, a tool pre-trained on a large dataset, its specific performance on the stimuli used in this study remains unverified. To strengthen the foundation of the analysis, it would be important to confirm that Hume AI's emotional classifications align with human perception for these particular videos. A straightforward way to address this would be to recruit human raters to evaluate the emotional content of the stimuli and compare their ratings to the model's outputs.

      (2) Although the study includes data from four children with pSTC coverage-an increase from the initial submission-the sample size remains modest compared to recent iEEG studies in the field.

      (3) The "post-childhood" group (ages 13-55) conflates several distinct neurodevelopmental periods, including adolescence, young adulthood, and middle adulthood. As a finer age stratification is likely not feasible with the current sample size, I would suggest authors temper their developmental conclusions.

      (4) The analysis of DLPFC-pSTC directional connectivity would be significantly strengthened by modeling it as a continuous function of age across all participants, rather than relying on an unbalanced comparison between a single child and a (N=7) post-childhood group. This continuous approach would provide a more powerful and nuanced view of the developmental trajectory. I would also suggest including the result in the main text.

    3. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment

      This study examines a valuable question regarding the developmental trajectory of neural mechanisms supporting facial expression processing. Leveraging a rare intracranial EEG (iEEG) dataset including both children and adults, the authors reported that facial expression recognition mainly engaged the posterior superior temporal cortex (pSTC) among children, while both pSTC and the prefrontal cortex were engaged among adults. However, the sample size is relatively small, with analyses appearing incomplete to fully support the primary claims. 

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This study investigates how the brain processes facial expressions across development by analyzing intracranial EEG (iEEG) data from children (ages 5-10) and post-childhood individuals (ages 13-55). The researchers used a short film containing emotional facial expressions and applied AI-based models to decode brain responses to facial emotions. They found that in children, facial emotion information is represented primarily in the posterior superior temporal cortex (pSTC) - a sensory processing area - but not in the dorsolateral prefrontal cortex (DLPFC), which is involved in higher-level social cognition. In contrast, post-childhood individuals showed emotion encoding in both regions. Importantly, the complexity of emotions encoded in the pSTC increased with age, particularly for socially nuanced emotions like embarrassment, guilt, and pride. The authors claim that these findings suggest that emotion recognition matures through increasing involvement of the prefrontal cortex, supporting a developmental trajectory where top-down modulation enhances understanding of complex emotions as children grow older.

      Strengths:

      (1) The inclusion of pediatric iEEG makes this study uniquely positioned to offer high-resolution temporal and spatial insights into neural development compared to non-invasive approaches, e.g., fMRI, scalp EEG, etc.

      (2) Using a naturalistic film paradigm enhances ecological validity compared to static image tasks often used in emotion studies.

      (3) The idea of using state-of-the-art AI models to extract facial emotion features allows for high-dimensional and dynamic emotion labeling in real time

      Weaknesses:

      (1) The study has notable limitations that constrain the generalizability and depth of its conclusions. The sample size was very small, with only nine children included and just two having sufficient electrode coverage in the posterior superior temporal cortex (pSTC), which weakens the reliability and statistical power of the findings, especially for analyses involving age

      We appreciated the reviewer’s point regarding the constrained sample size.

      As an invasive method, iEEG recordings can only be obtained from patients undergoing electrode implantation for clinical purposes. Thus, iEEG data from young children are extremely rare,  and rapidly increasing the sample size within a few years is not feasible. However, we are confident in the reliability of our main conclusions. Specifically, 8 children (53 recording contacts in total) and 13 control participants (99 recording contacts in total) with electrode coverage in the DLPFC are included in our DLPFC analysis. This sample size is comparable to other iEEG studies with similar experiment designs [1-3]. 

      For pSTC, we returned to the data set and found another two children who had pSTC coverage. After including these children’s data, the group-level analysis using a permutation test showed that children’s pSTC significantly encodes facial emotion in naturalistic contexts (Figure 3B). Notably, the two new children’s (S33 and S49) responses were highly consistent with our previous observations. Moreover, the averaged prediction accuracy in children’s pSTC (r<sub>speech</sub>=0.1565) was highly comparable to that in the post-childhood group (r<sub>speech</sub>=0.1515).

      (1) Zheng, J. et al. Multiplexing of Theta and Alpha Rhythms in the Amygdala-Hippocampal Circuit Supports Pattern Separation of Emotional Information. Neuron 102, 887-898.e5 (2019).

      (2) Diamond, J. M. et al. Focal seizures induce spatiotemporally organized spiking activity in the human cortex. Nat. Commun. 15, 7075 (2024).

      (3) Schrouff, J. et al. Fast temporal dynamics and causal relevance of face processing in the human temporal cortex. Nat. Commun. 11, 656 (2020).

      (2) Electrode coverage was also uneven across brain regions, with not all participants having electrodes in both the dorsolateral prefrontal cortex (DLPFC) and pSTC, and most coverage limited to the left hemisphere-hindering within-subject comparisons and limiting insights into lateralization.

      The electrode coverage in each patient is determined entirely by the clinical needs. Only a few patients have electrodes in both DLPFC and pSTC because these two regions are far apart, so it’s rare for a single patient’s suspected seizure network to span such a large territory. However, it does not affect our results, as most iEEG studies combine data from multiple patients to achieve sufficient electrode coverage in each target brain area. As our data are mainly from left hemisphere (due to the clinical needs), this study was not designed to examine whether there is a difference between hemispheres in emotion encoding. Nevertheless, lateralization remains an interesting question that should be addressed in future research, and we have noted this limitation in the Discussion (Page 8, in the last paragraph of the Discussion).

      (3) The developmental differences observed were based on cross-sectional comparisons rather than longitudinal data, reducing the ability to draw causal conclusions about developmental trajectories.  

      In the context of pediatric intracranial EEG, longitudinal data collection is not feasible due to the invasive nature of electrode implantation. We have added this point to the Discussion to acknowledge that while our results reveal robust age-related differences in the cortical encoding of facial emotions, longitudinal studies using non-invasive methods will be essential to directly track developmental trajectories (Page 8, in the last paragraph of Discussion). In addition, we revised our manuscript to avoid emphasizing causal conclusions about developmental trajectories in the current study (for example, we use “imply” instead of “suggest” in the fifth paragraph of Discussion).

      (4) Moreover, the analysis focused narrowly on DLPFC, neglecting other relevant prefrontal areas such as the orbitofrontal cortex (OFC) and anterior cingulate cortex (ACC), which play key roles in emotion and social processing.

      We agree that both OFC and ACC are critically involved in emotion and social processing. However, we have no recordings from these areas because ECoG rarely covers the ACC or OFC due to technical constraints. We have noted this limitation in the Discussion (Page 8, in the last paragraph of Discussion). Future follow-up studies using sEEG or non-invasive imaging methods could be used to examine developmental patterns in these regions.

      (5) Although the use of a naturalistic film stimulus enhances ecological validity, it comes at the cost of experimental control, with no behavioral confirmation of the emotions perceived by participants and uncertain model validity for complex emotional expressions in children. A nonfacial music block that could have served as a control was available but not analyzed. 

      The facial emotion features used in our encoding models were extracted by Hume AI models, which were trained on human intensity ratings of large-scale, experimentally controlled emotional expression data[1-2]. Thus, the outputs of Hume AI model reflect what typical facial expressions convey, that is, the presented facial emotion. Our goal of the present study was to examine how facial emotions presented in the videos are encoded in the human brain at different developmental stages. We agree that children’s interpretation of complex emotions may differ from that of adults, resulting in different perceived emotion (i.e., the emotion that the observer subjectively interprets). Behavioral ratings are necessary to study the encoding of subjectively perceived emotion, which is a very interesting direction but beyond the scope of the present work. We have added a paragraph in the Discussion (see Page 8) to explicitly note that our study focused on the encoding of presented emotion.

      We appreciated the reviewer’s point regarding the value of non-facial music blocks. However, although there are segments in the music condition that have no faces presented, these cannot be used as a control condition to test whether the encoding model’s prediction accuracy in pSTC or DLPFC drops to chance when no facial emotion is present. This is because, in the absence of faces, no extracted emotion features are available to be used for the construction of the encoding model (see Author response image 1 below). Thus, we chose to use a different control analysis for the present work. For children’s pSTC, we shuffled the facial emotion features in time to generate a null distribution, which was then used to test the statistical significance of the encoding models (see Methods/Encoding model fitting for details).
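
      A minimal sketch of this kind of time-shuffle significance test, assuming a ridge encoding model, circular shifts of the feature matrix, cross-validated R² as the accuracy score, and synthetic data in place of the real recordings (all illustrative choices, not the authors' exact pipeline):

      ```python
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      features = rng.standard_normal((1200, 48))   # 48 emotion features per 500-ms bin (synthetic stand-in)
      hfb = rng.standard_normal(1200)              # HFB power of one recording contact (synthetic stand-in)

      def encoding_accuracy(X, y):
          """Cross-validated prediction accuracy of a ridge encoding model."""
          return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

      observed = encoding_accuracy(features, hfb)

      # Null distribution: break the feature-response alignment by shifting the
      # feature time series in time, then refit the model each time.
      null = np.array([
          encoding_accuracy(np.roll(features, rng.integers(1, len(hfb)), axis=0), hfb)
          for _ in range(200)
      ])
      p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
      print(f"observed accuracy = {observed:.3f}, permutation p = {p_value:.3f}")
      ```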

      (1) Brooks, J. A. et al. Deep learning reveals what facial expressions mean to people in different cultures. iScience 27, 109175 (2024).

      (2) Brooks, J. A. et al. Deep learning reveals what vocal bursts express in different cultures. Nat. Hum. Behav. 7, 240–250 (2023).

      Author response image 1.

      Time courses of Hume AI extracted facial expression features for the first block of the music condition. Only the top 5 facial expressions are shown here due to space limitations.

      (6) Generalizability is further limited by the fact that all participants were neurosurgical patients, potentially with neurological conditions such as epilepsy that may influence brain responses. 

      We appreciated the reviewer’s point. However, iEEG data can only be obtained from clinical populations (usually epilepsy patients) who have electrode implantation. Given current knowledge about focal epilepsy and its potential effects on brain activity, researchers believe that epilepsy-affected brains can serve as a reasonable proxy for normal human brains when confounding influences are minimized through rigorous procedures [1]. In our study, we took several steps to ensure data quality: (1) all data segments containing epileptiform discharges were identified and removed at the very beginning of preprocessing, (2) patients were asked to participate in the experiment several hours outside the window of seizures. Please see Methods for the data quality check description (Page 9/ Experimental procedures and iEEG data processing).

      (1) Parvizi J, Kastner S. 2018. Promises and limitations of human intracranial electroencephalography. Nat Neurosci 21:474–483. doi:10.1038/s41593-018-0108-2

      (7) Additionally, the high temporal resolution of intracranial EEG was not fully utilized, as data were down-sampled and averaged in 500-ms windows.  

      We agree that one of the major advantages of iEEG is its millisecond-level temporal resolution. In our case, the main reason for down-sampling was that the time series of facial emotion features extracted from the videos had a temporal resolution of 2 Hz, and these were used for modelling the neural responses. In naturalistic contexts, facial emotion features do not change on a millisecond timescale, so a 500 ms window is sufficient to capture the relevant dynamics. Another advantage of iEEG is its tolerance to motion, which is excessive in young children (e.g., 5-year-olds). This makes our dataset uniquely valuable, suggesting robust representation in the pSTC but not in the DLPFC in young children. Moreover, since our method framework (Figure 1) does not rely on high temporal resolution, it can be transferred to non-invasive modalities such as fMRI, enabling future studies to test these developmental patterns in larger populations.

      (8) Finally, the absence of behavioral measures or eye-tracking data makes it difficult to directly link neural activity to emotional understanding or determine which facial features participants attended to.

      We appreciated this point. Part of our rationale is presented in our response to (5) for the absence of behavioral measures. Following the same rationale, identifying which facial features participants attended to is not necessary for testing our main hypotheses because our analyses examined responses to the overall emotional content of the faces. However, we agree and recommend future studies use eye-tracking and corresponding behavioral measures in studies of subjective emotional understanding. 

      Reviewer #2 (Public review):

      Summary:

      In this paper, Fan et al. aim to characterize how neural representations of facial emotions evolve from childhood to adulthood. Using intracranial EEG recordings from participants aged 5 to 55, the authors assess the encoding of emotional content in high-level cortical regions. They report that while both the posterior superior temporal cortex (pSTC) and dorsolateral prefrontal cortex (DLPFC) are involved in representing facial emotions in older individuals, only the pSTC shows significant encoding in children. Moreover, the encoding of complex emotions in the pSTC appears to strengthen with age. These findings lead the authors to suggest that young children rely more on low-level sensory areas and propose a developmental shift from reliance on lower-level sensory areas in early childhood to increased top-down modulation by the prefrontal cortex as individuals mature.

      Strengths: 

      (1) Rare and valuable dataset: The use of intracranial EEG recordings in a developmental sample is highly unusual and provides a unique opportunity to investigate neural dynamics with both high spatial and temporal resolution. 

      (2) Developmentally relevant design: The broad age range and cross-sectional design are well-suited to explore age-related changes in neural representations. 

      (3) Ecological validity: The use of naturalistic stimuli (movie clips) increases the ecological relevance of the findings. 

      (4) Feature-based analysis: The authors employ AIbased tools to extract emotion-related features from naturalistic stimuli, which enables a data-driven approach to decoding neural representations of emotional content. This method allows for a more fine-grained analysis of emotion processing beyond traditional categorical labels. 

      Weaknesses: 

      (1) The emotional stimuli included facial expressions embedded in speech or music, making it difficult to isolate neural responses to facial emotion per se from those related to speech content or music-induced emotion. 

      We thank the reviewer for raising this important point. We agree that in naturalistic settings, faces often co-occur with speech, and that these sources of emotion can overlap. However, emotions induced by background music have distinct temporal dynamics which are separable from facial emotion (see Author response image 2 (A) and (B) below). In addition, faces can convey a wide range of emotions (48 categories in the Hume AI model), whereas music conveys far fewer (13 categories reported by a recent study [1]). Thus, when using facial emotion feature time series as regressors (with 48 emotion categories and rapid temporal dynamics), the model performance will reflect neural encoding of facial emotion in the music condition, rather than the slower and lower-dimensional emotion from music.

      For the speech condition, we acknowledge that it is difficult to fully isolate neural responses to facial emotion from those to speech when the emotional content from faces and speech highly overlaps. However, in our study, (1) the time courses of emotion features from face and voice are still different (Author response image 2 (C) and (D)), (2) our main finding that DLPFC encodes facial expression information in post-childhood individuals but not in young children was found in both the speech and music conditions (Figure 2B and 2C). In the music condition, neural responses to facial emotion are not affected by speech. Thus, we have included the DLPFC results from the music condition in the revised manuscript (Figure 2C), and we acknowledge that this issue should be carefully considered in future studies using videos with speech, as we have indicated in the future directions in the last paragraph of Discussion.

      (1) Cowen, A. S., Fang, X., Sauter, D. & Keltner, D. What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures. Proc Natl Acad Sci USA 117, 1924–1934 (2020).

      Author response image 2.

      Time courses of the amusement. (A) and (B) Amusement conveyed by face or music in a 30-s music block. Facial emotion features are extracted by Hume AI. For emotion from music, we approximated the amusement time course using a weighted combination of low-level acoustic features (RMS energy, spectral centroid, MFCCs), which capture intensity, brightness, and timbre cues linked to amusement. Notice that music continues when there are no faces presented. (C) and (D) Amusement conveyed by face or voice in a 30-s speech block. From 0 to 5 seconds, a girl is introducing her friend to a stranger. The camera focuses on the friend, who appears nervous, while the girl’s voice sounds cheerful. This mismatch explains why the shapes of the two time series differ at the beginning. Such situations occur frequently in naturalistic movies

      (2) While the authors leveraged Hume AI to extract facial expression features from the video stimuli, they did not provide any validation of the tool's accuracy or reliability in the context of their dataset. It remains unclear how well the AI-derived emotion ratings align with human perception, particularly given the complexity and variability of naturalistic stimuli. Without such validation, it is difficult to assess the interpretability and robustness of the decoding results based on these features.  

      Hume AI models were trained and validated by human intensity ratings of large-scale, experimentally controlled emotional expression data [1-2]. The training process used both manual annotations from human raters and deep neural networks. Over 3000 human raters categorized facial expressions into emotion categories and rated on a 1-100 intensity scale. Thus, the outputs of Hume AI model reflect what typical facial expressions convey (based on how people actually interpret them), that is, the presented facial emotion. Our goal of the present study was to examine how facial emotions presented in the videos are encoded in the human brain at different developmental stages. We agree that the interpretation of facial emotions may be different in individual participants, resulting in different perceived emotion (i.e., the emotion that the observer subjectively interprets). Behavioral ratings are necessary to study the encoding of subjectively perceived emotion, which is a very interesting direction but beyond the scope of the present work. We have added text in the Discussion to explicitly note that our study focused on the encoding of presented emotion (second paragraph in Page 8).

      (1) Brooks, J. A. et al. Deep learning reveals what facial expressions mean to people in different cultures. iScience 27, 109175 (2024).

      (2) Brooks, J. A. et al. Deep learning reveals what vocal bursts express in different cultures. Nat. Hum. Behav. 7, 240–250 (2023).

      (3) Only two children had relevant pSTC coverage, severely limiting the reliability and generalizability of results.  

      We appreciated this point and agreed with both reviewers who raised it as a significant concern. As described in response to reviewer 1 (comment 1), we have added data from another two children who have pSTC coverage. Group-level analysis using permutation test showed that children’s pSTC significantly encode facial emotion in naturalistic contexts (Figure 3B). Because iEEG data from young children are extremely rare, rapidly increasing the sample size within a few years is not feasible. However, we are confident in the reliability of our conclusion that children’s pSTC can encode facial emotion. First,  the two new children’s responses (S33 and S49) from pSTC were highly consistent with our previous observations (see individual data in Figure 3B). Second, the averaged prediction accuracy in children’s pSTC (r<sub>speech</sub>=0.1565) was highly comparable to that in post-childhood group (r<sub>speech</sub>=0.1515).

      (4) The rationale for focusing exclusively on high-frequency activity for decoding emotion representations is not provided, nor are results from other frequency bands explored.   

      We focused on high-frequency broadband (HFB) activity because it is widely considered to reflect the responses of local neuronal populations near the recording electrode, whereas low-frequency oscillations in the theta, alpha, and beta ranges are thought to serve as carrier frequencies for long-range communication across distributed networks[1-2]. Since our study aimed to examine the representation of facial emotion in localized cortical regions (DLPFC and pSTC), HFB activity provides the most direct measure of the relevant neural responses. We have added this rationale to the manuscript (Page 3).

      (1) Parvizi, J. & Kastner, S. Promises and limitations of human intracranial electroencephalography. Nat. Neurosci. 21, 474–483 (2018).

      (2) Buzsaki, G. Rhythms of the Brain. (Oxford University Press, Oxford, 2006).

      (5) The hypothesis of developmental emergence of top-down prefrontal modulation is not directly tested. No connectivity or co-activation analyses are reported, and the number of participants with simultaneous coverage of pSTC and DLPFC is not specified.  

      Directional connectivity analysis results were not shown because only one child has simultaneous coverage of pSTC and DLPFC. However, the Granger causality results from the post-childhood group (N=7) clearly showed that the influence in the alpha/beta band from DLPFC to pSTC (top-down) is gradually increased after the onset of face presentation (Author response image 3, below left, plotted in red). By comparison, the influence in the alpha/beta band from pSTC to DLPFC (bottom-up) is gradually decreased after the onset of face presentation (Author response image 3, below left, blue curve). The influence in the alpha/beta band from DLPFC to pSTC was significantly increased at 750 and 1250 ms after the face presentation (face vs nonface, paired t-test, Bonferroni-corrected P=0.005, 0.006), suggesting an enhanced top-down modulation in the post-childhood group while watching emotional faces. Interestingly, this top-down influence appears very different in the 8-year-old child at 1250 ms after the face presentation (Author response image 3, below left, black curve).

      As we cannot draw direct conclusions from the single-subject sample presented here, the top-down hypothesis is introduced only as a possible explanation for our current results. We have removed potentially misleading statements, and we plan to test this hypothesis directly using MEG in the future.

      Author response image 3.

      Difference of Granger causality indices (face – nonface) in the alpha/beta and gamma bands for both directions. We identified a series of face onsets in the movie that participants watched. Each trial was defined as -0.1 to 1.5 s relative to the onset. For the non-face control trials, we used houses, animals and scenes. Granger causality was calculated for the 0-0.5 s, 0.5-1 s and 1-1.5 s time windows. For the post-childhood group, GC indices were averaged across participants. Error bars are SEM.

      (6) The "post-childhood" group spans ages 13-55, conflating adolescence, young adulthood, and middle age. Developmental conclusions would benefit from finer age stratification.  

      We appreciate this insightful comment. Our current sample size does not allow such stratification. But we plan to address this important issue in future MEG studies with larger cohorts.

      (7) The so-called "complex emotions" (e.g., embarrassment, pride, guilt, interest) used in the study often require contextual information, such as speech or narrative cues, for accurate interpretation, and are not typically discernible from facial expressions alone. As such, the observed age-related increase in neural encoding of these emotions may reflect not solely the maturation of facial emotion perception, but rather the development of integrative processing that combines facial, linguistic, and contextual cues. This raises the possibility that the reported effects are driven in part by language comprehension or broader social-cognitive integration, rather than by changes in facial expression processing per se.  

      We agree with this interpretation. Indeed, our results already show that speech influences the encoding of facial emotion in the DLPFC differently in the childhood and post-childhood groups (Figure 2D), suggesting that children’s ability to integrate multiple cues is still developing. Future studies are needed to systematically examine how linguistic cues and prior experiences contribute to the understanding of complex emotions from faces, which we have added to our future directions section (last paragraph in Discussion, Page 8-9 ).

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors): 

      In the introduction: "These neuroimaging data imply that social and emotional experiences shape the prefrontal cortex's involvement in processing the emotional meaning of faces throughout development, probably through top-down modulation of early sensory areas." Aren't these supposed to be iEEG data instead of neuroimaging? 

      Corrected.

      Reviewer #2 (Recommendations for the authors):

      This manuscript would benefit from several improvements to strengthen the validity and interpretability of the findings:

      (1) Increase the sample size, especially for children with pSTC coverage. 

      We added data from another two children who have pSTC coverage. Please see our response to reviewer 2’s comment 3 and reviewer 1’s comment 1.

      (2) Include directional connectivity analyses to test the proposed top-down modulation from DLPFC to pSTC. 

      Thanks for the suggestion. Please see our response to reviewer 2’s comment 5.

      (3) Use controlled stimuli in an additional experiment to separate the effects of facial expression, speech, and music. 

      This is an excellent point. However, iEEG data collection from children is an exceptionally rare opportunity and typically requires many years, so we are unable to add a controlled-stimulus experiment to the current study. We plan to consider using controlled stimuli to study the processing of complex emotion using non-invasive method in the future. In addition, please see our response to reviewer 2’s comment 1 for a description of how neural responses to facial expression and music are separated in our study.

    1. The Covid-19 pandemic brought these problems to the fore. While other countries resorted to debt and fiscal stimulus to deal with the crisis, the Mexican government insisted on maintaining strict budgetary balance. There were no massive bailouts, no universal direct aid, and no increase in public investment to mitigate the economic blow. The IMF rewarded the government for its pandemic response with Special Drawing Rights. As a result, austerity remained the guiding principle even in these exceptional circumstances: protecting the macroeconomic balance while sacrificing the income of millions of families.

      Notable

    1. In Shaw’s research, finding a place to experience community was more important for queer players than LGBTQ characters being present in the game, and while gaymers didn’t purchase games for queer content, they discussed games where queer content was included (2012).

      Meaning fanbases. Get together and talk about this gay character, to feel like you belong somewhere, to be socially validated.

      Yet, the character is the excuse to meet for the first time. Even if as time goes on this can change, the initial pulse comes from some media exposure, or someone sharing theirs.

    2. The data is also “unfiltered” – participants are less prone to adjusting their responses when voicing their momentary thoughts. These responses become a mix of experiences, reflections, etc., making them less linear. However, one weakness with think-aloud is that the understanding of the responses can be limited. There can be gaps in, or a complete lack of, justifications and reasoning

      You have to train the person doing a think aloud. They also learn to do it if they engage in multiple sessions.

    Annotators

    1. The security of IoT networks has become a significant concern owing to the increasing count of cyber threats. Traditional Intrusion Detection Systems (IDS) struggle to detect sophisticated attacks in real-time due to resource constraints and evolving attack patterns. This study proposes a novel IDS that integrates deep learning (DL) and machine learning (ML) approaches to improve IoT security. The main objective is to develop a hybrid IDS combining Feed Forward Neural Networks (FFNN) and XGBoost to improve attack detection accuracy while minimizing computational overhead. The proposed methodology involves data preprocessing, feature selection utilizing Principal Component Analysis (PCA), and classification employing FFNN and XGBoost. The model is trained and evaluated on the CIC IoT 2023 dataset, which comprises real-time attack data, ensuring its practical relevance. The proposed model is estimated on the CIC IoT 2023 dataset, demonstrating superior accuracy (99%) compared to existing IDS techniques. This study provides valuable insights into improving IDS models for IoT security, addressing challenges such as dataset imbalance, feature selection, and classification accuracy. Results demonstrate that the hybrid FFNN-XGBoost model outperforms standalone FFNN and XGBoost classifiers, achieving an accuracy of 99%. Compared to existing IDS models, the proposed approach significantly enhances precision, recall, and F1-score, ensuring robust intrusion detection. This research contributes to IoT security by introducing a scalable and efficient hybrid IDS model. The findings offer a strong basis for future advancements in intrusion detection using DL and ML approaches.

      mmmm
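
      The abstract above outlines a pipeline (preprocessing → PCA feature selection → an FFNN plus XGBoost classifier) without showing how the pieces are wired together. A minimal sketch of one plausible reading, assuming soft voting between the two classifiers and placeholder file and column names (not the paper's actual code or data layout):

      ```python
      # Hypothetical sketch of a PCA -> FFNN + XGBoost "hybrid" IDS pipeline.
      # File name, column name, and hyperparameters are placeholders.
      import pandas as pd
      from sklearn.decomposition import PCA
      from sklearn.ensemble import VotingClassifier
      from sklearn.metrics import classification_report
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier   # stands in for the FFNN
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import LabelEncoder, StandardScaler
      from xgboost import XGBClassifier

      df = pd.read_csv("cic_iot_2023.csv")                # placeholder path
      X = df.drop(columns=["label"])                      # placeholder label column
      y = LabelEncoder().fit_transform(df["label"])       # XGBoost expects integer classes
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.2, stratify=y, random_state=0)

      ffnn = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=200, random_state=0)
      xgb = XGBClassifier(n_estimators=300, max_depth=6, random_state=0)

      # Scale, reduce dimensionality with PCA, then average the two classifiers'
      # predicted probabilities (soft voting); one plausible reading of "hybrid".
      model = make_pipeline(
          StandardScaler(),
          PCA(n_components=20),                           # assumes >= 20 input features
          VotingClassifier([("ffnn", ffnn), ("xgb", xgb)], voting="soft"),
      )
      model.fit(X_train, y_train)
      print(classification_report(y_test, model.predict(X_test)))
      ```

      Stacking (feeding the FFNN's probabilities into XGBoost) would be another common way to realise the "hybrid" idea; the paper may combine the models differently.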

    1. The rise of the Internet of Things (IoT) has transformed our daily lives by connecting objects to the Internet, thereby creating interactive, automated environments. However, this rapid expansion raises major security concerns, particularly regarding intrusion detection. Traditional intrusion detection systems (IDSs) are often ill-suited to the dynamic and varied networks characteristic of the IoT. Machine learning is emerging as a promising solution to these challenges, offering the intelligence and flexibility needed to counter complex and evolving threats. This comprehensive review explores different machine learning approaches for intrusion detection in IoT systems, covering supervised, unsupervised, and deep learning methods, as well as hybrid models. It assesses their effectiveness, limitations, and practical applications, highlighting the potential of machine learning to enhance the security of IoT systems. In addition, the study examines current industry issues and trends, highlighting the importance of ongoing research to keep pace with the rapidly evolving IoT security ecosystem.

      hgjyg

  2. srconstantin.wordpress.com srconstantin.wordpress.com
    1. the sense of “everyone but me is in on the joke, there is a Thing that I don’t understand myself but is the most important Thing, and I must approximate or imitate or cargo-cult the Thing, and anybody who doesn’t is bad.”

      I mentioned Rhesus ladders in another comment (https://hypothes.is/a/gvP9DmJfEeyj-zfV0Z4Zsw) and also the relationship to Chesterton's fence in reply to a comment from someone else (https://hypothes.is/a/r7YFemJgEeymEnOBlFNH5A), but this captures the spirit of my comments elsewhere about false diagnoses perfectly.

    1. Even more concerning for Kyiv is the fact that Europe, despite its collective annual GDP of €17.9tn, has chosen to turn to the bond markets rather than reach into its own pockets.

      Excellent point

    1. Disguise structural and sentence-level faults as intentional strategies. In this light, Infinite Jest is no longer poorly-plotted and inconclusive, but ‘fractally structured like a Sierpiński gasket.’3 The hundreds of pages of solecistic flummery in his story collections are not really a grating catalogue of cliches, but an incisive parody of corporate-speak and other modern argots (George Saunders, another basically talentless writer, employs this strategy constantly, besides much else from the Wallace playbook). When it comes time to swoon into obvious sentimentality and Hallmark-style kitsch, just point out you’re aware that’s what it is and are doing it intentionally too. This will let the reader think they’re in on a complicated post-ironic work with real feeling behind it, rather than simply reading bad writing.

      Nicely put.

    1. if they wanted to respond to you, they had to do it on their own blog, and link back. The effect of this was that there were few equivalents of the worst aspects of social media that broke through.

      There was social symmetry. If you wanted to be nasty you had to do it on your own site. Consequences were for yourself. Why on things like Mastodon I prefer small to tiny instances, so that the people on an instance have the same sense of social symmetry and give and take come from the same social distance.

    2. The growth of social media in particular has wiped out a particular kind of blogging that I sometimes miss: a text-based dialogue between bloggers that required more thought and care than dashing off 180 or 240 characters and calling it a day. In order to participate in the dialogue, you had to invest some effort

      Indeed, blogs as distributed conversations. Still on the hunt for that effect.

    1. Jeremy Keith on the importance of blog responses on people's own blogs. It makes it symmetric and a conversation, making misbehaving basically a non-thing. He asks about not showing webmentions of likes and boosts. I keep them for discovery, so that other readers can maybe connect to each other.

    1. Copying SQLite databases from remote sources can be done faster by .dump-ing them to a text file; the dump will be smaller than the db if you have lots of indexes (there to speed queries up). Then compress the text file, download it, unzip, and reconstruct it locally as a SQLite db.
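
      A minimal Python sketch of the same dump → compress → rebuild idea, using sqlite3's iterdump() (which emits the same SQL as the CLI's .dump); file names are placeholders:

      ```python
      # Sketch: serialise a SQLite DB to compressed SQL text, then rebuild it locally.
      # Index *contents* are not stored in the dump (only CREATE INDEX statements),
      # which is why the text file can be much smaller than the original .db file.
      import gzip
      import sqlite3

      # On the source machine: dump to compressed text.
      src = sqlite3.connect("source.db")
      with gzip.open("dump.sql.gz", "wt", encoding="utf-8") as f:
          for line in src.iterdump():
              f.write(line + "\n")
      src.close()

      # After transferring dump.sql.gz: reconstruct the database locally
      # (indexes are rebuilt as the CREATE INDEX statements are replayed).
      dst = sqlite3.connect("local.db")
      with gzip.open("dump.sql.gz", "rt", encoding="utf-8") as f:
          dst.executescript(f.read())
      dst.close()
      ```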

    1. Applying it to the design of the web we aim to create a system where we can do everything offline and in local networks and the connection to the internet is optional. This will help the neuronal groups be more resilient and fast. We invite others to join as co-creators to build a local first version of the Internet together.

      For Cardumem the approach, as I have said elsewhere, is different: choosing a federated architecture that includes servers running locally, with less architectural complexity.

    2. here is a new piece of technology called CRDTs (conflict-free replicated data types) that allow reaching the same state irrespective of the order in which changes are received, so each device can resolve conflicts independently - without relying on a single master copy.
    3. This separateness is not the biggest problem; what is more dangerous is that in each of these versions of the Internet, the neurons can’t talk and express themselves directly to each other. Servers control our communication with those closest to us: family members, neighbors and local communities.The problems with cloud-based architecture don't stop there. Not only do central servers control who can do what, but their control is ubiquitous. Even when texting your family member on the couch next to you, the signal from your device to theirs needs to go to the application server first, and only after that, return to your own living room.

      An architecture where anyone can easily download and run a full server and connect it with others is, for practical purposes, a federated architecture, with the possibility of becoming P2P.

      A federated/P2P architecture is no guarantee of decentralization, as we saw happen with the web, I would say because of the difficulty of setting up and deploying servers. And while extreme centralizing forces are exerted on systems like email and podcasts, these remain federated. Moreover, the fediverse has gained new momentum after the purchase of Twitter, but it faces its own challenges.

      I would say what is needed is not only a frugal way to get the technology running, but also a way to make it available to third parties for their collective uses. Here the bottleneck seems to be hosting, and we should look at how to make it cheap and friendly.
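
      A tiny sketch of the CRDT idea quoted in point 2 above: with a grow-only counter, each replica increments only its own slot and merges take per-replica maxima, so devices converge to the same value regardless of the order in which changes arrive (illustrative code, not from the quoted text):

      ```python
      # Sketch of a grow-only counter (G-Counter), one of the simplest CRDTs.
      # Each device increments only its own slot; merge takes the per-device
      # maximum, so replicas converge no matter the order in which merges happen.
      class GCounter:
          def __init__(self, device_id):
              self.device_id = device_id
              self.counts = {}                      # device_id -> count

          def increment(self, n=1):
              self.counts[self.device_id] = self.counts.get(self.device_id, 0) + n

          def merge(self, other):
              for dev, cnt in other.counts.items():
                  self.counts[dev] = max(self.counts.get(dev, 0), cnt)

          def value(self):
              return sum(self.counts.values())

      # Two devices edit offline, then sync in either order and still agree:
      a, b = GCounter("phone"), GCounter("laptop")
      a.increment(2)
      b.increment(3)
      a.merge(b)
      b.merge(a)
      assert a.value() == b.value() == 5
      ```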

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      *The authors have a longstanding focus and reputation on single cell sequencing technology development and application. In this current study, the authors developed a novel single-cell multi-omic assay termed "T-ChIC" so that to jointly profile the histone modifications along with the full-length transcriptome from the same single cells, analyzed the dynamic relationship between chromatin state and gene expression during zebrafish development and cell fate determination. In general, the assay works well, the data look convincing and conclusions are beneficial to the community. *

      Thank you for your positive feedback.

      *There are several single-cell methodologies all claim to co-profile chromatin modifications and gene expression from the same individual cell, such as CoTECH, Paired-tag and others. Although T-ChIC employs pA-Mnase and IVT to obtain these modalities from single cells which are different, could the author provide some direct comparisons among all these technologies to see whether T-ChIC outperforms? *

      In a separate technical manuscript describing the application of T-ChIC in mouse cells (Zeller, Blotenburg et al 2024, bioRxiv, 2024.05. 09.593364), we have provided a direct comparison of data quality between T-ChIC and other single-cell methods for chromatin-RNA co-profiling (Please refer to Fig. 1C,D and Fig. S1D, E, of the preprint). We show that compared to other methods, T-ChIC is able to better preserve the expected biological relationship between the histone modifications and gene expression in single cells.

      *In current study, T-ChIC profiled H3K27me3 and H3K4me1 modifications, these data look great. How about other histone modifications (eg H3K9me3 and H3K36me3) and transcription factors? *

      While we haven't profiled these other modifications using T-ChIC in Zebrafish, we have previously published high quality data on these histone modifications using the sortChIC method, on which T-ChIC is based (Zeller, Yeung et al 2023). In our comparison, we find that histone modification profiles between T-ChIC and sortChIC are very similar (Fig. S1C in Zeller, Blotenburg et al 2024). Therefore the method is expected to work as well for the other histone marks.

      *T-ChIC can detect full length transcription from the same single cells, but in FigS3, the authors still used other published single cell transcriptomics to annotate the cell types, this seems unnecessary? *

      We used the published scRNA-seq dataset with a larger number of cells to homogenize our cell type labels with these datasets, but we also cross-referenced our cluster-specific marker genes with ZFIN and homogenized the cell type labels with ZFIN ontology. This way our annotation is in line with previous datasets but not biased by them. Due to the relatively small size of our data, we didn't expect to identify unique, rare cell types, but our full-length total RNA assay helps us identify non-coding RNAs such as miRNAs previously undetected in scRNA assays, which we have now highlighted in the new Figure S1c.

      *Throughout the manuscript, the authors found some interesting dynamics between chromatin state and gene expression during embryogenesis, independent approaches should be used to validate these findings, such as IHC staining or RNA ISH? *

      We appreciate that the ISH staining could be useful to validate the expression pattern of genes identified in this study. But to validate the relationships between the histone marks and gene expression, we need to combine these stainings with functional genomics experiments, such as PRC2-related knockouts. Due to their complexity, such experiments are beyond the scope of this manuscript (see also reply to reviewer #3, comment #4 for details).

      *In Fig2 and FigS4, the authors showed H3K27me3 cis spreading during development, this looks really interesting. Is this zebrafish specific? H3K27me3 ChIP-seq or CutTag data from mouse and/or human embryos should be reanalyzed and used to compare. The authors could speculate some possible mechanisms to explain this spreading pattern? *

      Thanks for the suggestion. In this revision, we have reanalysed a dataset of mouse ChIP-seq of H3K27me3 during mouse embryonic development by Xiang et al (Nature Genetics 2019) and find similar evidence of spreading of H3K27me3 signal from their pre-marked promoter regions at E5.5 epiblast upon differentiation (new Figure S4i). This observation, combined with the fact that the mechanism of pre-marking of promoters by PRC1-PRC2 interaction seems to be conserved between the two species (see (Hickey et al., 2022), (Mei et al., 2021) & (Chen et al., 2021)), suggests that the dynamics of H3K27me3 pattern establishment is conserved across vertebrates. But we think a high-resolution profiling via a method like T-ChIC would be more useful to demonstrate the dynamics of signal spreading during mouse embryonic development in the future. We have discussed this further in our revised manuscript.

      Reviewer #1 (Significance (Required)):

      *The authors have a longstanding focus and reputation on single cell sequencing technology development and application. In this current study, the authors developed a novel single-cell multi-omic assay termed "T-ChIC" so that to jointly profile the histone modifications along with the full-length transcriptome from the same single cells, analyzed the dynamic relationship between chromatin state and gene expression during zebrafish development and cell fate determination. In general, the assay works well, the data look convincing and conclusions are beneficial to the community. *

      Thank you very much for your supportive remarks.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      *Joint analysis of multiple modalities in single cells will provide a comprehensive view of cell fate states. In this manuscript, Bhardwaj et al developed a single-cell multi-omics assay, T-ChIC, to simultaneously capture histone modifications and full-length transcriptome and applied the method on early embryos of zebrafish. The authors observed a decoupled relationship between the chromatin modifications and gene expression at early developmental stages. The correlation becomes stronger as development proceeds, as genes are silenced by the cis-spreading of the repressive marker H3K27me3. Overall, the work is well performed, and the results are meaningful and interesting to readers in the epigenomic and embryonic development fields. There are some concerns before the manuscript is considered for publication. *

      We thank the reviewer for appreciating the quality of our study.

      *Major concerns: *

1. *A major point of this study is to understand embryo development, especially gastrulation, with the power of scMulti-Omics assay. However, the current analysis didn't focus on deciphering the biology of gastrulation, i.e., lineage-specific pioneer factors that help to reform the chromatin landscape. The majority of the data analysis is based on the temporal dimension, but not the cell-type-specific dimension, which reduces the value of the single-cell assay.*

We focused on lineage-specific transcription factor activity during gastrulation in Figures 4 and S8 of the manuscript and discovered several interesting regulators active at this stage. During our analysis of the temporal dimension in the rest of the manuscript, we also classified the cells by their germ layer and "latent" developmental time, taking full advantage of the single-cell nature of our data. Additionally, we have now added the cell-type-specific H3K27-demethylation results for 24 hpf in response to your comment below. We hope that these results, together with our openly available dataset, demonstrate the advantage of the single-cell aspect of our dataset.

2. *The cis-spreading of H3K27me3 with developmental time is interesting. Considering H3k27me3 could mark bivalent regions, especially in pluripotent cells, there must be some regions that have lost H3k27me3 signals during development. Therefore, it's confusing that the authors didn't find these regions (30% spreading, 70% stable). The authors should explain and discuss this issue.*

Indeed, we see that ~30% of the bins enriched at the pluripotent stage spread, while 70% do not seem to spread. In line with earlier observations (Hickey et al., 2022; Vastenhouw et al., 2010), we find that H3K27me3 is almost absent in the zygote and is still being accumulated until 24 hpf and beyond. Therefore, the majority of sites in the genome still seem to be in the process of gaining H3K27me3 until 24 hpf, explaining why we see mostly "spreading" and "stable" states. Considering that most of these sites are at promoters and show signs of bivalency, we think that these sites are marked for activation or silencing at later stages. We have discussed this in the manuscript ("Discussion"). However, in response to this and an earlier comment, we went back and searched for genes that show H3K27-demethylation in the most mature cell types (at 24 hpf) in our data, and found a subset of genes that show K27 demethylation after acquiring it earlier. Interestingly, most of the top genes in this list are well known as developmentally important for their corresponding cell types. We have added this new result and discussed it further in the manuscript (Fig. 2d,e; Supplementary Table 3).
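For illustration, the minimal sketch below shows one way such demethylated candidates can be flagged from gene-level signal summaries; the toy table and its column names ("gene", "celltype", "stage", "signal") are assumptions, not our actual data structures.

```python
# Minimal sketch: flag genes whose H3K27me3 signal at 24 hpf drops below their
# earlier maximum within a given cell type (candidate "K27-demethylated" genes).
import numpy as np
import pandas as pd

# Hypothetical long-format summary of per-cell gene-level ChIC counts.
df = pd.DataFrame({
    "gene":     ["geneA"] * 3 + ["geneB"] * 3,
    "celltype": ["somite"] * 6,
    "stage":    ["6hpf", "12hpf", "24hpf"] * 2,
    "signal":   [0.2, 1.5, 0.3, 0.1, 0.8, 1.6],
})

per_stage = (df.groupby(["celltype", "gene", "stage"])["signal"]
               .mean()
               .unstack("stage"))
earlier_max = per_stage[["6hpf", "12hpf"]].max(axis=1)
log2_change = np.log2((per_stage["24hpf"] + 0.01) / (earlier_max + 0.01))

# Genes losing more than 2-fold H3K27me3 signal by 24 hpf in this cell type:
print(log2_change[log2_change < -1].sort_values())
```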

      *Minors: *

1. *The authors cited two scMulti-omics studies in the introduction, but there have been lots of single-cell multi-omics studies published recently. The authors should cite and consider them.*

      We have cited more single-cell chromatin and multiome studies focussed on early embryogenesis in the introduction now.

2. *T-ChIC seems to have been presented in a previous paper (ref 15). Therefore, Fig. 1a is unnecessary to show.*

Figure 1a shows a summary of our zebrafish T-ChIC workflow, which includes the unique sample multiplexing and sorting strategy we use to reduce batch effects; this was not part of the original T-ChIC workflow. We have now clarified this in "Results".

3. *It's better to show the percentage of cell numbers (30% vs 70%) for each heatmap in Figure 2C.*

      We have added the numbers to the corresponding legends.

4. *Please double-check the citation of Fig. S4C, which may not relate to the conclusion of signal differences between lineages.*

The citation appears to be correct (Fig. S4C supplements Fig. 2C but shows mesodermal-lineage cells); however, the description in the legend was somewhat misleading. We have clarified this now.

5. *Figure 4C has not been cited or mentioned in the main text. Please check.*

      Thanks for pointing it out. We have cited it in Results now.

      Reviewer #2 (Significance (Required)):

      *Strengths: This work utilized a new single-cell multi-omics method and generated abundant epigenomics and transcriptomics datasets for cells covering multiple key developmental stages of zebrafish. *

      *Limitations: The data analysis was superficial and mainly focused on the correspondence between the two modalities. The discussion of developmental biology was limited. *

      *Advance: The zebrafish single-cell datasets are valuable. The T-ChIC method is new and interesting. *

      *The audience will be specialized and from basic research fields, such as developmental biology, epigenomics, bioinformatics, etc. *

      *I'm more specialized in the direction of single-cell epigenomics, gene regulation, 3D genomics, etc. *

      Thank you for your remarks.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      *This manuscript introduces T‑ChIC, a single‑cell multi‑omics workflow that jointly profiles full‑length transcripts and histone modifications (H3K27me3 and H3K4me1) and applies it to early zebrafish embryos (4-24 hpf). The study convincingly demonstrates that chromatin-transcription coupling strengthens during gastrulation and somitogenesis, that promoter‑anchored H3K27me3 spreads in cis to enforce developmental gene silencing, and that integrating TF chromatin status with expression can predict lineage‑specific activators and repressors. *

      *Major concerns *

      1. *Independent biological replicates are absent, so the authors should process at least one additional clutch of embryos for key stages (e.g., 6 hpf and 12 hpf) with T‑ChIC and demonstrate that the resulting data match the current dataset. *

Thanks for pointing this out. We had, in fact, performed T-ChIC experiments in four rounds of biological replicates (independent clutches of embryos) and merged the data to create our resource. Although not all timepoints were profiled in each replicate, two timepoints (10 and 24 hpf) are present in all four, and the cell-type composition of the replicates at these two timepoints is very similar. We have added new plots in Figure S2f and a new supplementary table (#1) to highlight the presence of biological replicates.
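For illustration, a minimal sketch of this kind of composition comparison is shown below; the replicate and cell-type labels are hypothetical stand-ins for the released per-cell metadata.

```python
# Minimal sketch: compare cell-type composition across biological replicates at a
# shared timepoint; near-1 off-diagonal correlations indicate similar composition.
import pandas as pd

cells = pd.DataFrame({
    "replicate": ["rep1"] * 4 + ["rep2"] * 4,
    "celltype":  ["neural", "somite", "epidermis", "neural",
                  "neural", "somite", "somite", "epidermis"],
})
props = (cells.groupby("replicate")["celltype"]
              .value_counts(normalize=True)
              .unstack(fill_value=0.0))
print(props)                               # per-replicate cell-type proportions
print(props.T.corr(method="pearson"))      # replicate-by-replicate similarity
```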

2. *The TF‑activity regression model uses an arbitrary R² ≥ 0.6 threshold; cross‑validated R² distributions, permutation‑based FDR control, and effect‑size confidence intervals are needed to justify this cut‑off.*

Thank you for this suggestion. We did use 10-fold cross-validation during training and obtained the R² values of TF motifs from the independent test set as an unbiased estimate. However, the cutoff of R² > 0.6 used to select TFs for classification was indeed arbitrary. In the revised version, we now report FDR-adjusted p-values for these R² estimates based on permutation tests, select TFs using an adjusted p-value cutoff, and have updated Supplementary Table #4 to include the p-values for all tested TFs. We also see that our arbitrary cutoff of 0.6 was, in fact, too stringent, and we can classify many more TFs based on the FDR cutoffs. We have updated the numbers reported in Fig. 4c accordingly. Moreover, Supplementary Table #4 contains the complete list of TFs used in the analysis to allow others to choose their own cutoff.
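For illustration only, the sketch below shows one way to obtain a permutation-based FDR for cross-validated R² values; the toy data, the Ridge regressor, and the permutation count are assumptions and not the manuscript's actual implementation.

```python
# Minimal sketch of permutation-based significance for cross-validated R² values.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                          # e.g. motif/chromatin features per gene
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=300)    # e.g. expression response

def cv_r2(X, y, cv=10):
    """Mean held-out R² across cross-validation folds."""
    return cross_val_score(Ridge(), X, y, cv=cv, scoring="r2").mean()

observed = cv_r2(X, y)
null = np.array([cv_r2(X, rng.permutation(y)) for _ in range(200)])
p_value = (1 + np.sum(null >= observed)) / (1 + null.size)

# In practice, collect one permutation p-value per tested TF motif, then adjust:
p_adjusted = multipletests([p_value], method="fdr_bh")[1]
print(observed, p_value, p_adjusted)
```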

      3. *Predicted TF functions lack empirical support, making it essential to test representative activators (e.g., Tbx16) and repressors (e.g., Zbtb16a) via CRISPRi or morpholino knock‑down and to measure target‑gene expression and H3K4me1 changes. *

We agree that independent validation of the functions of our predicted TFs on target-gene activity would be important. During this revision, we analysed the recently published scRNA-seq data of Saunders et al. (2023), which include CRISPR-mediated F0 knockouts of a couple of our predicted TFs; however, that scRNA-seq was performed at later stages (24 hpf onward) than our H3K4me1 analysis (4-12 hpf). As a result, we saw off-target genes being affected in lineages where these TFs are clearly not expressed (attached Fig. 1). We therefore did not include these results in the manuscript. In the future, we aim to systematically test the TFs predicted in our study with CRISPRi or similar experiments.

      4. *The study does not prove that H3K27me3 spreading causes silencing; embryos treated with an Ezh2 inhibitor or prc2 mutants should be re‑profiled by T‑ChIC to show loss of spreading along with gene re‑expression. *

We appreciate the suggestion: PRC2 disruption followed by T-ChIC, or another form of validation, would indeed be needed to confirm whether the H3K27me3 spreading is causally linked to the silencing of the identified target genes. However, performing this validation is complicated for several reasons. First, due to the EZH2 contribution from maternal RNA and the conflicting effects of various zygotic EZH2 mutations (depending on where the mutation occurs), the only properly validated PRC2-related mutant appears to be the maternal-zygotic mutant MZezh2, which requires germ-cell transplantation (see Rougeot et al., 2019, and San et al., 2019, for details). Second, the use of inhibitors has been described in other studies (den Broeder et al., 2020; Huang et al., 2021), but these studies do not show a validation of the H3K27me3 loss or a phenotype similar to the MZezh2 mutants, and inhibitors can cause unwanted side effects and toxicity at high doses, affecting gene-expression results. Moreover, in an attempt at validation, we performed our own trials with the EZH2 inhibitor (GSK123) and saw that this time window might be too short to see an effect within 24 hpf (attached Fig. 2). This validation is therefore a more complex endeavor beyond the scope of this study. Nevertheless, our further analysis of H3K27me3 demethylation at developmentally important genes (new Fig. 2e-f; Supplementary Table 3) adds more confidence that Polycomb repression plays an important role, and provides sufficient ground for future follow-up studies.

      *Minor concerns *

      1. *Repressive chromatin coverage is limited, so profiling an additional silencing mark such as H3K9me3 or DNA methylation would clarify cooperation with H3K27me3 during development. *

We agree that H3K27me3 alone is not sufficient to fully understand the repressive chromatin state. Extending the approach to other chromatin marks and DNA methylation will be the focus of our follow-up work.

      *2. Computational transparency is incomplete; a supplementary table listing all trimming, mapping, and peak‑calling parameters (cutadapt, STAR/hisat2, MACS2, histoneHMM, etc.) should be provided. *

As mentioned in the manuscript, we provide an open-source pre-processing pipeline, "scChICflow", that performs all of these steps (github.com/bhardwaj-lab/scChICflow). We have now also provided the configuration files in our Zenodo repository (see below); these can simply be plugged into the pipeline together with the FASTQ files from GEO to reproduce the processed dataset described in the manuscript. Additionally, we have clarified the peak-calling and post-processing steps in the manuscript.

      *3. Data‑ and code‑availability statements lack detail; the exact GEO accession release date, loom‑file contents, and a DOI‑tagged Zenodo archive of analysis scripts should be added. *

We have now publicly released the .h5ad files with raw counts, normalized counts, and complete gene- and cell-level metadata, along with signal tracks (bigWigs) and peaks, on GEO. Additionally, we have released the source datasets and notebooks (.Rmarkdown format) on Zenodo, which can be used to replicate the figures in the manuscript, and we have updated our statements under "Data and code availability".
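For readers who want to explore the release, a minimal loading sketch is shown below; the file name and the metadata columns mentioned in the comments are assumptions, not the exact identifiers used on GEO.

```python
# Minimal sketch of loading a released per-mark AnnData object for exploration.
import anndata as ad

adata = ad.read_h5ad("tchic_h3k27me3.h5ad")   # hypothetical file name
print(adata)                                  # layers with raw/normalized counts, gene/cell metadata
print(adata.obs.columns.tolist())             # e.g. timepoint, germ layer, cell-type labels
```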

      *4. Minor editorial issues remain, such as replacing "critical" with "crucial" in the Abstract, adding software version numbers to figure legends, and correcting the SAMtools reference. *

      Thank you for spotting them. We have fixed these issues.

      Reviewer #3 (Significance (Required)):

The method is technically innovative and the biological insights are valuable; however, several issues, mainly concerning experimental design, statistical rigor, and functional validation, must be addressed to solidify the conclusions.

      Thank you for your comments. We hope to have addressed your concerns in this revised version of our manuscript.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #3

      Evidence, reproducibility and clarity

      This manuscript introduces T‑ChIC, a single‑cell multi‑omics workflow that jointly profiles full‑length transcripts and histone modifications (H3K27me3 and H3K4me1) and applies it to early zebrafish embryos (4-24 hpf). The study convincingly demonstrates that chromatin-transcription coupling strengthens during gastrulation and somitogenesis, that promoter‑anchored H3K27me3 spreads in cis to enforce developmental gene silencing, and that integrating TF chromatin status with expression can predict lineage‑specific activators and repressors.

      Major concerns

      1. Independent biological replicates are absent, so the authors should process at least one additional clutch of embryos for key stages (e.g., 6 hpf and 12 hpf) with T‑ChIC and demonstrate that the resulting data match the current dataset.
2. The TF‑activity regression model uses an arbitrary R² ≥ 0.6 threshold; cross‑validated R² distributions, permutation‑based FDR control, and effect‑size confidence intervals are needed to justify this cut‑off.
      3. Predicted TF functions lack empirical support, making it essential to test representative activators (e.g., Tbx16) and repressors (e.g., Zbtb16a) via CRISPRi or morpholino knock‑down and to measure target‑gene expression and H3K4me1 changes.
      4. The study does not prove that H3K27me3 spreading causes silencing; embryos treated with an Ezh2 inhibitor or prc2 mutants should be re‑profiled by T‑ChIC to show loss of spreading along with gene re‑expression.

      Minor concerns

      1. Repressive chromatin coverage is limited, so profiling an additional silencing mark such as H3K9me3 or DNA methylation would clarify cooperation with H3K27me3 during development.
      2. Computational transparency is incomplete; a supplementary table listing all trimming, mapping, and peak‑calling parameters (cutadapt, STAR/hisat2, MACS2, histoneHMM, etc.) should be provided.
      3. Data‑ and code‑availability statements lack detail; the exact GEO accession release date, loom‑file contents, and a DOI‑tagged Zenodo archive of analysis scripts should be added.
      4. Minor editorial issues remain, such as replacing "critical" with "crucial" in the Abstract, adding software version numbers to figure legends, and correcting the SAMtools reference.

      Significance

The method is technically innovative and the biological insights are valuable; however, several issues, mainly concerning experimental design, statistical rigor, and functional validation, must be addressed to solidify the conclusions.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #2

      Evidence, reproducibility and clarity

      Joint analysis of multiple modalities in single cells will provide a comprehensive view of cell fate states. In this manuscript, Bhardwaj et al developed a single-cell multi-omics assay, T-ChIC, to simultaneously capture histone modifications and full-length transcriptome and applied the method on early embryos of zebrafish. The authors observed a decoupled relationship between the chromatin modifications and gene expression at early developmental stages. The correlation becomes stronger as development proceeds, as genes are silenced by the cis-spreading of the repressive marker H3k27me3. Overall, the work is well performed, and the results are meaningful and interesting to readers in the epigenomic and embryonic development fields. There are some concerns before the manuscript is considered for publication.

      Major concerns:

      1. A major point of this study is to understand embryo development, especially gastrulation, with the power of scMulti-Omics assay. However, the current analysis didn't focus on deciphering the biology of gastrulation, i.e., lineage-specific pioneer factors that help to reform the chromatin landscape. The majority of the data analysis is based on the temporal dimension, but not the cell-type-specific dimension, which reduces the value of the single-cell assay.
      2. The cis-spreading of H3K27me3 with developmental time is interesting. Considering H3k27me3 could mark bivalent regions, especially in pluripotent cells, there must be some regions that have lost H3k27me3 signals during development. Therefore, it's confusing that the authors didn't find these regions (30% spreading, 70% stable). The authors should explain and discuss this issue.

      Minors:

      1. The authors cited two scMulti-omics studies in the introduction, but there have been lots of single-cell multi-omics studies published recently. The authors should cite and consider them.
      2. T-ChIC seems to have been presented in a previous paper (ref 15). Therefore, Fig. 1a is unnecessary to show.
      3. It's better to show the percentage of cell numbers (30% vs 70%) for each heatmap in Figure 2C.
      4. Please double-check the citation of Fig. S4C, which may not relate to the conclusion of signal differences between lineages.
      5. Figure 4C has not been cited or mentioned in the main text. Please check.

      Significance

      Strengths: This work utilized a new single-cell multi-omics method and generated abundant epigenomics and transcriptomics datasets for cells covering multiple key developmental stages of zebrafish. Limitations: The data analysis was superficial and mainly focused on the correspondence between the two modalities. The discussion of developmental biology was limited.

      Advance: The zebrafish single-cell datasets are valuable. The T-ChIC method is new and interesting.

      The audience will be specialized and from basic research fields, such as developmental biology, epigenomics, bioinformatics, etc.

      I'm more specialized in the direction of single-cell epigenomics, gene regulation, 3D genomics, etc.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      The authors have a longstanding focus and reputation on single cell sequencing technology development and application. In this current study, the authors developed a novel single-cell multi-omic assay termed "T-ChIC" so that to jointly profile the histone modifications along with the full-length transcriptome from the same single cells, analyzed the dynamic relationship between chromatin state and gene expression during zebrafish development and cell fate determination. In general, the assay works well, the data look convincing and conclusions are beneficial to the community.

      There are several single-cell methodologies all claim to co-profile chromatin modifications and gene expression from the same individual cell, such as CoTECH, Paired-tag and others. Although T-ChIC employs pA-Mnase and IVT to obtain these modalities from single cells which are different, could the author provide some direct comparisons among all these technologies to see whether T-ChIC outperforms?

      In current study, T-ChIC profiled H3K27me3 and H3K4me1 modifications, these data look great. How about other histone modifications (eg H3K9me3 and H3K36me3) and transcription factors?

      T-ChIC can detect full length transcription from the same single cells, but in FigS3, the authors still used other published single cell transcriptomics to annotate the cell types, this seems unnecessary?

      Throughout the manuscript, the authors found some interesting dynamics between chromatin state and gene expression during embryogenesis, independent approaches should be used to validate these findings, such as IHC staining or RNA ISH?

      In Fig2 and FigS4, the authors showed H3K27me3 cis spreading during development, this looks really interesting. Is this zebrafish specific? H3K27me3 ChIP-seq or CutTag data from mouse and/or human embryos should be reanalyzed and used to compare. The authors could speculate some possible mechanisms to explain this spreading pattern?

      Significance

      The authors have a longstanding focus and reputation on single cell sequencing technology development and application. In this current study, the authors developed a novel single-cell multi-omic assay termed "T-ChIC" so that to jointly profile the histone modifications along with the full-length transcriptome from the same single cells, analyzed the dynamic relationship between chromatin state and gene expression during zebrafish development and cell fate determination. In general, the assay works well, the data look convincing and conclusions are beneficial to the community.

    1. In many countries straw is burned without any energetic use at all, causing widespread health harms from particulate matter, so alternate uses may bring advantages.

      Is India an example here? Looks like it might be appropriate - the linked paper references China.

    2. An example: for a hydrogen peaking power plant in Germany running 200 hours a year, the capped network capacity charge for withdrawing from the planned Kernnetz pipeline network of 25 EUR/(kWh/h-peak)/a [BNetzA, 2025] works out at 25 EUR/kW/a / 200 h/a = 125 EUR/MWh ~ 4 EUR/kg. Add this to a production cost near Germany of 120 EUR/MWh and a storage charge of 120 EUR/MWh [EWI, 2024], and you are quickly at 360 EUR/MWh for the fuel alone.

      Wow, storage and transport is two thirds of the cost of h2 for backup power? I had no idea it was such a big share of costs
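A quick arithmetic check of the quoted figures (all inputs come from the quote; the ~4 EUR/kg conversion assumes hydrogen's lower heating value of roughly 33.3 kWh/kg):

```python
# Recompute the figures quoted above; all inputs are taken from the annotation's source.
network_charge_eur_per_kw_a = 25.0    # capped Kernnetz capacity charge
full_load_hours = 200.0               # peaker running hours per year
transport = network_charge_eur_per_kw_a / full_load_hours * 1000  # EUR/kW/a -> EUR/MWh
production = 120.0                    # EUR/MWh production cost near Germany
storage = 120.0                       # EUR/MWh storage charge
total = transport + production + storage

print(transport)                      # 125.0 EUR/MWh (~4 EUR/kg at ~33.3 kWh/kg LHV)
print(total)                          # 365.0 EUR/MWh, consistent with "quickly at 360 EUR/MWh"
print((transport + storage) / total)  # ~0.67, i.e. roughly two thirds of the fuel cost
```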

    1. During the incidents, it took us too long to resolve the problem. In both cases, this was worsened by our security systems preventing team members from accessing the tools they needed to fix the problem, and in some cases, circular dependencies slowed us down as some internal systems also became unavailable.

      Wowie

    1. To save time, money and avoid administering the questionnaires to unwilling respondents, fish seed and feed producers were approached by phone calls to obtain their willingness to participate in the study and make appointments.

      This is a really good method for surveys.

    1. thered’ b

Connect: This kind of racially charged behavior and attitude reminds me of something similar from my cultural media studies, specifically the fixation on "the Other," where certain racial groups were objectified and their cultures were misconstrued. This may have contributed to implicit biases in the viewers of that specific content.


1. However, Think-After still performs slightly below No-Thinking, indicating that additional mechanisms, such as residual contextual coupling or reasoning-token noise, may also contribute to the performance gap

      imp

    1. An Indian computer scientist and cryptographer named Yajna Devam has claimed in an article written in 2022 that he has decoded over five hundred inscriptions or “seals” from the Indus Valley Civilization

Yajna Devam, a computer scientist from India, claimed in 2022 to have read over 500 ancient seals of the Indus Valley Civilization.

    1. In India, the imposition of colonial rule by the British East India Company in the eighteenth century and then the British Empire in the 19th had a long-lasting effect on interpretations of the Indian past.

      This quote brings out the influence of British dominance in India on the understanding of Indian history

    1. The most significant of these cultures was known as the Yamnaya. These cart-using pastoralists originated in what is now Ukraine and eastern Russia about 5,300 years ago and spread through Europe over the next 700 years.

The quote indicates that the Yamnaya culture was one of the ancient cultures in and around modern-day Ukraine and eastern Russia. They made use of cart transport, domesticated and raised livestock, and spread their influence over a wide area of Europe over many centuries.

    1. UDL aims to change the design of the environment rather than to situate the problem as a perceived deficit within the learner.

Instead of blaming students for struggling, it pushes educators to reflect on how learning environments create barriers.

1. Figure 5 shows the normalized PIM values across the three SNP selection thresholds. F1 score consistently emerges as the most influential criterion, with its importance growing as more SNPs are included

      The PIM values (Figure 5) determine the relative contribution of each selection metric to the final model, yet these appear to be point estimates from a single train/validation split. How stable are these PIM rankings across different random splits, bootstrap samples, or subsampling of the training data? If F1 score's dominance as the top weighted metric is sensitive to the specific data partition, this could affect reproducibility of MIXER-selected feature sets in independent applications. Have you characterized the variance of PIM estimates and does the ranking of metrics remain consistent?
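One way to probe this empirically is sketched below, using scikit-learn's generic permutation importance as a stand-in for MIXER's PIM computation; the data, model, and metric names are hypothetical, and the idea is simply to recompute importances over repeated splits and check how often the top metric keeps its rank.

```python
# Minimal sketch: stability of importance rankings across repeated random splits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=4, random_state=0)
feature_names = ["f1_score", "chi2", "mutual_info", "odds_ratio"]  # stand-ins for the selection metrics

rankings = []
for seed in range(20):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=seed)
    model = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
    imp = permutation_importance(model, X_va, y_va, n_repeats=10, random_state=seed)
    rankings.append(np.argsort(-imp.importances_mean))   # descending importance order

top_metric = [feature_names[r[0]] for r in rankings]
print({m: top_metric.count(m) / len(top_metric) for m in set(top_metric)})
# A single metric dominating across splits suggests the reported ranking is stable.
```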

    1. We also investigated how consistent the cell attention scores are across cross-validation splits, com-

      The paper notes that MultiMIL relies on batch-corrected embeddings to handle technical confounders, and that explicitly adding covariates like age/sex didn't improve performance. But beyond technical batch effects, patients often have heterogeneous biological states like unique inflammatory signatures, co-morbidities, disease subtypes, that are real biology but not common to the disease state. These could still be predictive in smaller cohorts without reflecting shared disease mechanisms. Does the attention mechanism have any inherent safeguard against overfitting to patient-specific biological features? In the stability analysis comparing embeddings (scVI versus scGPT), were there cases where the model consistently attended to features that were biologically real but unique to specific patients rather than the shared phenotype?

    1. Given time constraints and competing priorities i

      Question: how can this be changed? How can we prioritize more the needs of marginalized communities that exist in every community? How do we make patient care more specific and individualized?

      Shouldn't our basic medical education give us a stronger and broader foundation to inclusively treat more patients?

    2. including the perspectives of marginalized populations in competency development.

      I think this is a great idea and will guide better informed competencies.

    3. community-identified providercompetencies.

      Summary: Community-identified provider competencies include 1) being comfortable working with LGBTQI patients ("be" rather than "seem" = intentionality), 2) shared medical-decision-making (know patient's preferences), 3) avoid assumptions (provide the correct BEST care), 4) apply knowledge (know how to provide specific individualized care), 5) acknowledge and address social marginalization (destigmatize and humanize).

    1. Node.js is primarily used to build network programs such as web servers.[29] The most significant difference between Node.js and PHP is that most functions in PHP block until completion (commands execute only after previous commands finish), while Node.js functions are non-blocking (commands execute concurrently and use callbacks to signal completion or failure).[29]

Node.js differs from PHP in that it supports concurrent, non-blocking command execution.

    1. Is Fast Charging Killing the Battery? A 2-Year Test on 40 Phones
      • Experiment Methodology: Researchers tested 40 phones over two years, completing 500 charge-discharge cycles using custom automation tools to compare the effects of different charging habits [00:01:11].
      • Fast Charging vs. Slow Charging: The study found that fast charging does not significantly harm battery health. After 500 cycles, the fast-charging iPhone group lost only 0.5% more capacity than the slow-charging group, while fast-charging Android phones actually showed slightly less wear than the slow-charging group [00:03:03].
      • The 30-80% Charging Habit: Maintaining a battery level between 30% and 80% reduced wear by 2.5% to 4% compared to full 0-100% cycles. While technically better, the researchers suggested the real-world benefit is limited compared to the effort [00:03:27].
      • Long-term Stability: Storing phones at 100% charge for a week showed no measurable change in capacity, reinforcing that battery degradation is a gradual, long-term process [00:04:13].
      • Battery Replacement Guidelines: Battery life begins to noticeably shorten when health drops to 85%, and the researchers recommend replacement when health reaches 80% to maintain a good user experience [00:05:01].
      • Performance & Throttling: Battery wear does not inherently slow down the phone's peak performance, but degraded batteries cause the system to throttle (slow down) earlier at low charge levels (e.g., at 11% instead of 5%) to prevent power failure [00:05:38].
      • Conclusion: The technical differences in battery wear from various charging methods are minimal. The best approach is to charge your phone conveniently and avoid trading "mental energy" for negligible battery gains [00:04:20].
    1. Stretching pulls on the muscle fibers and results in an increased blood flow to the muscles being worked

      Do we want to discuss that dynamic stretching prior to exercise provides more benefit than static stretching based on current understanding?

    2. The tension is released from the biceps brachii and the angle of the elbow joint increases.

      It may be helpful to contrast this with relaxation, as my students initially struggle with this concept. I use the controlled descent portion of a pushup as an example of an eccentric contraction of triceps brachii or losing an arm-wrestling match as another example.

    3. much like a key unlocking a lock. This allows the myosin heads to attach to actin.

      FWIW, the analogy I like to use here is a garage door opener (troponin) pulling open the garage door (tropomyosin), allowing the car (myosin head) to enter the garage (myosin binding site on actin).

1. Limitations and Considerations While SQLite is powerful and versatile, it’s important to understand its limitations: Concurrency: SQLite uses file-based locking, which can limit concurrent write operations. It’s not suitable for high-concurrency scenarios. Network access: SQLite is designed for local storage and doesn’t provide network access out of the box. User management: SQLite doesn’t have built-in user management or access control features. Scalability: While SQLite can handle databases up to 140 terabytes, it may not be the best choice for very large datasets or high-traffic applications. Alter table limitations: SQLite has limited support for ALTER TABLE operations compared to other database systems.

Limitations are: low concurrency (not an issue unless I write to the same database from multiple applications/scripts); local only/mostly (fine too); user management (not an issue, it's just me); scalability (not suited to large amounts of data, but the premise is that I won't have lots of data); ALTER TABLE limitations (this may mean rebuilds/redesigns as things evolve?).

    2. Best Practices for Using SQLite on Mac

Make regular backups with the .dump command (you can cron-job this); optimize queries; use the right data types; add indexes; rebuild often to reclaim unused space with the VACUUM command; use prepared statements in my code.
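A minimal sketch of these practices using Python's built-in sqlite3 module; the database name and page_views schema follow the article's example, while the backup file name is my own choice.

```python
# Minimal sketch: backup, parameterized query, and VACUUM with the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect("analytics.db")
conn.execute("""CREATE TABLE IF NOT EXISTS page_views
                (id INTEGER PRIMARY KEY, page_url TEXT, view_count INTEGER)""")

# Regular backup: the Python equivalent of the CLI ".dump", easy to run from cron.
with open("analytics_backup.sql", "w") as f:
    for statement in conn.iterdump():
        f.write(statement + "\n")

# Prepared (parameterized) statement instead of string formatting.
rows = conn.execute("SELECT view_count FROM page_views WHERE page_url = ?",
                    ("/home",)).fetchall()
print(rows)

# Reclaim unused space after large deletes.
conn.execute("VACUUM")
conn.close()
```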

3. DB Browser for SQLite: A free, open-source tool that provides a user-friendly interface for creating, designing, and editing SQLite database files. TablePlus: A native macOS application that supports multiple database systems, including SQLite. SQLite Studio: Another free, open-source option with a rich feature set for managing SQLite databases.

Three Mac tools for interacting with SQLite through a GUI: DB Browser, TablePlus, and SQLite Studio. TablePlus is a native Mac app; the other two are FOSS tools.

    4. SQLite integrates seamlessly with Python. Here’s a simple script to interact with our database: import sqlite3 import pandas as pd # Connect to the database conn = sqlite3.connect('analytics.db') # Create a cursor cur = conn.cursor() # Execute a query cur.execute(""" SELECT pv.page_url, pv.view_count, COUNT(uv.id) as unique_visitors FROM page_views pv LEFT JOIN user_visits uv ON pv.id = uv.page_id GROUP BY pv.id ORDER BY pv.view_count DESC """) # Fetch all rows rows = cur.fetchall() # Create a pandas DataFrame df = pd.DataFrame(rows, columns=['Page URL', 'View Count', 'Unique Visitors']) print(df) # Close the connection conn.close()

      sqlite integrates w python. what about php?

    1. some skeletal muscles are also located throughout the body at the openings of internal tracts to control the movement of various substances

      Maybe mention muscles attached to the skin used in facial expression?

    1. writers must use three types of proofs, or rhetorical appeals. They are logos, or logical appeal; pathos, or emotional appeal; and ethos, or ethical appeal, or appeal based on the character and credibility of the author

      This introduces the main rhetorical appeals used to persuade an audience.

    2. The three most basic, yet important components of a rhetorical situation are: The purpose of writing or rhetorical aim (the goal the writer is trying to achieve or argument the writer is trying to make) The intended audience The writer/speaker

      This explains the foundational elements writers must consider to communicate effectively.

    3. Appeal to logic

Logos and kairos appeal to the logic and timeliness of a topic. These two rhetorical appeals concern how and why something makes sense and fits into the real world.

4. note that the term “rhetoric” also is used to mean someone speaking bombastic thoughts that are empty of meaning

Rhetoric has different meanings and connotations. The focus here is on its meaning as effective communication.

    5. The purpose of writing or rhetorical aim (the goal the writer is trying to achieve or argument the writer is trying to make) The intended audience The writer/speaker

All three elements are used to create a situation where rhetoric is understandable.

    1. where I frame my courses like stories, their narratives unfolding over the course of the term.

      THIS is where LMS could use upgrades to deliver more!! Make it easier and less technical to weave narratives, create pathways, add in passion, capture moments.... this is where they need some totally new tools. C'mon, it's not rocket science!

    2. It may be that what survives is a harder-to-automate version of online teaching, not a return to the days of Lotus Notes, but a new approach built around real instructor presence, with daily or biweekly check-ins.

      Again, how long have online instructors been building real instructor presence? This is NOT new.

    3. The challenge now is how to leverage real instructor presence in the design and delivery of online courses.

      That's always been the challenge. Literally. Always. AI did not change that.

    4. And when it comes to quizzes, even ones demanding written responses, AI can often outperform a rushed, underprepared human. It isn’t thinking, but it’s simulating the appearance of having thought. That’s often enough to get a B, even with an AI-aware instructor burying trip wires.

      There is no use for MC quizzes for anything other than voluntary self-assessment. They have been outmoded for a long time IMHO.

Indeed, any forward-thinking LMS really should have made other knowledge-check or material-exploration modalities available as native tools.

      Again, many thoughtful online teachers already found workarounds to create more pedagogically sound learning opportunities.

    5. These platforms weren’t designed to teach. They were designed to administer.

Absolutely true!!! Anyone who thought that the LMS was doing the teaching was way off base in the first place! I agree that the LMS concept needs major work to keep up, but that is nothing new. And administration is also still a necessary function, so it is still serviceable in that regard.

    6. It presents itself as modern, with its clickable modules and mobile-friendly interfaces, but the pedagogy behind it is stuck in another era.

      I'm not sure that I've ever thought of any LMS as modern and user-friendly, I've always thought of it as a barrier to get past and clunky tools to manage with the goal of delivering my course materials and pedagogical approach.

    7. In 21st-century online education, the medium has become the LMS. And if McLuhan was right, then the LMS itself is shaping the message of our courses more than we care to admit.

this has always been the limitation of online teaching, even before AI. Most instructors DEFER to the LMS structure because creating a pedagogically sound UX is difficult, and not what they are trained in. Really invested online instructors have always attempted to go above and beyond the limits of the LMS structure to create a more connected, pedagogically sound UX. The classes that deferred to the LMS structure were allowed to persist because nothing challenged them until AI revealed their inherent absurdity.

So essentially, this argument is not new because of AI; the LMS restrictions and unimaginative teaching have always created a problem of substandard teaching, they just have nowhere to hide now. However, you can't apply the logic that if a course is in an LMS it must therefore be substandard; sound pedagogical UX can be created in many different environments.

1. Is this departmental/major/meta-major, or is there one list across the university (Stat, etc.)? On page 22 this is listed as lower-division credit, but both of our quant classes are upper division. The transfer section makes this just look like a basic math class.

1. What is the most important characteristic of life in this city?

5. An overview of the history of the Old Testament

To understand the Bible better, it is often helpful to know something about the original historical situation of the biblical text or book in question. It is even more important, however, to be able to relate the main events of the Bible to one another, that is, to know the sequence of events and where the key figures fit into the overall structure. Unit 1 briefly summarised the message of the whole Bible from Genesis to Revelation and highlighted important events. To conclude this unit, some of these events are now presented again graphically, together with some dates and the names of important figures. As the course continues, it can be helpful to keep returning to this overview and adding further details.

The history of the Old Testament: the figure is taken, with minor changes, from Graeme Goldsworthy's The Goldsworthy Trilogy (Cumbria: Paternoster Press, 2000, p. 36) and reproduced by kind permission. Further details on biblical history can be looked up in a Bible dictionary or a corresponding treatment of Old Testament history. The dates for Abraham and Moses depend on the dating of the Exodus. The archaeological evidence for the Exodus is unfortunately not conclusive. Most scholars today prefer to date the Exodus in the 13th century (ca. 1280 to 1240 BC), but the chronological information within the Old Testament suggests a dating in the 15th century (ca. 1450 BC; cf. 1 Kings 6:1; Judges 11:26; Exodus 12:40). Given the ambiguity of the archaeological material, it seems wiser to trust the explicit statements of the biblical text and to take the earlier dating ("long chronology" in the following table) as correct.

Important dates (there are two possible datings for this early period):1
• Abraham: ca. 2165–1990 BC (long chronology) / ca. 2000–1825 BC (short chronology)
• Isaac: ca. 2065–1885 BC / ca. 1900–1720 BC
• Jacob: ca. 2000–1860 BC / ca. 1840–1700 BC
• Joseph: ca. 1910–1800 BC / ca. 1750–1640 BC
• Arrival in Egypt: ca. 1875 BC / ca. 1700 BC
• Exodus from Egypt: ca. 1450 BC / ca. 1260 BC
• Period of the judges: ca. 1380–1050 BC / ca. 1200–1050 BC

Timeline: it is sometimes difficult to relate different historical events to one another. A timeline can help to give a better overview. Look at the timeline below together with the table above to get a sense of how large a span of time we are dealing with. (Timeline: a human perspective on history.)

Exercises: What did the Flood achieve, if humanity's situation after Noah so quickly deteriorated again? What purpose was connected with it? Take the time to read Isaiah 65:17–25 carefully. Try to express in your own words what the imagery means. What message is the prophet trying to convey? Further reading: look up "Adam", "Eve" and "the Fall" in a Bible dictionary.

Reflection: How many of the problems in our world can be connected with the events of Genesis 3–11? In this unit we have thought a great deal about what is wrong with the world. What hopes were awakened in you at the same time?

1 Cf. the articles "Archaeological sites: Late Bronze Age" and "Time Charts: Biblical History from Abraham to Saul" in New Bible Atlas, Leicester: IVP 1985.

God the Lord himself will be present and will fill everything with his glorious presence.

    1. eLife Assessment

      This important study introduces an advance in multi-animal tracking by reframing identity assignment as a self-supervised contrastive representation learning problem. It eliminates the need for segments of video where all animals are simultaneously visible and individually identifiable, and significantly improves tracking speed, accuracy, and robustness with respect to occlusion. This innovation has implications beyond animal tracking, potentially connecting with advances in behavioral analysis and computer vision. The strength of support for these advances is compelling overall, although there were some remaining minor methodological concerns.

    2. Reviewer #1 (Public review):

      Summary:

      This is a strong paper that presents a clear advance in multi-animal tracking. The authors introduce an updated version of idtracker.ai that reframes identity assignment as a contrastive representation learning problem rather than a classification task requiring global fragments. This change leads to substantial gains in speed and accuracy and removes a known bottleneck in the original system. The benchmarking across species is comprehensive, the results are convincing, and the work significant.

      Strengths:

      The main strengths are the conceptual shift from classification to representation learning, the clear performance gains, and the improved robustness of the new version. Removing the need for global fragments makes the software much more flexible in practice, and the accuracy and speed improvements are well demonstrated across a diverse set of datasets. The authors' response also provides further support for the method's robustness.

      The comparison to other methods is now better documented. The authors clarify which features are used, how failures are defined, how parameters are sampled, and how accuracy is assessed against human-validated data. This helps ensure that the evaluation is fair and that readers can understand the assumptions behind the benchmarks.

      The software appears thoughtfully implemented, with GUI updates, integration with pose estimators, and tools such as idmatcher.ai for linking identities across videos. The overall presentation has been improved so that the limitations of the original idtracker.ai, the engineering optimizations, and the new contrastive formulation are more clearly separated. This makes the central ideas and contributions easier to follow.

      Weaknesses:

      I do not have major remaining criticisms. The authors have addressed my earlier concerns about the clarity and fairness of the comparison with prior methods, the benchmark design, and the memory usage analysis by adding methodological detail and clearly explaining their choices. At this point I view these aspects as transparent features of the experimental design that readers can take into account, rather than weaknesses of the work.

      Overall, this is a high-quality paper. The improvements to idtracker.ai are well justified and practically significant, and the authors' response addresses the main concerns about clarity and evaluation. The conceptual contribution, thorough empirical validation, and thoughtful software implementation make this a valuable and impactful contribution to multi-animal tracking.

    3. Reviewer #3 (Public review):

      Summary:

      The authors propose a new version of idTracker.ai for animal tracking. Specifically, they apply contrastive learning to embed cropped images of animals into a feature space where clusters correspond to individual animal identities. By doing this, they address the requirement for so-called global fragments - segments of the video, in which all entities are visible/detected at the same time. In general, the new method reduces the long tracking times from the previous versions, while also increasing the average accuracy of assigning the identity labels.
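For readers less familiar with this style of pipeline, the embed-then-cluster idea can be sketched roughly as follows (hypothetical code with made-up names, not the software's actual implementation):

```python
import torch
from sklearn.cluster import KMeans

def assign_identities(crops, encoder, n_animals):
    """Embed per-animal image crops and cluster the embeddings.

    crops:     tensor of shape (n_images, 1, H, W) with grayscale crops.
    encoder:   a trained CNN mapping each crop to an embedding vector.
    n_animals: number of individuals, i.e. number of expected clusters.
    Returns one integer identity label per crop.
    """
    with torch.no_grad():
        embeddings = encoder(crops).cpu().numpy()          # (n_images, d)
    # If training worked, each cluster corresponds to one individual.
    return KMeans(n_clusters=n_animals, n_init=10).fit_predict(embeddings)
```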

      Strengths and weaknesses:

The authors have reorganized and rewritten a substantial portion of their manuscript, which has improved the overall clarity and structure to some extent. In particular, omitting the different protocols enhanced readability. However, all technical details now sit in the appendix, which the manuscript refers to even more frequently than in the initial submission. These frequent references to the appendix - and even to appendices from previous versions - make it difficult to read and fully understand the method and the evaluations in detail. A more self-contained description of the method within the main text would be highly appreciated.

      Furthermore, the authors state that they changed their evaluation metric from accuracy to IDF1. However, throughout the manuscript they continue to refer to "accuracy" when evaluating and comparing results. It is unclear which accuracy metric was used or whether the authors are confusing the two metrics. This point needs clarification, as IDF1 is not an "accuracy" measure but rather an F1-score over identity assignments.

      The authors compare the speedups of the new version with those of the previous ones by taking the average. However, it appears that there are striking outliers in the tracking performance data (see Supplementary Table 1-4). Therefore, using the average may not be the most appropriate way to compare. The authors should consider using the median or providing more detailed statistics (e.g., boxplots) to better illustrate the distributions.
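As a toy illustration of why the mean can mislead here (made-up numbers, not values from the supplementary tables):

```python
import numpy as np

# A few typical speed-ups plus one extreme outlier, as can happen when a
# single video hits a very slow fallback in the older version.
speedups = np.array([8, 10, 12, 14, 440])
print(np.mean(speedups))    # 96.8  -> dominated by the single outlier
print(np.median(speedups))  # 12.0  -> closer to the typical case
```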

      The authors did not provide any conclusion or discussion section. Including a concise conclusion that summarizes the main findings and their implications would help to convey the message of the manuscript.

The authors report an improvement in the mean accuracy across all benchmarks from 99.49% to 99.82% (with crossings). While this represents a slight improvement, the datasets used for benchmarking seem relatively simple and already largely "solved". Therefore, the impact of this work on the field may be limited. It would be more informative to evaluate the method on more challenging datasets that include frequent occlusions, crossings, or animals with similar appearances. The accuracy reported in the main text is "without crossings" - this seems like an incomplete evaluation, especially since tracking objects that do not cross seems a straightforward task. Information is missing on why crossings are a problem and why they are dealt with separately. There are several videos with much lower tracking accuracy; explaining what the challenges of these videos are and why the method fails in such cases would help readers understand the method's usability and weak points.

    4. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary

      This is a strong paper that presents a clear advance in multi-animal tracking. The authors introduce an updated version of idtracker.ai that reframes identity assignment as a contrastive learning problem rather than a classification task requiring global fragments. This change leads to gains in speed and accuracy. The method eliminates a known bottleneck in the original system, and the benchmarking across species is comprehensive and well executed. I think the results are convincing and the work is significant.

      Strengths

      The main strengths are the conceptual shift from classification to representation learning, the clear performance gains, and the fact that the new version is more robust. Removing the need for global fragments makes the software more flexible in practice, and the accuracy and speed improvements are well demonstrated. The software appears thoughtfully implemented, with GUI updates and integration with pose estimators.

      Weaknesses

      I don't have any major criticisms, but I have identified a few points that should be addressed to improve the clarity and accuracy of the claims made in the paper.

      (1) The title begins with "New idtracker.ai," which may not age well and sounds more promotional than scientific. The strength of the work is the conceptual shift to contrastive representation learning, and it might be more helpful to emphasize that in the title rather than branding it as "new."

We considered using “Contrastive idtracker.ai”. However, readers could then think that we believe they could use either the old idtracker.ai or this contrastive version. But we want to say that the new version is the one to use, as it is better in both accuracy and tracking times. We think “New idtracker.ai” communicates better that this is the version we recommend.

      (2) Several technical points regarding the comparison between TRex (a system evaluated in the paper) and idtracker.ai should be addressed to ensure the evaluation is fair and readers are fully informed.

(2.1) Lines 158-160: The description of TRex as based on "Protocol 2 of idtracker.ai" overlooks several key additions in TRex, such as posture image normalization, tracklet subsampling, and the use of uniqueness feedback during training. These features are not acknowledged, and it's unclear whether TRex was properly configured - particularly regarding posture estimation, which appears to have been omitted but isn't discussed. Without knowing the actual parameters used to make comparisons, it's difficult to assess how the method was evaluated.

We added the information about the key additions of TRex in the section “The new idtracker.ai uses representation learning”, lines 153-157. Posture estimation in TRex was neither explicitly used nor disabled during the benchmark; we clarified this in the last paragraph of “Benchmark of accuracy and tracking time”, lines 492-495.

      (2.2) Lines 162-163: The paper implies that TRex gains speed by avoiding Protocol 3, but in practice, idtracker.ai also typically avoids using Protocol 3 due to its extremely long runtime. This part of the framing feels more like a rhetorical contrast than an informative one.

      We removed this, see new lines 153-157.

      (2.3) Lines 277-280: The contrastive loss function is written using the label l, but since it refers to a pair of images, it would be clearer and more precise to write it as l_{I,J}. This would help readers unfamiliar with contrastive learning understand the formulation more easily.

      We added this change in lines 613-620.

      (2.4) Lines 333-334: The manuscript states that TRex can fail to track certain videos, but this may be inaccurate depending on how the authors classify failures. TRex may return low uniqueness scores if training does not converge well, but this isn't equivalent to tracking failure. Moreover, the metric reported by TRex is uniqueness, not accuracy. Equating the two could mislead readers. If the authors did compare outputs to human-validated data, that should be stated more explicitly.

We observed TRex crashing without outputting any trajectories on some occasions (Appendix 1—figure 1), and this is what we labeled as “failure”. These failures happened in the most difficult videos of our benchmark, which is why we treated them the same way as idtracker.ai going to P3. We clarified this in new lines 464-469.

The accuracy measured in our benchmark is not estimated but human-validated (see section Computation of tracking accuracy in Appendix 1). Both software packages report quality estimators at the end of a tracking session (“estimated accuracy” for idtracker.ai and “uniqueness” for TRex), but these were not used in the benchmark.

      (2.5) Lines 339-341: The evaluation approach defines a "successful run" and then sums the runtime across all attempts up to that point. If success is defined as simply producing any output, this may not reflect how experienced users actually interact with the software, where parameters are iteratively refined to improve quality.

Yes, our benchmark was designed to be agnostic to the different experience levels of users. It also models users who do not inspect the trajectories in order to choose parameters again, so as not to leave room for potential subjectivity.

      (2.6) Lines 344-346: The simulation process involves sampling tracking parameters 10,000 times and selecting the first "successful" run. If parameter tuning is randomized rather than informed by expert knowledge, this could skew the results in favor of tools that require fewer or simpler adjustments. TRex relies on more tunable behavior, such as longer fragments improving training time, which this approach may not capture.

We did use the TRex parameter track_max_speed to elongate fragments for optimal tracking. Rather than randomized parameter tuning, we defined the “valid range” for this parameter so that all values in it would produce a decent fragment structure. We used this procedure to avoid penalizing methods that use more parameters.

      (2.7) Line 354 onward: TRex was evaluated using two varying parameters (threshold and track_max_speed), while idtracker.ai used only one (intensity_threshold). With a fixed number of samples, this asymmetry could bias results against TRex. In addition, users typically set these parameters based on domain knowledge rather than random exploration.

      idtracker.ai and TRex have several parameters. Some of them have a single correct value (e.g. number of animals) or the default value that the system computes is already good (e.g. minimum blob size). For a second type of parameters, the system finds a value that is in general not as good, so users need to modify them. In general, users find that for this second type of parameter there is a valid interval of possible values, from which they need to choose a single value to run the system. idtracker.ai has intensity_threshold as the only parameter of this second type and TRex has two: threshold and track_max_speed. For these parameters, choosing one value or another within the valid interval can give different tracking results. Therefore, when we model a user that wants to run the system once except if it goes to P3 (idtracker.ai) or except if it crashes (TRex), it is these parameters we sample from within the valid interval to get a different value for each run of the system. We clarify this in lines 452-469 of the section “Benchmark of accuracy and tracking time”.

Note that if we chose to simply run the old idtracker.ai (v4 or v5) or TRex a single time, this would benefit the new idtracker.ai (v6). This is because the old idtracker.ai can enter the very slow Protocol 3 and TRex can fail to track. Running the old idtracker.ai or TRex up to 5 times, until the old idtracker.ai does not use Protocol 3 and TRex does not fail, is a way to make them as good as they can be relative to the new idtracker.ai.

      (2.8) Figure 2-figure supplement 3: The memory usage comparison lacks detail. It's unclear whether RAM or VRAM was measured, whether shared or compressed memory was included, or how memory was sampled. Since both tools dynamically adjust to system resources, the relevance of this comparison is questionable without more technical detail.

      We modified the text in the caption (new Figure 1-figure supplement 2) adding the kind of memory we measured (RAM) and how we measured it. We already have a disclaimer for this plot saying that memory management depends on the machine's available resources. We agree that this is a simple analysis of the usage of computer resources.

      (3) While the authors cite several key papers on contrastive learning, they do not use the introduction or discussion to effectively situate their approach within related fields where similar strategies have been widely adopted. For example, contrastive embedding methods form the backbone of modern facial recognition and other image similarity systems, where the goal is to map images into a latent space that separates identities or classes through clustering. This connection would help emphasize the conceptual strength of the approach and align the work with well-established applications. Similarly, there is a growing literature on animal re-identification (ReID), which often involves learning identity-preserving representations across time or appearance changes. Referencing these bodies of work would help readers connect the proposed method with adjacent areas using similar ideas, and show that the authors are aware of and building on this wider context.

      We have now added a new section in Appendix 3, “Differences with previous work in contrastive/metric learning” (lines 792-841) to include references to previous work and a description of what we do differently.

(4) Some sections of the Results text (e.g., lines 48-74) read more like extended figure captions than part of the main narrative. They include detailed explanations of figure elements, sorting procedures, and video naming conventions that may be better placed in the actual figure captions or moved to supplementary notes. Streamlining this section in the main text would improve readability and help the central ideas stand out more clearly.

      Thank you for pointing this out. We have rewritten the Results, for example streamlining the old lines 48-74 (new lines 42-48)  by moving the comments about names, files and order of videos to the caption of Figure 1.

      Overall, though, this is a high-quality paper. The improvements to idtracker.ai are well justified and practically significant. Addressing the above comments will strengthen the work, particularly by clarifying the evaluation and comparisons.

      We thank the reviewer for the detailed suggestions. We believe we have taken all of them into consideration to improve the ms.

      Reviewer #2 (Public review):

      Summary:

This work introduces a new version of the state-of-the-art idtracker.ai software for tracking multiple unmarked animals. The authors aimed to solve a critical limitation of their previous software, which relied on the existence of "global fragments" (video segments where all animals are simultaneously visible) to train an identification classifier network, in addition to addressing concerns with runtime speed. To do this, the authors have both re-implemented the backend of their software in PyTorch (in addition to numerous other performance optimizations) as well as moving from a supervised classification framework to a self-supervised, contrastive representation learning approach that no longer requires global fragments to function. By defining positive training pairs as different images from the same fragment and negative pairs as images from any two co-existing fragments, the system cleverly takes advantage of partial (but high-confidence) tracklets to learn a powerful representation of animal identity without direct human supervision. Their formulation of contrastive learning is carefully thought out and comprises a series of empirically validated design choices that are both creative and technically sound. This methodological advance is significant and directly leads to the software's major strengths, including exceptional performance improvements in speed and accuracy and a newfound robustness to occlusion (even in severe cases where no global fragments can be detected). Benchmark comparisons show the new software is, on average, 44 times faster (up to 440 times faster on difficult videos) while also achieving higher accuracy across a range of species and group sizes. This new version of idtracker.ai is shown to consistently outperform the closely related TRex software (Walter & Couzin, 2021), which, together with the engineering innovations and usability enhancements (e.g., outputs convenient for downstream pose estimation), positions this tool as an advancement on the state-of-the-art for multi-animal tracking, especially for collective behavior studies.
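To make the pairing rule concrete, here is a minimal sketch of how positive and negative pairs can be drawn from pretracked fragments (hypothetical data structures, not the authors' actual code):

```python
import random

def sample_pair(fragments, coexisting):
    """Draw one training pair from heuristically tracked fragments.

    fragments:  dict fragment_id -> list of image crops of a single
                (uncrossed) animal followed over consecutive frames;
                assumed to contain at least two crops each.
    coexisting: dict fragment_id -> set of fragment_ids overlapping it
                in time (and therefore belonging to different animals).
    Returns (img_a, img_b, label): label 1 for a positive pair (same
    fragment, same animal), 0 for a negative pair (co-existing fragments).
    """
    frag_a = random.choice(list(fragments))
    if random.random() < 0.5 or not coexisting[frag_a]:
        img_a, img_b = random.sample(fragments[frag_a], 2)   # positive pair
        return img_a, img_b, 1
    frag_b = random.choice(sorted(coexisting[frag_a]))       # negative pair
    return random.choice(fragments[frag_a]), random.choice(fragments[frag_b]), 0
```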

      Despite these advances, we note a number of weaknesses and limitations that are not well addressed in the present version of this paper:

      Weaknesses

(1) The contrastive representation learning formulation. Contrastive representation learning using deep neural networks has long been used for problems in the multi-object tracking domain, popularized through ReID approaches like DML (Yi et al., 2014) and DeepReID (Li et al., 2014). More recently, contrastive learning has become more popular as an approach for scalable self-supervised representation learning for open-ended vision tasks, as exemplified by approaches like SimCLR (Chen et al., 2020), SimSiam (Chen et al., 2020), and MAE (He et al., 2021) and instantiated in foundation models for image embedding like DINOv2 (Oquab et al., 2023). Given their prevalence, it is useful to contrast the formulation of contrastive learning described here relative to these widely adopted approaches (and why this reviewer feels it is appropriate):

      (1.1) No rotations or other image augmentations are performed to generate positive examples. These are not necessary with this approach since the pairs are sampled from heuristically tracked fragments (which produces sufficient training data, though see weaknesses discussed below) and the crops are pre-aligned egocentrically (mitigating the need for rotational invariance).

      (1.2) There is no projection head in the architecture, like in SimCLR. Since classification/clustering is the only task that the system is intended to solve, the more general "nuisance" image features that this architectural detail normally affords are not necessary here.

(1.3) There is no stop gradient operator like in BYOL (Grill et al., 2020) or SimSiam. Since the heuristic tracking implicitly produces plenty of negative pairs from the fragments, there is no need to prevent representational collapse due to class asymmetry. Some care is still needed, but the authors address this well through a pair sampling strategy (discussed below).

      (1.4) Euclidean distance is used as the distance metric in the loss rather than cosine similarity as in most contrastive learning works. While cosine similarity coupled with L2-normalized unit hypersphere embeddings has proven to be a successful recipe to deal with the curse of dimensionality (with the added benefit of bounded distance limits), the authors address this through a cleverly constructed loss function that essentially allows direct control over the intra- and inter-cluster distance (D\_pos and D\_neg). This is a clever formulation that aligns well with the use of K-means for the downstream assignment step.

      No concerns here, just clarifications for readers who dig into the review. Referencing the above literature would enhance the presentation of the paper to align with the broader computer vision literature.
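For readers who want the generic shape of such a loss, a double-margin pairwise formulation (written here only as an illustration; the exact loss used in the software may differ) is

$$
\mathcal{L}(I,J) = \ell_{I,J}\,\max\big(0,\ \lVert z_I - z_J\rVert_2 - D_{\mathrm{pos}}\big)^2 + \big(1-\ell_{I,J}\big)\,\max\big(0,\ D_{\mathrm{neg}} - \lVert z_I - z_J\rVert_2\big)^2 ,
$$

where $z_I$ and $z_J$ are the embeddings of the two images and $\ell_{I,J}=1$ for positive pairs and $0$ for negative pairs, so positive pairs are pulled together until their distance reaches $D_{\mathrm{pos}}$ and negative pairs are pushed apart until their distance reaches $D_{\mathrm{neg}}$.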

      Thank you for this detailed comparison. We have now added a new section in Appendix 3, “Differences with previous work in contrastive/metric learning” (lines 792-841) to include references to previous work and a description of what we do differently, including the points raised by the reviewer.

      (2) Network architecture for image feature extraction backbone. As most of the computations that drive up processing time happen in the network backbone, the authors explored a variety of architectures to assess speed, accuracy, and memory requirements. They land on ResNet18 due to its empirically determined performance. While the experiments that support this choice are solid, the rationale behind the architecture selection is somewhat weak. The authors state that: "We tested 23 networks from 8 different families of state-of-the-art convolutional neural network architectures, selected for their compatibility with consumer-grade GPUs and ability to handle small input images (20 × 20 to 100 × 100 pixels) typical in collective animal behavior videos."

(2.1) Most modern architectures have variants that are compatible with consumer-grade GPUs. This is true of, for example, HRNet (Wang et al., 2019), ViT (Dosovitskiy et al., 2020), SwinT (Liu et al., 2021), or ConvNeXt (Liu et al., 2022), all of which report single GPU training and fast runtime speeds through lightweight configuration or subsequent variants, e.g., MobileViT (Mehta et al., 2021). The authors may consider revising that statement or providing additional support for that claim (e.g., empirical experiments) given that these have been reported to outperform ResNet18 across tasks.

      Following the recommendation of the reviewer, we tested the architectures SwinT, ConvNeXt and ViT. We found out that none of them outperformed ResNet18 since they all showed a slower learning curve. This would result in higher tracking times. These tests are now included in the section “Network architecture” (lines 550-611).

      (2.2) The compatibility of different architectures with small image sizes is configurable. Most convolutional architectures can be readily adapted to work with smaller image sizes, including 20x20 crops. With their default configuration, they lose feature map resolution through repeated pooling and downsampling steps, but this can be readily mitigated by swapping out standard convolutions with dilated convolutions and/or by setting the stride of pooling layers to 1, preserving feature map resolution across blocks. While these are fairly straightforward modifications (and are even compatible with using pretrained weights), an even more trivial approach is to pad and/or resize the crops to the default image size, which is likely to improve accuracy at a possibly minimal memory and runtime cost. These techniques may even improve the performance with the architectures that the authors did test out.

      The only two tested architectures that require a minimum image size are AlexNet and DenseNet. DenseNet proved to underperform ResNet18 in the videos where the images are sufficiently large. We have tested AlexNet with padded images to see that it also performs worse than ResNet18 (see Appendix 3—figure 1).

      We also tested the initialization of ResNet18 with pre-trained weights from ImageNet (in Appendix 3—figure 2) and it proved to bring no benefit to the training speed (added in lines 591-592).

      (2.3) The authors do not report whether the architecture experiments were done with pretrained or randomly initialized weights.

      We adapted the text to make it clear that the networks are always randomly initialized (lines 591-592, lines 608-609 and the captions of Appendix 3—figure 1 and 2).

      (2.4) The authors do not report some details about their ResNet18 design, specifically whether a global pooling layer is used and whether the output fully connected layer has any activation function. Additionally, they do not report the version of ResNet18 employed here, namely, whether the BatchNorm and ReLU are applied after (v1) or before (v2) the conv layers in the residual path.

We use ResNet18 v1 with no activation function or bias in its last layer (this has been clarified in lines 606-608). Also, by design, ResNet has a global average pool right before the last fully connected layer, which we did not remove. In response to the reviewer, ResNet18 v2 was tested and its performance is the same as that of v1 (see Appendix 3—figure 1 and lines 590-591).

      (3) Pair sampling strategy. The authors devised a clever approach for sampling positive and negative pairs that is tailored to the nature of the formulation. First, since the positive and negative labels are derived from the co-existence of pretracked fragments, selection has to be done at the level of fragments rather than individual images. This would not be the case if one of the newer approaches for contrastive learning were employed, but it serves as a strength here (assuming that fragment generation/first pass heuristic tracking is achievable and reliable in the dataset). Second, a clever weighted sampling scheme assigns sampling weights to the fragments that are designed to balance "exploration and exploitation". They weigh samples both by fragment length and by the loss associated with that fragment to bias towards different and more difficult examples.

(3.1) The formulation described here resembles and uses elements of online hard example mining (Shrivastava et al., 2016), hard negative sampling (Robinson et al., 2020), and curriculum learning more broadly. The authors may consider referencing this literature (particularly Robinson et al., 2020) for inspiration and to inform the interpretation of the current empirical results on positive/negative balancing.

Following this recommendation, we added references to hard negative mining in the new section “Differences with previous work in contrastive/metric learning”, lines 792-841. Regarding curriculum learning, even though in spirit it has parallels with our sampling method, in the sense that the training of the network is guided, we believe our approach is more similar to an exploration-exploitation paradigm.
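As a rough illustration of this exploration-exploitation idea (a simplified sketch with a made-up weighting, not the exact scheme described in Appendix 3), fragments could be sampled with probability proportional to a mix of their length and their running loss:

```python
import numpy as np

def sample_fragment(lengths, losses, rng=np.random.default_rng()):
    """Pick a fragment index for the next training pair.

    lengths: array of fragment lengths (favors fragments with more data).
    losses:  array of running losses per fragment (favors hard examples).
    The weighting below is only illustrative; the actual scheme may
    combine these quantities differently.
    """
    weights = np.asarray(lengths, dtype=float) * (1.0 + np.asarray(losses, dtype=float))
    return rng.choice(len(weights), p=weights / weights.sum())
```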

      (4) Speed and accuracy improvements. The authors report considerable improvements in speed and accuracy of the new idTracker (v6) over the original idTracker (v4?) and TRex. It's a bit unclear, however, which of these are attributable to the engineering optimizations (v5?) versus the representation learning formulation.

      (4.1) Why is there an improvement in accuracy in idTracker v5 (L77-81)? This is described as a port to PyTorch and improvements largely related to the memory and data loading efficiency. This is particularly notable given that the progression went from 97.52% (v4; original) to 99.58% (v5; engineering enhancements) to 99.92% (v6; representation learning), i.e., most of the new improvement in accuracy owes to the "optimizations" which are not the central emphasis of the systematic evaluations reported in this paper.

V5 was a two-year effort designed to improve the time efficiency of v4. It was also a surprise to us that accuracy was higher, but that likely comes from the fact that the replaced v4 code contained some small bug(s). The improvements in v5 are retained in v6 (contrastive learning), and v6 has higher accuracy and shorter tracking times. What accounts for this extra accuracy and the shorter tracking times in v6 is contrastive learning.

      (4.2) What about the speed improvements? Relative to the original (v4), the authors report average speed-ups of 13.6x in v5 and 44x in v6. Presumably, the drastic speed-up in v6 comes from a lower Protocol 2 failure rate, but v6 is not evaluated in Figure 2 - figure supplement 2.

Idtracker.ai v5 runs an optimized Protocol 2 and, sometimes, Protocol 3, but v6 runs neither of them. While P2 is still present in v6 as a fallback protocol for when contrastive learning fails, in our v6 benchmark P2 was never needed. So the v6 speedup comes from replacing both P2 and P3 with the contrastive algorithm.

      (5) Robustness to occlusion. A major innovation enabled by the contrastive representation learning approach is the ability to tolerate the absence of a global fragment (contiguous frames where all animals are visible) by requiring only co-existing pairs of fragments owing to the paired sampling formulation. While this removes a major limitation of the previous versions of idtracker.ai, its evaluation could be strengthened. The authors describe an ablation experiment where an arc of the arena is masked out to assess the accuracy under artificially difficult conditions. They find that the v6 works robustly up to significant proportions of occlusions, even when doing so eliminates global fragments.

      (5.1) The experiment setup needs to be more carefully described.

      (5.1.1) What does the masking procedure entail? Are the pixels masked out in the original video or are detections removed after segmentation and first pass tracking is done?

      The mask is defined as a region of interest in the software. This means that it is applied at the segmentation step where the video frame is converted to a foreground-background binary image. The region of interest is applied here, converting to background all pixels not inside of it. We clarified this in the newly added section Occlusion tests, lines 240-244.
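In effect (hypothetical code, not the software's internals), applying the region of interest at segmentation amounts to:

```python
import numpy as np

def apply_roi(foreground, roi_mask):
    """Keep only foreground pixels inside the region of interest.

    foreground: boolean array (H, W), True where the thresholded frame
                was classified as animal.
    roi_mask:   boolean array (H, W), True inside the region of interest.
    Pixels outside the ROI are sent back to background, so animals inside
    the masked-out arc simply stop being detected.
    """
    return np.logical_and(foreground, roi_mask)
```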

      (5.1.2) What happens at the boundary of the mask? (Partial segmentation masks would throw off the centroids, and doing it after original segmentation does not realistically model the conditions of entering an occlusion area.)

Animals at the boundaries of the mask are partially detected, which can shift the location of their detected centroid. That is why, when computing the ground-truth accuracy for these videos, only the ground-truth centroids that were at least 15 pixels away from the mask were considered. We clarified this in the newly added section Occlusion tests, lines 248-251.

      (5.1.3) Are fragments still linked for animals that enter and then exit the mask area?

No artificial fragment linking was added in these videos. Detected fragments are linked in the usual way. If an animal moves into the masked region, it disappears and the fragment ends. We clarified this in the newly added section Occlusion tests, lines 245-247.

      (5.1.4) How is the evaluation done? Is it computed with or without the masked region detections?

      The groundtruth used to validate these videos contains the positions of all animals at all times. But only the positions outside the mask at each frame were considered to compute the tracking accuracy. We clarified this in the newly added section Occlusion tests, lines 248-251.

      (5.2) The circular masking is perhaps not the most appropriate for the mouse data, which is collected in a rectangular arena.

We wanted to show the same proof of concept in different videos. For that reason, we covered the arena with a mask parametrized by an angle in all of them. In the rectangular arena, the circular masking uses an external circle, so the rectangle is still covered as a function of the same angle.

      (5.3) The number of co-existing fragments, which seems to be the main determinant of performance that the authors derive from this experiment, should be reported for these experiments. In particular, a "number of co-existing fragments" vs accuracy plot would support the use of the 0.25(N-1) heuristic and would be especially informative for users seeking to optimize experimental and cage design. Additionally, the number of co-existing fragments can be artificially reduced in other ways other than a fixed occlusion, including random dropout, which would disambiguate it from potential allocentric positional confounds (particularly relevant in arenas where egocentric pose is correlated with allocentric position).

      We included the requested analysis about the fragment connectivity in Figure 3-figure supplement 1. We agree that there can be additional ways of reducing co-existing fragments, but we think the occlusion tests have the additional value that there are many real experiments similar to this test.

      (6) Robustness to imaging conditions. The authors state that "the new idtracker.ai can work well with lower resolutions, blur and video compression, and with inhomogeneous light (Figure 2 - figure supplement 4)." (L156). Despite this claim, there are no speed or accuracy results reported for the artificially corrupted data, only examples of these image manipulations in the supplementary figure.

      We added this information in the same image, new Figure 1 - figure supplement 3.

      (7) Robustness across longitudinal or multi-session experiments. The authors reference idmatcher.ai as a compatible tool for this use case (matching identities across sessions or long-term monitoring across chunked videos), however, no performance data is presented to support its usage. This is relevant as the innovations described here may interact with this setting. While deep metric learning and contrastive learning for ReID were originally motivated by these types of problems (especially individuals leaving and entering the FOV), it is not clear that the current formulation is ideally suited for this use case. Namely, the design decisions described in point 1 of this review are at times at odds with the idea of learning generalizable representations owing to the feature extractor backbone (less scalable), low-dimensional embedding size (less representational capacity), and Euclidean distance metric without hypersphere embedding (possible sensitivity to drift). It's possible that data to support point 6 can mitigate these concerns through empirical results on variations in illumination, but a stronger experiment would be to artificially split up a longer video into shorter segments and evaluate how generalizable and stable the representations learned in one segment are across contiguous ("longitudinal") or discontiguous ("multi-session") segments.

We have now added a test to prove the reliability of idmatcher.ai in v6. In this test, 14 videos are taken from the benchmark and split into two non-overlapping parts (with a 200-frame gap in between). idmatcher.ai is run between the two parts and achieves 100% identity-matching accuracy across all of them (see section “Validity of idmatcher.ai in the new idtracker.ai”, lines 969-1008).

      We thank the reviewer for the detailed suggestions. We believe we have taken all of them into consideration to improve the ms.

      Reviewer #3 (Public review):

      Summary

      The authors propose a new version of idTracker.ai for animal tracking. Specifically, they apply contrastive learning to embed cropped images of animals into a feature space where clusters correspond to individual animal identities.

      Strengths

      By doing this, the new software alleviates the requirement for so-called global fragments - segments of the video, in which all entities are visible/detected at the same time - which was necessary in the previous version of the method. In general, the new method reduces the tracking time compared to the previous versions, while also increasing the average accuracy of assigning the identity labels.

      Weaknesses

      The general impression of the paper is that, in its current form, it is difficult to disentangle the old from the new method and understand the method in detail. The manuscript would benefit from a major reorganization and rewriting of its parts. There are also certain concerns about the accuracy metric and reducing the computational time.

      We have made the following modifications in the presentation:

      (1) We have added section tiles to the main text so it is clearer what tracking system we are referring to. For example, we now have sections “Limitation of the original idtracker.ai”, “Optimizing idtracker.ai without changes in the learning method” and “The new idtracker.ai uses representation learning”.

      (2) We have completely rewritten all the text of the ms until we start with contrastive learning. Old L20-89 is now L20-L66, much shorter and easier to read.

      (3) We have rewritten the first 3 paragraphs in the section “The new idtracker.ai uses representation learning” (lines 68-92).

      (4) We now expanded Appendix 3 to discuss the details of our approach  (lines 539-897).  It discusses in detail the steps of the algorithm, the network architecture, the loss function, the sampling strategy, the clustering and identity assignment, and the stopping criteria in training

      (5) To cite previous work in detail and explain what we do differently, we have now added in Appendix 3 the new section “Differences with previous work in contrastive/metric learning” (lines 792-841).

      Regarding accuracy metrics, we have replaced our accuracy metric with the standard metric IDF1. IDF1 is the standard metric that is applied to systems in which the goal is to maintain consistent identities across time. See also the section in Appendix 1 "Computation of tracking accuracy” (lines 414-436) explaining IDF1 and why this is an appropriate metric for our goal.
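For reference, IDF1 (Ristani et al., 2016) is the F1-score over identity-matched detections,

$$
\mathrm{IDF1} = \frac{2\,\mathrm{IDTP}}{2\,\mathrm{IDTP} + \mathrm{IDFP} + \mathrm{IDFN}},
$$

where IDTP, IDFP and IDFN are the identity true positives, false positives and false negatives under the optimal global matching between predicted and ground-truth identities.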

Using IDF1 we obtain slightly higher accuracies for the idtracker.ai systems. This is the comparison of mean accuracy over our entire benchmark between our previous accuracy score and the new one, for the full trajectories:

      v4:   97.42% -> 98.24%

      v5:   99.41% -> 99.49%

      v6:   99.74% -> 99.82%

      trex: 97.89% -> 97.89%

      We thank the reviewer for the suggestions about presentation and about the use of more standard metrics.

      Recommendations for the authors:

      Reviewer #2 (Recommendations for the authors):

      (1) Figure 1a: A graphical legend inset would make it more readable since there are multiple colors, line styles, and connecting lines to parse out.

      Following this recommendation, we added a graphical legend in the old Figure 1 (new Figure 2).

      (2) L46: "have images" → "has images".

      We applied this correction. Line 35.

      (3) L52: "videos start with a letter for the species (z,**f**,m)", but "d" is used for fly videos.

      We applied this correction in the caption of Figure 1.

      (4) L62: "with Protocol 3 a two-step process" → "with Protocol 3 being a two-step process".

      We rewrote this paragraph without mentioning Protocol 3, lines 37-41.

      (5) L82-89: This is the main statement of the problems that are being addressed here (speed and relaxing the need for global fragments). This could be moved up, emphasized, and made clearer without the long preamble and results on the engineering optimizations in v5. This lack of linearity in the narrative is also evident in the fact that after Figure 1a is cited, inline citations skip to Figure 2 before returning to Figure 1 once the contrastive learning is introduced.

We have rewritten all the text up to the contrastive learning section (old lines 20-89 are now lines 20-66). The text is shorter, more linear and easier to read.

      (6) L114: "pairs until the distance D_{pos}" → "pairs until the distance approximates D_{pos}".

We rewrote it as “pairs until the distance 𝐷pos (or 𝐷neg) is reached” in line 107.

      (7) L570: Missing a right parenthesis in the equation.

      We no longer have this equation in the ms.

      (8) L705: "In order to identify fragments we, not only need" → "In order to identify fragments, we not only need".

      We applied this correction, Line 775.

      (9) L819: "probably distribution" → "probability distribution".

      We applied this correction, Line 776.

      (10) L833: "produced the best decrease the time required" → "produced the best decrease of the time required".

      We applied this correction, Line 746.

      Reviewer #3 (Recommendations for the authors):

      (1) We recommend rewriting and restructuring the manuscript. The paper includes a detailed explanation of the previous approaches (idTracker and idTracker.ai) and their limitations. In contrast, the description of the proposed method is short and unstructured, which makes it difficult to distinguish between the old and new methods as well as to understand the proposed method in general. Here are a few examples illustrating the problem. 

      (1.1) Only in line 90 do the authors start to describe the work done in this manuscript. The previous 3 pages list limitations of the original method.

We have now divided the main text into sections, so it is clearer what the previous method is (“Limitation of the original idtracker.ai”, lines 28-51), what new optimization we did of this method (“Optimizing idtracker.ai without changes in the learning method”, lines 52-66) and what the new contrastive approach, which also includes these optimizations, is (“The new idtracker.ai uses representation learning”, lines 66-164). Also, following your suggestion, the new text has been streamlined up to the contrastive section. In the new writing the three sections are 25, 15 and 99 lines. The most detailed section is the one on the new system; the other two are needed as reference, to describe which problem we are solving and the extra new optimizations.

      (1.2) The new method does not have a distinct name, and it is hard to follow which idtracker.ai is a specific part of the text referring to. Not naming the new method makes it difficult to understand.

      We use the name new idtracker.ai (v6) so it becomes the current default version. v5 is now obsolete, as well as v4. And from the point of view of the end user, no new name is needed since v6 is just an evolution of the same software they have been using. Also, we added sections in the main text to clarify the ideas in there and indicate the version of idtracker.ai we are referring to.

      (1.3) There are "Protocol 2" and "Protocol 3" mixed with various versions of the software scattered throughout the text, which makes it hard to follow. There should be some systematic naming of approaches and a listing of results introduced.

Following this recommendation, we no longer talk about the specific protocols of the old version of idtracker.ai in the main text. We rewrote the explanation of these versions in a clearer and more straightforward way, lines 29-36.

      (2) To this end, the authors leave some important concepts either underexplained or only referenced indirectly via prior work. For example, the explanation of how the fragments are created (line 15) is only explained by the "video structure" and the algorithm that is responsible for resolving the identities during crossings is not detailed (see lines 46-47, 149-150). Including summaries of these elements would improve the paper's clarity and accessibility.

      We listed the specific sections from our previous publication where the reader can find information about the entire tracking pipeline (lines 539-549). This way, we keep the ms clear and focused on the new identification algorithm while indicating where to find such information.

      (3) Accuracy metrics are not clear. In line 319, the authors define it as based on "proportion of errors in the trajectory". This proportion is not explained. How is the error calculated if a trajectory is lost or there are identity swaps? Multi-object tracking has a range of accuracy metrics that account for such events but none of those are used by the authors. Estimating metrics that are common for MOT literature, for example, IDF1, MOTA, and MOTP, would allow for better method performance understanding and comparison.

In the new ms, we replaced our accuracy metric with the standard metric IDF1. IDF1 is the standard metric applied to systems whose goal is to maintain consistent identities across time. See also the section in Appendix 1 "Computation of tracking accuracy” explaining why IDF1, and not MOTA or MOTP, is the appropriate metric for a system whose aim is correct tracking by identification over time. See lines 416-436.

Using IDF1 we obtain slightly higher accuracies for the idtracker.ai systems. This is the comparison of mean accuracy between our previous accuracy score and the new one, for the full trajectories:

      v4:   97.42% -> 98.24%

      v5:   99.41% -> 99.49%

      v6:   99.74% -> 99.82%

      trex: 97.89% -> 97.89%

      (4) Additionally, the authors distinguish between tracking with and without crossings, but do not provide statistics on the frequency of crossings per video. It is also unclear how the crossings are considered for the final output. Including information such as the frame rate of the videos would help to better understand the temporal resolution and the differences between consecutive frames of the videos.

We added this information in Appendix 1, “Benchmark of accuracy and tracking time”, lines 445-451. The framerate in our benchmark videos ranges from 25 to 60 fps (average of 37 fps). On average, 2.6% of the blobs are crossings (1.1% for zebrafish, 0.7% for drosophila, 9.4% for mice).

      (5) In the description of the dataset used for evaluation (lines 349-365), the authors describe the random sampling of parameter values for each tracking run. However, it is unclear whether the same values were used across methods. Without this clarification, comparisons between the proposed method, older versions, and TRex might be biased due to lucky parameter combinations. In addition, the ranges from which the values were randomly sampled were also not described.

      Only one parameter is shared between idtracker.ai and TRex: intensity_threshold (in idtracker.ai) and threshold (in TRex). Both are conceptually equivalent but differ in their numerical values since they affect different algorithms. V4, v5, and TRex each required the same process of independent expert visual inspection of the segmentation to select the valid value range. Since versions 5 and 6 use exactly the same segmentation algorithm, they share the same parameter ranges.

      All the ranges of valid values used in our benchmark are public here https://drive.google.com/drive/folders/1tFxdtFUudl02ICS99vYKrZLeF28TiYpZ as stated in the section “Data availability”, lines 227-228.

      (6) Lines 122-123, Figure 1c. "batches" - is an imprecise metric of training time as there is no information about the batch size.

      We clarified the Figure caption, new Figure 2c.

      (7) Line 145 - "we run some steps... For example..." leaves the method description somewhat unclear. It would help if you could provide more details about how the assignments are carried out and which metrics are being used.

      Following this recommendation, we listed the specific sections from our previous publication where the reader can find information about the entire tracking pipeline (lines 539-549). This way, we keep the ms clear and focused on the new identification algorithm while indicating where to find such information.

      (8) Figure 3. How is tracking accuracy assessed with occlusions? Are the individuals correctly recognized when they reappear from the occluded area?

      The groundtruth for this video contains the positions of all animals at all times. Only the groundtruth points inside the region of interest are taken into account when computing the accuracy. When the tracking reaches high accuracy, it means that animals are successfully relabeled every time they enter the non-masked region. Note that this software works all the time by identification of animals, so crossings and occlusion are treated the same way. What is new here is that the occlusions are so large that there are no global fragments. We clarified this in the new section “Occlusion tests” in Methods, lines 239-251.

      (9) Lines 185-187 this part of the sentence is not clear.

      We rewrote this part in a clearer way, lines 180-182.

      (10) The authors also highlight the improved runtime performance. However, they do not provide a detailed breakdown of the time spent on each component of the tracking/training pipeline. A timing breakdown would help to compare the training duration with the other components. For example, the calculation of the Silhouette Score alone can be time-consuming and could be a bottleneck in the training process. Including this information would provide a clearer picture of the overall efficiency of the method.

We measured that the training of ResNet takes, on average over our benchmark, 47% of the tracking time (we added this information in line 551 of the section “Network Architecture”). In this training stage the bottleneck is the network forward and backward pass, limited by GPU performance. All other processes happening during training have been deeply optimized and parallelized when needed, so their contribution to the training time is minimal. Apart from the training, we also measured that 24.4% of the total tracking time is spent reading and segmenting the video files and 11.1% processing the identification images and detecting crossings.

      (11) An important part of the computational cost is related to model training. It would be interesting to test whether a model trained on one video of a specific animal type (e.g., zebrafish_5) generalizes to another video of the same type (e.g., zebrafish_7). This would assess the model's generalizability across different videos of the same species and spare a lot of compute. Alternatively, instead of training a model from scratch for each video, the authors could also consider training a base model on a superset of images from different videos and then fine-tuning it with a lower learning rate for each specific video. This could potentially save time and resources while still achieving good performance.

Already before v6, there was the possibility for the user to start training the identification network by copying the final weights from another tracking session. This knowledge transfer feature is still present in v6 and it still decreases the training times significantly. This information has been added in Appendix 4, lines 906-909.

      We have already begun working on the interesting idea of a general base model but it brings some complex challenges. It could be a very useful new feature for future idtracker.ai releases.

      We thank the reviewer for the many suggestions. We have implemented all of them.

    1. Injury, exercise, and other activities lead to remodeling, but even without injury or exercise, about 5 to 10 percent of the skeleton is remodeled annually just by destroying old bone and replacing it with fresh bone.

      A discussion of Wolff's Law seems appropriate here, especially as we earlier referenced changes in bone density due to force placed upon them.

    2. the arms (i.e., humerus, ulna, and radius) and legs (i.e., femur, tibia, fibula), as well as in the fingers

      We might want to say upper and lower extremity rather than arms and legs here to avoid later confusion when we discuss muscle action (thigh vs leg, for example).

Founders don’t mind paying for value. They mind paying forever for the same output.

      Change to Founders are happy to pay for value – they just don’t want to pay forever for the same outcome.

    2. SaaS

      Change from: I’m not a big SaaS company. To: I’m not a big SaaS company (that's software as a service if you're not familiar with the acronym).

They just needed: A clean, Google-compliant product reviews feed / Something that works with their existing review app, no migration needed / And a fair price

      Change to:

      • A clean, Google-compliant product reviews feed
      • Something that works with their existing review app, no migration needed; and
      • A fair price

"And" doesn't work because the lead-in clause is "They just needed:", which translates to: "They just needed: ... And a fair price."

    4. I wrote a full breakdown of the most common reasons Google product ratings don’t appear — from feed formatting to Merchant Center issues, here.

Instead of "here", write your sentence with a keyword in mind and link the keyword back to the post.

    1. eLife Assessment

This important study provides a detailed analysis of the transcriptional landscape of the mouse hippocampus in the context of various physiological states. The main conclusions have solid support: that most transcriptional targets are generally stable, with notable exceptions in the dentate gyrus and with regard to circadian changes. There are some weaknesses, and addressing them would improve the manuscript.

    2. Reviewer #1 (Public review):

      Olmstead et al. present a single-cell nuclear sequencing dataset that interrogates how hippocampal gene expression changes in response to distinct physiological stimuli and across circadian time. The authors perform single-nucleus RNA sequencing on mouse hippocampal tissue after (1) kainic acid-induced seizure, (2) exposure to an enriched environment, and (3) at multiple circadian phases.

      The dataset is rigorously collected, and a major strength is the use of the previously established ABC taxonomy from Yao et al. (2023) to define cell types. The authors further show that this taxonomy is largely independent of activity-driven transcriptional programs. Using these annotations, they examine activity-regulated gene expression across neuronal and glial subclasses. They identify ZT12, corresponding to the transition from the light to the dark period, as transcriptionally distinct from other circadian time points, and show that this pattern is conserved across many cell types. Finally, they test how circadian phase influences activity-dependent gene expression by exposing mice to an enriched environment at different times of day, and report no significant interaction between circadian phase and enriched environment exposure.

      A crucial consideration for users of this dataset is the potential confounding effect between circadian phase and locomotor activity. This is particularly relevant because dentate gyrus activity is strongly modulated by locomotion. The authors acknowledge this issue in the Discussion and provide useful guidance for how to interpret their findings, considering this confound.

Taken together, this dataset represents a useful resource for the neuroscience community, particularly for investigators interested in how novel experience and circadian phase shape activity-related and immediate early gene expression in the hippocampus.

    3. Reviewer #2 (Public review):

      This manuscript presents the ACT-DEPP dataset, a comprehensive single-nucleus RNA-sequencing atlas of the mouse hippocampus that examines how activity-dependent and circadian transcriptional programs intersect. The dataset spans multiple experimental conditions and circadian time points, clarifying how cell-type identity relates to transcriptional state. In particular, the authors compare stimulus-evoked activity programs (environmental enrichment and kainate-induced seizures) with circadian phase-dependent transcriptional oscillations. They also identify a transcriptional inflection point near ZT12 and argue that immediate early gene (IEG) induction is broadly maintained across circadian phases, with minimal ZT-dependent modulation.

      Strengths:

      The study is ambitious in scope and data volume, and outlines the data-processing and atlas-registration workflows. The side-by-side treatment of stimulus paradigms and ZT sampling provides a coherent framework for parsing state (activity) from phase (circadian) across diverse neuronal and non-neuronal classes. Several findings - especially the ZT12 "inflection" and the differential sensitivity of pathways across subclasses - are intriguing.

      Weaknesses:

      (1) The authors acknowledge, but do not adequately address, the fundamental confound between circadian phase and spontaneous locomotor activity. The assertion that these represent "orthogonal regulatory axes," based on largely non-overlapping DEGs, may be overstated. The absence of behavioral monitoring during baseline is a major limitation.

      (2) The statement "Thus, novel experiences and seizures trigger categorically distinct transcriptional responses - with respect to both magnitude and specific genes - in these hippocampal subregions" is overstated, given the data presented. Figure 2A-B shows that approximately one-third of EE-induced DEGs at 30 minutes overlap with KA DEGs, and this overlap increases substantially at 6 hours in CA1 (where EE and KA responses become "fully shared"). This suggests the responses are quantitatively different rather than "categorically distinct."

      (3) In Figure 4B, "active cells" are defined as those with ≥3 of 15 IEGs above the 90th percentile, with thresholds apparently calibrated in CA1. Because baseline expression distributions differ across subclasses, this rule can bias activation rates across cell types.

      (4) Few genes show significant ZT × stimulus (EE or seizure) interactions, and those that do are concentrated in neuronal populations. Given unequal nucleus counts and biological replicates across subclasses, the analyses may be underpowered to detect small effects.

      (5) In Figure 6I, J, the relationship between the highlighted pathways/functions and circadian phase is not yet made explicit.

      (6) Lines 276-280: The enrichment of lncRNAs at ZT12 in CA1 is intriguing but underdeveloped. What are these lncRNAs, and what might they regulate?

      Overall, most descriptive conclusions are supported (e.g., broad phase-robustness of classical IEGs; an inflection near ZT12). Claims about the separability/orthogonality of activity vs circadian programs, and about categorical distinctions between EE and KA responses, would benefit from more conservative wording or additional analyses to rule out behavioral and power-related alternatives.

    1. is the third layer of the skin directly below the dermis

      Do we want to split hairs and note that the hypodermis is technically not part of the skin, but supports the skin and attaches it to the muscle beneath?