    1. Author response:

      Public Reviews:

      Reviewer #1 (Public Review): 

      Summary: 

      Dr. Santamaria's group previously utilized antigen-specific nanomedicines to induce immune tolerance in treating autoimmune diseases. The success of this therapeutic strategy has been linked to expanded regulatory mechanisms, particularly the role of T-regulatory type-1 (TR1) cells. However, the differentiation program of TR1 cells remained largely unclear. Previous work from the authors suggested that TR1 cells originate from T follicular helper (TFH) cells. In the current study, the authors aimed to investigate the epigenetic mechanisms underlying the transdifferentiation of TFH cells into IL-10-producing TR1 cells. Specifically, they sought to determine whether this process involves extensive chromatin remodeling or is driven by pre-existing epigenetic modifications. Their goal was to understand the transcriptional and epigenetic changes facilitating this transition and to explore the potential therapeutic implications of manipulating this pathway.

      The authors successfully demonstrated that the TFH-to-TR1 transdifferentiation process is driven by pre-existing epigenetic modifications rather than extensive new chromatin remodeling. The comprehensive transcriptional and epigenetic analyses provide robust evidence supporting their conclusions. 

      Strengths: 

      (1) The study employs a broad range of bulk and single-cell transcriptional and epigenetic tools, including RNA-seq, ATAC-seq, ChIP-seq, and DNA methylation analysis. This comprehensive approach provides a detailed examination of the epigenetic landscape during the TFH-to-TR1 transition. 

      (2) The use of high-throughput sequencing technologies and sophisticated bioinformatics analyses strengthens the foundation for the conclusions drawn. 

      (3) The data generated can serve as a valuable resource for the scientific community, offering insights into the epigenetic regulation of T-cell plasticity. 

      (4) The findings have significant implications for developing new therapeutic strategies for autoimmune diseases, making the research highly relevant and impactful. 

      We thank the reviewer for providing constructive feedback on the manuscript.

      Weaknesses: 

      (1) While the scope of this study lies in transcriptional and epigenetic analyses, the conclusions need to be validated by future functional analyses. 

      We fully agree with the reviewer’s suggestion. The current study provides a foundational understanding of how the epigenetic landscape of TFH cells evolves as they transdifferentiate into TR1 progeny in response to chronic ligation of cognate TCRs using pMHCII-NPs. Functional validation is indeed the focus of our current studies, where we are carrying out extensive perturbation studies of the TFH-TR1 transdifferentiation pathway in conditional transcription factor gene knock-out mice. In these ongoing studies, genes coding for a series of transcription factors expressed along the TFH-TR1 pathway are selectively knocked out in T cells, to ascertain (i) the specific roles of key transcription factors in the various cell conversion events and transcriptional changes that take place along the TFH-TR1 cell axis; (ii) the roles that such transcription factors play in the chromatin re-modeling events that underpin the TFH-TR1 transdifferentiation process; and (iii) the effects of transcription factor gene deletion on phenotypic and functional readouts of TFH and regulatory T cell function.

      (2) This study successfully identified key transcription factors and epigenetic marks. How these factors mechanistically drive chromatin closure and gene expression changes during the TFH-to-TR1 transition requires further investigation. 

      Agreed. Please see our response to point #1 above.  

      (3) The study provides a snapshot of the epigenetic landscape. Future dynamic analysis may offer more insights into the progression and stability of the observed changes. 

      We have previously shown that the first event in the pMHCII-NP-induced TFH-TR1 transdifferentiation process involves proliferation of cognate TFH cells in the splenic germinal centers. This event is followed by immediate conversion of the proliferated TFH cells into transitional and terminally differentiated TR1 subsets. Although the snapshot provided by the single-cell studies reported herein documents the simultaneous presence of the different subsets composing the TFH-TR1 cell pathway upon termination of treatment, the transdifferentiation process itself is extremely fast, such that proliferated TFH cells already transdifferentiate into TR1 cells after a single pMHCII-NP dose (Sole et al., 2023a). This makes it extremely challenging to pursue dynamic experiments. Notwithstanding this caveat, ongoing studies of cognate T cells post treatment withdrawal, coupled with single-cell studies of the TFH-TR1 pathway in transcription factor gene knockout mice exhibiting perturbed transdifferentiation processes, are likely to shed light on the progression and stability of the epigenetic changes reported herein.

      We will revise the manuscript accordingly, to address the three concerns raised by the reviewer, in the context of the ongoing studies mentioned above. 

      Reviewer #2 (Public Review): 

      Summary: 

      This study, based on their previous findings that TFH cells can be converted into TR1 cells, conducted a highly detailed and comprehensive epigenetic investigation to address whether TR1 differentiation from TFH cells is driven by epigenetic changes. Their evidence indicated that the downregulation of TFH-related genes during the TFH-to-TR1 transition depends on chromatin closure, while the upregulation of TR1-related genes does not depend on epigenetic changes.

      Strengths: 

      (1) A significant advantage of their approach lies in its detailed and comprehensive assessment of epigenetics. Their analysis covers open chromatin regions, histone modifications, and DNA methylation, using both single-cell and bulk techniques to validate the findings. Observations from different epigenetic perspectives mutually supported each other, lending greater credibility to the conclusions. This study effectively demonstrates that (1) the TFH-to-TR1 differentiation process is associated with massive closure of OCRs, and (2) the TR1-poised epigenome of TFH cells is a key enabler of this transdifferentiation process. Considering the extensive changes in epigenetic patterns involved in other CD4+ T lineage commitment processes, the similarity between TFH and TR1 cells in their epigenetics is intriguing.

      (2) They performed correlation analysis to assess the association between "pMHC-NP-induced epigenetic change" and "gene expression change in TR1". Also, they have made their raw data publicly available, providing a comprehensive epigenomic database of pMHC-NP-induced TR1 cells. This will serve as a valuable reference for future research.

      We thank the reviewer for his/her constructive feedback and suggestions for improvement of the manuscript.

      Weaknesses: 

      (1) A major limitation is that this study heavily relies on a premise from the previous studies performed by the same group on pMHC-NP-induced T-cell responses. This significantly limits the relevance of their conclusion to a broader perspective. Specifically, differential OCRs between Tet+ and naïve T cells were limited to only 821, as compared to 10,919 differential OCRs between KLH-TFH and naïve T cells (Figure 2A), indicating that the precursors and T cell clonotypes that responded to pMHC-NP were extremely limited. This limitation should be clearly discussed in the Discussion section. 

      We agree that this study focuses on a very specific, previously unrecognized pathway discovered in mice treated with pMHCII-NPs. Despite this apparently narrow focus, we now have evidence that this is a naturally occurring pathway that also develops in other contexts (i.e., in mice that have not been treated with pMHCII-NPs). Furthermore, this pathway affords a unique opportunity to further understand the transcriptional and epigenetic mechanisms underpinning T cell plasticity; the findings reported here can help guide and inform not only upcoming translational studies of pMHCII-NP therapy in humans, but also other research in this area. We will discuss the limitations and opportunities that this research provides more explicitly in a revised manuscript to provide a clearer context for the scope and applicability of our findings.

      We acknowledge that, in the bulk ATAC-seq studies, the differences in the number of OCRs found in tetramer+ cells or KLH-induced TFH cells vs. naïve T cells may be influenced by the intrinsic oligoclonality of the tetramer+ T cell pool arising in response to repeated pMHCII-NP challenge (Sole et al., 2023a). However, we note that scATAC-seq studies of the tetramer+ T cell pool found similar differences between the oligoclonal tetramer+ TFH subpool and its (also oligoclonal) tetramer+ TR1 counterparts (i.e., a substantially higher number of OCRs in the former vs. the latter relative to naïve T cells). This will be clarified in a revised version of the manuscript.

      (2) This article uses peak calling to determine whether a region has histone modifications, claiming that the regions with histone modifications in TFH and TR1 cells are highly similar. However, they did not discuss the differences in histone modification intensities measured by ChIP-seq. For example, as shown in Figure 6C, the IL10 H3K27ac modification in Tet+ cells showed significantly higher intensity than in KLH-TFH cells, yet in this article it may be categorized as "possessing the same histone modification region". Addressing this would strengthen their conclusions.

      We appreciate your suggestion to discuss differences in histone modification intensities as measured by ChIP-seq. However, we respectfully disagree with the reviewer’s interpretation of these data.

      Our study primarily focuses on the identification of epigenetic similarities and differences between pMHCII-NP-induced tetramer+ cells and KLH-induced TFH cells relative to naive T cells. The outcome of direct comparisons of histone deposition (ChIP-seq) between these cell types is summarized in the lower part of Figure 4B and detailed in Datasheet 5. Throughout this section, we report the number of differentially enriched regions, their overlap with OCRs shared between tetramer+ TFH and tetramer+ TR1 cells based on scATAC-seq data, and the associated genes. Clearly, most of the epigenetic modifications that TR1 cells inherit from TFH cells had already been acquired by TFH cells upon differentiation from naïve T cell precursors. 

      Regarding the specific point raised by the reviewer on differences in the intensity of the H3K27ac peaks linked to Il10 in Figure 6C, we note that the genomic tracks shown are illustrative. However, thorough statistical analyses involving the signal background for each condition and p-value adjustment did not support differential enrichment of H3K27ac deposition around the Il10 gene between pMHCII-NP-induced tetramer+ T cells and KLH-induced TFH cells.

      We acknowledge that peak calling alone does not account for intensity variations of histone modifications. However, our analysis includes both qualitative and quantitative assessments to ensure robust conclusions. We will edit the relevant sections of the manuscript to clarify these points and better communicate our methodology and findings to the readers.
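
      For illustration, the "p-value adjustment" referred to above is a standard false-discovery-rate procedure. The sketch below shows a minimal Benjamini-Hochberg adjustment over per-region p-values; it is a generic example (the p-values are made up), not our exact differential-enrichment pipeline.

      ```python
      import numpy as np

      def benjamini_hochberg(pvals):
          """Benjamini-Hochberg FDR adjustment; returns q-values in input order."""
          p = np.asarray(pvals, dtype=float)
          n = p.size
          order = np.argsort(p)                        # indices sorting p ascending
          scaled = p[order] * n / np.arange(1, n + 1)  # p_(i) * n / i
          q_sorted = np.minimum.accumulate(scaled[::-1])[::-1]  # keep q monotone
          q = np.empty(n)
          q[order] = np.clip(q_sorted, 0.0, 1.0)
          return q

      # Hypothetical per-region p-values from a differential-enrichment test:
      print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20]))
      ```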

      (3) Last, the key findings of this study are clear and convincing, but some results and figures are unnecessary and redundant. Some results are largely a mere confirmation of the relationship between histone marks and chromatin status. I propose to reduce the number of figures and text that are largely confirmatory. Overall, I feel this paper is too long for its current contents. 

      We understand this reviewer's concern about the potential redundancy of some results and figures. The goal of including these analyses is to provide a comprehensive understanding of the intricate relationships between epigenetic features and transcriptomic differences. We believe that a detailed examination of these relationships is crucial for several reasons: (i) the breadth of the data allows for a thorough exploration of the relationships between histone marks, chromatin accessibility and transcriptional differences. This comprehensive approach helps ensure that our conclusions are robust and well-supported by the data; (ii) some of the results that may appear confirmatory are, in fact, important for validating and reinforcing the consistency of our findings across different contexts. These details are intended to provide a nuanced understanding of the interactions between epigenetic features and gene expression; and (iii) by presenting a detailed analysis, we aim to offer a solid foundation for future research in this area. The extensive datasets that are presented in this paper will serve as a valuable resource for others in the field who may seek to build upon our findings.

      That said, we will carefully review the manuscript to identify and streamline any elements that may be overly redundant. We will consider consolidating figures and refining the text to ensure that the paper remains concise and focused while retaining the depth of analysis that we believe is essential.

    2. eLife assessment

      This study provides important information on pre-existing epigenetic modification in T cell plasticity. The evidence supporting the conclusions is compelling, supported by comprehensive transcriptional and epigenetic analyses. The work will be of interest to immunologists and colleagues studying transcriptional regulation.

    3. Reviewer #1 (Public Review):

      Summary:

      Dr. Santamaria's group previously utilized antigen-specific nanomedicines to induce immune tolerance in treating autoimmune diseases. The success of this therapeutic strategy has been linked to expanded regulatory mechanisms, particularly the role of T-regulatory type-1 (TR1) cells. However, the differentiation program of TR1 cells remained largely unclear. Previous work from the authors suggested that TR1 cells originate from T follicular helper (TFH) cells. In the current study, the authors aimed to investigate the epigenetic mechanisms underlying the transdifferentiation of TFH cells into IL-10-producing TR1 cells. Specifically, they sought to determine whether this process involves extensive chromatin remodeling or is driven by pre-existing epigenetic modifications. Their goal was to understand the transcriptional and epigenetic changes facilitating this transition and to explore the potential therapeutic implications of manipulating this pathway.

      The authors successfully demonstrated that the TFH-to-TR1 transdifferentiation process is driven by pre-existing epigenetic modifications rather than extensive new chromatin remodeling. The comprehensive transcriptional and epigenetic analyses provide robust evidence supporting their conclusions.

      Strengths:

      (1) The study employs a broad range of bulk and single-cell transcriptional and epigenetic tools, including RNA-seq, ATAC-seq, ChIP-seq, and DNA methylation analysis. This comprehensive approach provides a detailed examination of the epigenetic landscape during the TFH-to-TR1 transition.

      (2) The use of high-throughput sequencing technologies and sophisticated bioinformatics analyses strengthens the foundation for the conclusions drawn.

      (3) The data generated can serve as a valuable resource for the scientific community, offering insights into the epigenetic regulation of T-cell plasticity.

      (4) The findings have significant implications for developing new therapeutic strategies for autoimmune diseases, making the research highly relevant and impactful.

      Weaknesses:

      (1) While the scope of this study lies in transcriptional and epigenetic analyses, the conclusions need to be validated by future functional analyses.

      (2) This study successfully identified key transcription factors and epigenetic marks. How these factors mechanistically drive chromatin closure and gene expression changes during the TFH-to-TR1 transition requires further investigation.

      (3) The study provides a snapshot of the epigenetic landscape. Future dynamic analysis may offer more insights into the progression and stability of the observed changes.

    4. Reviewer #2 (Public Review):

      Summary:

      This study, based on their previous findings that TFH cells can be converted into TR1 cells, conducted a highly detailed and comprehensive epigenetic investigation to address whether TR1 differentiation from TFH cells is driven by epigenetic changes. Their evidence indicated that the downregulation of TFH-related genes during the TFH-to-TR1 transition depends on chromatin closure, while the upregulation of TR1-related genes does not depend on epigenetic changes.

      Strengths:

      A significant advantage of their approach lies in its detailed and comprehensive assessment of epigenetics. Their analysis covers open chromatin regions, histone modifications, and DNA methylation, using both single-cell and bulk techniques to validate the findings. Observations from different epigenetic perspectives mutually supported each other, lending greater credibility to the conclusions. This study effectively demonstrates that (1) the TFH-to-TR1 differentiation process is associated with massive closure of OCRs, and (2) the TR1-poised epigenome of TFH cells is a key enabler of this transdifferentiation process. Considering the extensive changes in epigenetic patterns involved in other CD4+ T lineage commitment processes, the similarity between TFH and TR1 cells in their epigenetics is intriguing.

      They performed correlation analysis to assess the association between "pMHC-NP-induced epigenetic change" and "gene expression change in TR1". Also, they have made their raw data publicly available, providing a comprehensive epigenomic database of pMHC-NP-induced TR1 cells. This will serve as a valuable reference for future research.

      Weaknesses:

      A major limitation is that this study heavily relies on a premise from the previous studies performed by the same group on pMHC-NP-induced T-cell responses. This significantly limits the relevance of their conclusion to a broader perspective. Specifically, differential OCRs between Tet+ and naïve T cells were limited to only 821, as compared to 10,919 differential OCRs between KLH-TFH and naïve T cells (Figure 2A), indicating that the precursors and T cell clonotypes that responded to pMHC-NP were extremely limited. This limitation should be clearly discussed in the Discussion section.

      This article uses peak calling to determine whether a region has histone modifications, claiming that the regions with histone modifications in TFH and TR1 cells are highly similar. However, they did not discuss the differences in histone modification intensities measured by ChIP-seq. For example, as shown in Figure 6C, the IL10 H3K27ac modification in Tet+ cells showed significantly higher intensity than in KLH-TFH cells, yet in this article it may be categorized as "possessing the same histone modification region". Addressing this would strengthen their conclusions.

      Last, the key findings of this study are clear and convincing, but some results and figures are unnecessary and redundant. Some results are largely a mere confirmation of the relationship between histone marks and chromatin status. I propose to reduce the number of figures and text that are largely confirmatory. Overall, I feel this paper is too long for its current contents.

    1. eLife assessment

      This study employed a comprehensive approach to examining how the MT+ region integrates into a complex cognition system in mediating human visuo-spatial intelligence. While the findings are useful, the experimental evidence is incomplete and the study designs, hypotheses, and data analyses need to be improved. The work will be of interest to researchers in psychology, cognitive science, and neuroscience.

    2. Reviewer #1 (Public Review):

      Summary:

      The study of human intelligence has been the focus of cognitive neuroscience research, and finding objective behavioral or neural indicators of intelligence has been an ongoing problem for scientists for many years. Melnick et al. (2013) found for the first time that the phenomenon of spatial suppression in motion perception predicts an individual's IQ score. This is because IQ is likely associated with the ability to suppress irrelevant information. In this study, a high-resolution MRS approach was used to test this theory. In this paper, the phenomenon of spatial suppression in motion perception was found to be correlated with the visuo-spatial subtest of gF, while both variables were also correlated with the GABA concentration of MT+ in the human brain. In addition, there was no significant relationship with the excitatory transmitter Glu. At the same time, SI was also associated with functional connectivity (FC) between MT+ and several frontal cortex regions.

      Strengths:

      (1) 7T high-resolution MRS is used.

      (2) This study combines the behavioral tests, MRS, and fMRI.

      Weaknesses:

      Major:

      (1) In Melnick (2013), IQ scores were measured by the full set of the WAIS-III, including all subtests. However, this study only used the visuo-spatial domain of gF. I wonder why only the visuo-spatial subtest was used and not the full WAIS-III? I am wondering whether other subtests were conducted; if so, please include those results as well to allow comprehensive comparisons with Melnick (2013).

      Minor:

      (1) Table 1 and Table Supplementary 1-3 contain many correlation results. But what are the main points of these values? Which values do the authors want to highlight? Why are only p-values shown with significance symbols in Table Supplementary 2?

      (2) Line 27, it is unclear to me what is "the canonical theory".

      (3) Throughout the paper, the authors use "MT+", I would suggest using "hMT+" to indicate the human MT complex, and to be consistent with the human fMRI literature.

      (4) At the beginning of the results section, I suggest including the total number of subjects. It is confusing what "31/36 in MT+, and 28/36 in V1" means.

      (5) Line 138, "This finding supports the hypothesis that motion perception is associated with neural activity in MT+ area". This sentence is strange because it is a well-established finding in numerous human fMRI papers. I think the authors should be more specific about what this finding implies.

      (6) There are no unit labels for the x- and y-axes in Figure 1. The only unit I see is for Conc, mmol per kg wet weight.

      (7) Although the correlations are not significant in Figure supplements 2 & 3, please also include the correlation line and 95% confidence interval, and report the r values and p values (i.e., in a format similar to Figure 1C).

      (8) There is no need to separate different correlation figures into Figure supplementary 1-4. They can be combined into the same figure.

      (9) Line 213, as far as I know, the study (Melnick et al., 2013) is a psychophysical study and did not provide evidence that the spatial suppression effect is associated with MT+.

      (10) At the beginning of the results, I suggest providing more details about the motion discrimination tasks and the measurement of the BDT.

      (11) Please include the absolute duration thresholds of the small and large sizes of all subjects in Figure 1.

      (12) Figure 5 is too small. The items in plots a and b are barely visible.

    3. Reviewer #3 (Public Review):

      (1) Throughout the manuscript, hMT+ connectivity with the frontal cortex has been treated as an a priori hypothesis/space. However, there is no such motivation or background literature mentioned in the Introduction. Can the authors clarify the necessity of functional connectivity? In other words, can BOLD activity of hMT+ in the localizer task substitute for functional connectivity between hMT+ and the frontal cortex?

      (2) There is an obvious mismatch between the in-text description and the content of the figure:

      "In contrast, there was no correlation between BDT and GABA levels in V1 voxels (figure supplement 1a). Further, we show that SI significantly correlates with GABA levels in hMT+ voxels (r = 0.44, P = 0.01, n = 31, Figure 3d). In contrast, no significant correlation between SI and GABA concentrations in V1 voxels was observed (figure supplement 1b)."

      (3) The authors' response to my previous round of review indicated that the "V1 ROIs" covered a substantial amount of V3 (32%). Therefore, it would no longer be appropriate to call these "V1 ROIs". I'd suggest renaming them "Early Visual Cortex (EVC) ROIs" to be more accurate. Can the authors also justify why they chose the left hemisphere for the visual intelligence task, which is typically believed to be right-lateralized?

      (4) "Small threshold" and "large threshold" are neither standard descriptions, and it is unclear what "small threshold" refers to in the following figure caption. Additionally, the unit (ms) is confusing. Does it refer to timing?

      "(f) Peason's correlation showing significant negative correlations between BDT and small threshold."

      (5) In the response letter, the authors mentioned incorporating the neural efficiency hypothesis in the Introduction, but the revised Introduction does not contain such information.

    4. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      The study of human intelligence has been the focus of cognitive neuroscience research, and finding some objective behavioral or neural indicators of intelligence has been an ongoing problem for scientists for many years. Melnick et al, 2013 found for the first time that the phenomenon of spatial suppression in motion perception predicts an individual's IQ score. This is because IQ is likely associated with the ability to suppress irrelevant information. In this study, a high-resolution MRS approach was used to test this theory. In this paper, the phenomenon of spatial suppression in motion perception was found to be correlated with the visuo-spatial subtest of gF, while both variables were also correlated with the GABA concentration of MT+ in the human brain. In addition, there was no significant relationship with the excitatory transmitter Glu. At the same time, SI was also associated with MT+ and several frontal cortex FCs.

      Strengths:

      (1) 7T high-resolution MRS is used.

      (2) This study combines the behavioral tests, MRS, and fMRI.

      Weaknesses:

      (1) In the intro, it seems to me that the multiple-demand (MD) regions are the key in this study. However, I didn't see any results associated with the MD regions. Did I miss something?

      Thank you to the reviewer for pointing this out. After careful consideration, we agree with your point of view. According to the results of Melnick (2013), motion surround suppression (SI) and the duration thresholds of small and large gratings, which reflect hMT+ functionality, are correlated with the Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed indices, with correlation coefficients of 0.69, 0.47, 0.49, and 0.50, respectively. This suggests that hMT+ does have the potential to become a core of the MD system. However, because our results only address "the GABA-ergic inhibition in human MT predicts visuo-spatial intelligence mediated through the frontal cortex", they are not yet sufficient to prove that hMT+ is the core node of the MD system, and we have adjusted the explanatory logic of the article. Briefly, we emphasize the de-redundancy function of hMT+ in visuo-spatial intelligence and the improvement of information processing efficiency, while weakening the claimed significance of hMT+ in the MD system.

      (2) How was the sample size determined? Is it sufficient?

      Thank you to the reviewer for pointing this out. We used G*Power to determine our sample size. In the study by Melnick (2013), they reported a medium effect between SI and the Perceptual Reasoning sub-ability (r = 0.47). We used this r value as the correlation coefficient (ρ H1), setting the power at the commonly used threshold of 0.8 and the alpha error probability at 0.05. The required sample size is calculated to be 26. This ensures that our study has reasonable power to yield valid statistical results. Furthermore, compared to earlier within-subject studies such as Schallmo et al.'s 2018 research, which used 22 datasets to examine GABA levels in MT+ and the early visual cortex (EVC), our study includes a sufficient dataset.
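
      For transparency, the sketch below reproduces this calculation with the Fisher z approximation; the function and its one-tailed default are our own illustrative choices, and G*Power's exact bivariate-normal method yields the slightly smaller n = 26.

      ```python
      import math
      from scipy.stats import norm

      def n_for_correlation(r, alpha=0.05, power=0.8, two_sided=False):
          """Approximate sample size to detect a correlation r (Fisher z method)."""
          z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
          z_beta = norm.ppf(power)
          effect = math.atanh(r)  # Fisher z-transform of the target correlation
          return math.ceil(((z_alpha + z_beta) / effect) ** 2 + 3)

      print(n_for_correlation(0.47))  # 27, close to G*Power's exact n = 26
      ```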

      (3) In Schallmo et al. (eLife, 2018), there was no correlation between GABA concentration and SI. How can the different results be justified here?

      We thank the reviewer for pointing this out. There are several differences between the two studies:

      a. While the earlier study by Schallmo et al. (2018) employed 3T MRS, we utilize 7T MRS, enhancing our ability to detect and measure GABA with greater accuracy.

      b. Schallmo et al. (2018) chose to use bilateral hMT+ as the MRS measurement region, whereas we used the left hMT+. The reasons why we focus on the left hMT+ are described in our response to Reviewer #1, point (6). Briefly, the use of left MT/V5 as a target was motivated by studies demonstrating that left MT/V5 TMS is more effective at causing perceptual effects (Tadin et al., 2011).

      c. The MRS voxel in Schallmo et al. (2018) was 3 cm isotropic, whereas we applied a 2 cm isotropic voxel. This helps us locate hMT+ more precisely and exclude more white-matter signal.

      (4) Basically, this study contains data for SI, BDT, GABA in MT+ and V1, and Glu in MT+ and V1 - 6 measurements in all. There should be 6 × 5 / 2 = 15 pairwise correlations. However, not all of these results are included in Figure 1 and Figure supplements 1-3. I understand that it is not necessary to include all figures, but I suggest reporting all values in one table.

      We thank the reviewer for the good suggestion; we have made a correlation matrix reporting all values in Figure Supplementary 9.
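
      For readers who wish to reproduce such a matrix, a minimal sketch is shown below; the column labels and random placeholder data are hypothetical stand-ins for the real per-subject measurements.

      ```python
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      cols = ["SI", "BDT", "GABA_hMT+", "GABA_V1", "Glu_hMT+", "Glu_V1"]
      df = pd.DataFrame(rng.normal(size=(31, 6)), columns=cols)  # placeholder data
      corr = df.corr(method="pearson")  # 6x6 matrix: 15 unique pairwise values
      print(corr.round(2))
      ```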

      (5) In Melnick (2013), the IQ scores were measured by the full set of the WAIS-III, including all subtests. However, this study only used the visuo-spatial domain of gF. I wonder why only the visuo-spatial subtest was used and not the full WAIS-III?

      We thank the reviewer for pointing this out. The decision was informed by Melnick's findings, which indicated high correlations between surround suppression (SI) and the Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed indexes, with correlation coefficients of 0.69, 0.47, 0.49, and 0.50, respectively. It is well established that the hMT+ region of the brain is a sensory cortex involved in visual perception processing (3D perception). Furthermore, motion surround suppression (SI), a specific function of hMT+, aligns closely with this region's activities. Given this context, the Perceptual Reasoning sub-ability was deemed to have the clearest mechanism for further exploration. Consequently, we selected the most representative subtest of Perceptual Reasoning, the Block Design Test, which primarily assesses 3D visual intelligence.

      (6) In the functional connectivity part, there is no explanation as to why only the left MT+ was set to the seed region. What is the problem with the right MT+?

      We thank the reviewer for pointing this out. The main reason is that our MRS ROI is the left hMT+, and we wanted the ROIs of the different modalities to be consistent with each other. The use of left MT/V5 as a target was motivated by studies demonstrating that left MT/V5 TMS is more effective at causing perceptual effects (Tadin et al., 2011).

      (7) In Melnick (2013), the authors also reported the correlation between IQ and absolute duration thresholds of small and large stimuli. Please include these analyses as well.

      We thank the reviewer for the good advice. Including these results does help researchers compare Melnick's results with ours. We have added these figures in the revised version (Figure 3f, g).

      Reviewer #2 (Public Review):

      Summary:

      Recent studies have identified specific regions within the occipito-temporal cortex as part of a broader fronto-parietal, domain-general, or "multiple-demand" (MD) network that mediates fluid intelligence (gF). According to the abstract, the authors aim to explore the mechanistic roles of these occipito-temporal regions by examining GABA/glutamate concentrations. However, the introduction presents a different rationale: investigating whether area MT+, specifically, could be a core component of the MD network.

      Strengths:

      The authors provide evidence that GABA concentrations in MT+ and its functional connectivity with frontal areas significantly correlate with visuo-spatial intelligence performance. Additionally, serial mediation analysis suggests that inhibitory mechanisms in MT+ contribute to individual differences in a specific subtest of the Wechsler Adult Intelligence Scale, which assesses visuo-spatial aspects of gF.

      Weaknesses:

      (1) While the findings are compelling and the analyses robust, the study's rationale and interpretations need strengthening. For instance, Assem et al. (2020) previously defined the core and extended MD networks, identifying the occipito-temporal regions as TE1m and TE1p, which are located more rostrally than MT+. Area MT+ might overlap with brain regions identified previously in Fedorenko et al. (2013); however, those authors attribute these activations to attentional enhancement of visual representations in the more difficult conditions of their tasks. For the aforementioned reasons, it is unclear why the authors chose MT+ as their focus. A stronger rationale for this selection is necessary, along with how it fits with the core/extended MD networks.

      We really appreciate the reviewer's opinions. The reasons why we focus on hMT+ are as follows. According to the results of Melnick (2013), motion surround suppression (SI) and the duration thresholds of small and large gratings, which reflect hMT+ functionality, are correlated with the Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed indices, with high correlation coefficients of 0.69, 0.47, 0.49, and 0.50, respectively. In addition, in Fedorenko et al. (2013), the averaged MD activity region appears to overlap with hMT+. Based on these findings, we assumed that hMT+ has the potential to become a core of the MD system.

      (2) Moreover, although the study links MT+ inhibitory mechanisms to a visuo-spatial component of gF, this evidence alone may not suffice to position MT+ as a new core of the MD network. The MD network's definition typically encompasses a range of cognitive domains, including working memory, mathematics, language, and relational reasoning. Therefore, the claim that MT+ represents a new core of MD needs to be supported by more comprehensive evidence.

      We thank the reviewer for pointing this out. After careful consideration, we agree with your point of view. Because our results only address visuo-spatial intelligence, they are not yet sufficient to prove that hMT+ is the core node of the MD system. We will adjust the explanatory logic of the article, that is, emphasize the de-redundancy function of hMT+ in visuo-spatial intelligence and the improvement of information processing efficiency, while weakening the claimed significance of hMT+ in the MD system.

      Reviewer #3 (Public Review):

      Summary:

      This manuscript aims to understand the role of GABA-ergic inhibition in the human MT+ region in predicting visuo-spatial intelligence through a combination of behavioral measures, fMRI (for functional connectivity measurement), and MRS (for GABA/glutamate concentration measurement). While this is a commendable goal, it becomes apparent that the authors lack a fundamental understanding of vision, intelligence, or the relevant literature. As a result, the execution of the research is less coherent, dampening the enthusiasm of this review.

      Strengths:

      (1) Comprehensive Approach: The study adopts a multi-level approach, i.e., neurochemical analysis of GABA levels, functional connectivity, and behavioral measures to provide a holistic understanding of the relationship between GABA-ergic inhibition and visuo-spatial intelligence.

      (2) Sophisticated Techniques: The use of ultra-high field magnetic resonance spectroscopy (MRS) technology for measuring GABA and glutamate concentrations in the MT+ region is a recent development.

      Weaknesses:

      Study Design and Hypothesis

      (1) The central hypothesis of the manuscript posits that "3D visuo-spatial intelligence (the performance of BDT) might be predicted by the inhibitory and/or excitation mechanisms in MT+ and the integrative functions connecting MT+ with the frontal cortex." However, several issues arise:

      (1.1) The Suppression Index depicted in Figure 1a, labeled as the "behavior circle," appears irrelevant to the central hypothesis.

      We thank the reviewer for pointing this out. In our study, the inhibitory mechanisms in hMT+ are conceptualized through two models: the neurotransmitter model and the behavioral model. The Suppression Index is essential for elucidating the local inhibitory mechanisms within the behavioral model. However, we acknowledge that our initial presentation in the introduction may not have clearly articulated our hypothesis, potentially leading to misunderstandings. We have revised the introduction to better clarify these connections and ensure the relevance of the Suppression Index is comprehensively understood.

      (1.2) The construct of 3D visuo-spatial intelligence, operationalized as the performance in the Block Design task, is inconsistently treated as another behavioral task throughout the manuscript, leading to confusion.

      We thank the reviewer for pointing this out. We acknowledge that our manuscript may have inconsistently presented this construct across different sections, causing confusion. To address this, we have ensured a consistent description of 3D visuo-spatial intelligence in both the introduction and the discussion sections, but we retained 'Block Design task score' within the results section to make clear to readers which subtest we used.

      (1.3) The schematics in Figure 1a and Figure 6 appear too high-level to be falsifiable. It is suggested that the authors formulate specific and testable hypotheses and preregister them before data collection.

      We thank the reviewer for pointing this out. We have revised Figure 1a to make it less abstract and more logical. As for Figure 6, the schematic represents our theoretical framework of how hMT+ contributes to 3D visuo-spatial intelligence; we believe the elements within this framework are grounded in related theories and supported by evidence discussed in our results and discussion sections, making them specific and testable.

      (2) Central to the hypothesis and design of the manuscript is a misinterpretation of a prior study by Melnick et al. (2013). While the original study identified a strong correlation between WAIS (IQ) and the Suppression Index (SI), the current manuscript erroneously asserts a specific relationship between the Block Design test (from the WAIS) and SI. It should be noted that in the original paper, the WAIS comprises the Similarities, Vocabulary, Block Design, and Matrix Reasoning tests in Study 1, while the complete WAIS is used in Study 2. Did the authors conduct any WAIS subtests other than the Block Design task?

      Thank you for pointing this out. Reviewer #1 also asked this question, so we copy the answer here: "The decision was informed by Melnick's findings, which indicated high correlations between surround suppression (SI) and the Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed indexes, with correlation coefficients of 0.69, 0.47, 0.49, and 0.50, respectively. It is well established that the hMT+ region of the brain is a sensory cortex involved in visual perception processing (3D perception). Furthermore, motion surround suppression (SI), a specific function of hMT+, aligns closely with this region's activities. Given this context, the Perceptual Reasoning sub-ability was deemed to have the clearest mechanism for further exploration. Consequently, we selected the most representative subtest of Perceptual Reasoning, the Block Design Test, which primarily assesses 3D visual intelligence."

      (3) Additionally, there are numerous misleading references and unsubstantiated claims throughout the manuscript. As an example of a misleading reference: "the human MT ... a key region in the multiple representations of sensory flows (including optic, tactile, and auditory flows) (Bedny et al., 2010; Ricciardi et al., 2007); this ideally suits it to be a new MD core." The two references in this sentence make claims about plasticity in the congenitally blind, who have experienced sensory deprivation from birth, which is not really relevant to the proposal that hMT+ is a new MD core in healthy volunteers.

      Thank you for pointing this out. We have carefully read the corresponding references, considered the corresponding theories, and agree with these comments. Because our results only address "the GABA-ergic inhibition in human MT predicts visuo-spatial intelligence mediated by reverberation with the frontal cortex", they are not yet sufficient to prove that hMT+ is the core node of the MD system. We will adjust the explanatory logic of the article, that is, emphasize the de-redundancy function of hMT+ in visuo-spatial intelligence and the improvement of information processing efficiency, while weakening the claimed significance of hMT+ in the MD system. In addition, regarding the potential central role of hMT+ in the MD system, we agree with your view that research on hMT+ as a multisensory integration hub mainly concerns developmental processes. Meanwhile, in adults, the MST region of hMT+ is considered a multisensory integration area for visual and vestibular inputs, which potentially supports a role for hMT+ in multitask, multisensory systems (Gu et al., J. Neurosci, 26(1), 73–85, 2006; Fetsch et al., Nat. Neurosci, 15, 146–154, 2012). Further research could explore how other intelligence sub-abilities, such as working memory and language comprehension, are facilitated by hMT+'s features.

      Another example of an unsubstantiated claim: the rationale for selecting V1 as the control region is based on the assertion that "it mediates the 2D rather than 3D visual domain (Born & Bradley, 2005)". That's not the point made in the Born & Bradley (2005) paper on MT. It's crucial to note that V1 is where the initial binocular convergence occurs in the cortex, i.e., where inputs from both the right and left eyes converge to generate a perception of depth.

      Thank you for pointing this out. We acknowledge the inappropriate citation of Born & Bradley (2005), which focuses solely on the structure and function of the visual area MT. However, we believe that choosing hMT+ as the region for 3D visual analysis and V1 as the control region is justified. Cumming and DeAngelis (Annu Rev Neurosci, 24:203–238, 2001) state that binocular disparity provides the visual system with information about the three-dimensional layout of the environment, and that the link between perception and neuronal activity is stronger in the extrastriate cortex (especially MT) than in the primary visual cortex. This supports our choice and emphasizes the relevance of hMT+ in our study. We have corrected the reference in the revised version.

      Results & Discussion

      (1) The missing correlation between SI and BDT is crucial to the rest of the analysis. The authors should discuss whether they replicated the pattern of results from Melnick et al. (2013) despite using only one WAIS subtest.

      We thank for the reviewer’s suggestion. We have placed it in the main text (Figure 3e).

      (2) ROIs: can the authors clarify if the results are based on bilateral MT+/V1 or just those in the left hemisphere? Can the authors plot the MRS scan area in V1? I would be surprised if it's precise to V1 and doesn't spread to V2/3 (which is fine to report as early visual cortex).

      We thank for the reviewer’s suggestion. We have drawn the V1 ROI MRS scanning area (Figure supplement 1). Using the template, we checked the coverage of V1, V2, and V3. Although the MRS overlap regions extend to V2 (3%) and V3 (32%), the major coverage of the MRS scanning area is in V1, with 65% overlap across subjects.

      (3) Did the authors examine V1 FC with either the frontal regions and/or the whole brain as a control analysis? If not, can the authors justify why V1 serves as the control region only in the MRS analysis but not in FC (Figure 4) or the mediation analysis (Figure 5)? That seems a little odd given that control analyses are needed to establish the specificity of the claim to MT+.

      We thank for the reviewer’s suggestion. We have done the V1 FC-behavior connection as control analysis (Figure supplement 7). Only positive correlations in the frontal area were detected, suggesting that in the 3D visuo-spatial intelligence task, V1 plays a role in feedforward information processing. However, hMT+, which showed specific negative correlations in the frontal, is involved in the inhibition mechanism. These results further emphasize the de-redundancy function of hMT+ in 3D visuo-spatial intelligence.

      Regarding the mediation analysis, since the GABA/Glu concentrations in V1 show no correlation with the BDT score, the prerequisite for applying a mediation analysis is not met.

      (4) It is not clear how to interpret the similarity or difference between panels a and b in Figure 4.

      We thank the reviewer for pointing this out. We have further interpreted the difference between panels a and b in the revised version. Panel a represents the BDT-score-correlated hMT+ FC, which clearly involves the frontal cortex, while panel b represents the SI-correlated hMT+ FC, which shows relatively fewer regions. The overlap region is what we are interested in, as it explains how local inhibitory mechanisms work in 3D visuo-spatial intelligence. In addition, we have revised Figure 4 to point out the overlap region.

      (5) SI is not relevant to the authors' a priori hypothesis, but is included in several mediation analyses. Can the authors do model comparisons between the ones in Figures 5c, d and Figure S6? In other words, is SI necessary in the mediation model? There seem to be discrepancies regarding the necessity of SI in Figures 5c/S6 vs. Figure 5d.

      We thank the reviewer for highlighting this point. The relationship between the Suppression Index (SI) and our a priori hypothesis is elaborated in our response to Reviewer #3, point (1). SI plays a crucial role in explaining how local inhibitory mechanisms function, at the psychological level, within the context of the 3D visuo-spatial task. Additionally, Figure 5c illustrates the interaction between the frontal cortex and hMT+, showing how the effects of the frontal cortex (BA46) on the Block Design Task are fully mediated by SI. This further underscores the significance of SI in our model.
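
      As background on how such an indirect effect can be tested, the sketch below bootstraps the indirect (a*b) effect in a simple X -> M -> Y model. The variable mapping (X = BA46-hMT+ FC, M = SI, Y = BDT score) is our illustrative assumption, and this generic sketch is not the serial mediation software used for Figure 5.

      ```python
      import numpy as np

      def indirect_effect(x, m, y, n_boot=5000, seed=0):
          """Percentile-bootstrap CI for the indirect (a*b) effect in X -> M -> Y."""
          x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
          rng = np.random.default_rng(seed)
          n = len(x)
          boots = np.empty(n_boot)
          for i in range(n_boot):
              idx = rng.integers(0, n, n)        # resample subjects with replacement
              xs, ms, ys = x[idx], m[idx], y[idx]
              a = np.polyfit(xs, ms, 1)[0]       # path a: X -> M slope
              # path b: coefficient on M in Y ~ X + M (controls for X)
              b = np.linalg.lstsq(np.c_[xs, ms, np.ones(n)], ys, rcond=None)[0][1]
              boots[i] = a * b
          lo, hi = np.percentile(boots, [2.5, 97.5])
          return boots.mean(), (lo, hi)  # a CI excluding 0 suggests mediation
      ```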

      (6) The sudden appearance of "efficient information" in Figure 6, referring to the neural efficiency hypothesis, raises concerns. Efficient visual information processing occurs throughout the visual cortex, starting from V1. Thus, it appears somewhat selective to apply the neural efficiency hypothesis to MT+ in this context.

      We thank the reviewer for highlighting this point. There is no doubt that V1 is involved in efficient visual information processing. However, in our results, V1 GABA shows no significant correlation with the BDT score, suggesting that efficient processing in V1 might not sufficiently account for individual differences in 3D visuo-spatial intelligence. Additionally, we will clarify our use of the neural efficiency hypothesis by incorporating it into the introduction of our paper to better frame our argument.

      Transparency Issues:

      (1) I don't think it's acceptable to make the claim that "All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary information". It is the results or visualizations of the data analysis, rather than the raw data themselves, that are presented in the paper/supplementary information.

      We thank the reviewer for pointing this out. We realize that such an expression could lead to confusion, and we have deleted it.

      (2) No GitHub link has been provided in the manuscript to access the source data, which limits the reproducibility and transparency of the study.

      We thank the reviewer for pointing this out. We have attached the GitHub link in the revised version.

      Minor:

      "Locates" should be replaced with "located" throughout the paper. For example: "To investigate this issue, this study selects the human MT complex (hMT+), a region located at the occipito-temporal border, which represents multiple sensory flows, as the target brain area."

      We thank the reviewer for pointing this out. We have revised it.

      Use "hMT+" instead of "MT+" to be consistent with the term in the literature.

      We thank the reviewer for pointing this out. We now use "hMT+" throughout, consistent with the literature.

      "Green circle" in Figure 1 should be corrected to match its actual color.

      We thank the reviewer for pointing this out. We have revised it.

      The abbreviation for the Wechsler Adult Intelligence Scale should be "WAIS," not "WASI."

      We thank the reviewer for pointing this out. We have revised it.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      (1) The figures and tables should be substantially improved.

      We thank the reviewer for pointing this out. We have improved the quality of some of the figures.

      (2) Please explain the sample size, and the differences between this study, Schallmo et al. (eLife, 2018), and Melnick (2013).

      We thank the reviewer for pointing this out. These questions were answered in the Public Review; we copy the answers here.

      (2.1) How was the sample size determined? Is it sufficient?

      Thank you to the reviewer for pointing this out. We used G*Power to determine our sample size. In the study by Melnick (2013), they reported a medium effect between SI and the Perceptual Reasoning sub-ability (r = 0.47). We used this r value as the correlation coefficient (ρ H1), setting the power at the commonly used threshold of 0.8 and the alpha error probability at 0.05. The required sample size is calculated to be 26. This ensures that our study has adequate power to yield valid statistical results. Furthermore, compared to earlier within-subject studies such as Schallmo et al.'s 2018 research, which used 22 subjects to examine GABA levels in MT+ and the early visual cortex (EVC), our study includes a sufficient dataset.

      (2.2) In Schallmo et al. (eLife, 2018), there was no correlation between GABA concentration and SI. How can the different results be justified here?

      Thank you to the reviewer for pointing this out. There are several differences between the two studies, ours and theirs:

      a. While the earlier study by Schallmo et al. (2018) employed 3T MRS, we utilize 7T MRS, enhancing our ability to detect and measure GABA with greater accuracy.

      b. Schallmo et al. (2018) chose to use bilateral hMT+ as the MRS measurement region, whereas we used the left hMT+. The reasons why we focus on the left hMT+ are described in our response to Reviewer #1, point (6). Briefly, the use of left MT/V5 as a target was motivated by studies demonstrating that left MT/V5 TMS is more effective at causing perceptual effects (Tadin et al., 2011).

      c. The MRS voxel in Schallmo et al. (2018) was 3 cm isotropic, whereas we applied a 2 cm isotropic voxel. This helps us locate hMT+ more precisely and exclude more white-matter signal.

      (3) Table 1 and Table Supplementary 1-3 contain many correlation results. But what are the main points of these values? Which values do the authors want to highlight? Why are only p-values shown with significance symbols in Table Supplementary 2?

      (3.1) what are the main points of these values?

      Thank you to the reviewer for pointing this out. These correlations represent the relationship between the behavioral tasks (SI/BDT) and resting-state functional connectivity. They indicate that the left hMT+ is involved in an efficient information integration network in the BDT task. In addition, the left hMT+'s surround suppression is associated with several hMT+-frontal connections. Furthermore, the overlapping regions between the two tasks indicate a shared underlying mechanism.

      (3.2) Which values do the authors want to highlight?

      Table 1 and Table Supplementary 1-3 present the preliminary analysis results for Table 2 and Table Supplementary 4-6, so we report all values there. Conversely, in Table 2 and Table Supplementary 4-6, we highlight in bold the significant correlations that survived multiple-comparison correction.

      (3.3) Why are only p-values shown with significance symbols in Table Supplementary 2?

      Thank you for pointing this out; it was a mistake. We have revised it and deleted the significance symbols.

      (4) Line 27, it is unclear to me what is "the canonical theory".

      We thank the reviewer for pointing this out. We have revised “the canonical theory" to “the prevailing opinion”.

      (5) Throughout the paper, the authors use "MT+", I would suggest using "hMT+" to indicate the human MT complex, and to be consistent with the human fMRI literature.

      We thank the reviewer for pointing this out. We have revised them and used "hMT+" to be consistent with the human fMRI literature.

      (6) At the beginning of the results section, I suggest including the total number of subjects. It is confusing what "31/36 in MT+, and 28/36 in V1" means.

      We thank the reviewer for pointing this out. We have included the total number of subjects at the beginning of the results section.

      (7) Line 138, "This finding supports the hypothesis that motion perception is associated with neural activity in MT+ area". This sentence is strange because it is a well-established finding in numerous human fMRI papers. I think the authors should be more specific about what this finding implies.

      We thank the reviewer for pointing this out. We have deleted the inappropriate sentence "This finding supports the hypothesis that motion perception is associated with neural activity in MT+ area".

      (8) There are no unit labels for the x- and y-axes in Figure 1. The only unit I see is for Conc, mmol per kg wet weight.

      We thank the reviewer for pointing this out. Figure 1 is a schematic and workflow chart, so labels for x- and y-axes are not needed. We believe this confusion might pertain to Figure 3. In Figures 3a and 3b, the MRS spectrum does not have a standard y-axis unit, as it varies with the individual physical conditions of the scanner; it is widely accepted that no y-axis unit is used. The x-axis unit is ppm, which indicates the chemical shift of the different metabolites. In Figure 3c, the BDT represents IQ scores, which do not have a standard unit. Similarly, in Figures 3d and 3e, the Suppression Index does not have a standard unit.

      (9) Although the correlations are not significant in Figure Supplements 2 & 3, please also include the correlation line and 95% confidence interval, and report the r values and p values (i.e., in a format similar to Figure 1C).

      We thank the reviewer for pointing this out. We have revised them.

      (10) There is no need to separate different correlation figures into Figure Supplementary 1-4. They can be combined into the same figure.

      We thank the reviewer for the suggestion. However, each correlation figure in the supplementary figures has its own specific topic and conclusion. The correlation figures in Supplementary Figure 1 indicate that GABA in V1 does not show any correlation with BDT and SI, illustrating that inhibition in V1 is unrelated to both 3D visuo-spatial intelligence and motion suppression processing. The correlations in Supplementary Figure 2 indicate that the excitation mechanism, represented by Glutamate concentration, does not contribute to 3D visuo-spatial intelligence in either hMT+ or V1. Supplementary Figure 3 validates our MRS measurements. Supplementary Figure 4 addresses potential concerns regarding the impact of outliers on correlation significance. Even after excluding two “outliers” from Figures 3d and 3e, the correlation results remain stable.

      (11) Line 213, as far as I know, the study (Melnick et al., 2013) is a psychophysical study and did not provide evidence that the spatial suppression effect is associated with MT+.

      We thank the reviewer for pointing this out. It was a mistake to use this reference, and we have revised it accordingly.

      (12) At the beginning of the results, I suggest providing more details about the motion discrimination tasks and the measurement of the BDT.

      We thank the reviewer for pointing this out. We have included a brief description of the task at the beginning of the Results section.

      (13) Please include the absolute duration thresholds of the small and large sizes of all subjects in Figure 1.

      We thank the reviewer for the suggestion. We have included these results in Figure 3.

      (14) Figure 5 is too small. The items in plots a and b are barely visible.

      We thank the reviewer for pointing this out. We have increased the size and resolution of Figure 5.

      Reviewer #2 (Recommendations For The Authors):

      Recommendations for improving the writing and presentation.

      I highly recommend editing the manuscript for readability and the use of the English language. I had significant difficulties following the rationale of the research due to issues with the way language was used.

      We thank the reviewer for pointing this out. We apologize for any shortcomings in our initial presentation. We have invited a native English speaker to revise our manuscript.

    1. Reviewer #1 (Public Review):

      In this revised manuscript, the authors have conducted epigenetic and transcriptomic profiling to understand how environmental chemicals such as BPS can cause epimutations that can propagate to future generations. They used somatic cells isolated from mice (Sertoli, granulosa), pluripotent cells to model preimplantation embryos (iPSCs), and cells to model the germline (PGCLCs). This enabled them to model sequential steps in germline development and to determine when and how epimutations occur. The major findings were that BPS induced unique epimutations in each cell type, albeit with qualitative and quantitative cell-specific differences; that these epimutations are prevalent in regions associated with estrogen-response elements (EREs); and that epimutations induced in iPSCs are corrected as they differentiate into PGCLCs, concomitant with the emergence of de novo epimutations. This study will be useful in understanding the multigenerational effects of EDCs and underlying mechanisms.

      Strengths include:

      (1) Using different cell types representing life stages of epigenetic programming and during which exposures to EDCs have different effects. This progression revealed information both about the correction of epimutations and the emergence of new ones in PGCLCs.

      (2) Work conducted by exposing iPSCs to BPS or vehicle, then differentiating to PGCLCs, revealed that novel epimutations emerged.

      (3) Relating epimutations to promoter and enhancer regions.

      During the review process, authors improved the manuscript through better organization, clarifying previous points from reviewers, and providing additional data.

    2. Reviewer #2 (Public Review):

      Summary:

      This manuscript uses cell lines representative of germline cells, somatic cells, and pluripotent cells to address the question of how the endocrine-disrupting compound BPS affects these various cells with respect to gene expression and DNA methylation. They find a relationship between the presence of estrogen receptor gene expression and the number of DNA methylation and gene expression changes. Notably, PGCLCs do not express estrogen receptors and, although they do have fewer changes, changes are nevertheless detected, suggesting a noncanonical pathway for BPS-induced perturbations. Additionally, there was a significant increase in the occurrence of BPS-induced epimutations near EREs in somatic and pluripotent cell types compared to germ cells. Epimutations in the somatic and pluripotent cell types were predominantly in enhancer regions, whereas those in the germ cell type were predominantly in gene promoters.

      Strengths:

      The strengths of the paper include the use of various cell types to address sensitivity of the lineages to BPS as well as the observed relationship between the presence of estrogen receptors and changes in gene expression and DNA methylation.

      Weaknesses:

      The weakness, which has been addressed by the authors, includes the fact that exposures are more complicated in a whole organism than in an isolated cell line.

    3. eLife assessment

      This important study, characterizing the epigenetic and transcriptomic response of a variety of cell types representative of somatic, germline, and pluripotent cells to BPS, reveals the cell type-specific changes in DNA methylation and the relationship with the genome sequence. The findings are convincing and provide a basis for future analyses in vivo. This work should be of interest to biomedical researchers who work on epigenetic reprogramming and epigenetic inheritance.

    4. Author response:

      The following is the authors’ response to the previous reviews.

      Reviewing editor’s list of items remaining to be addressed followed by our responses/actions:

      (1) The order and organization of the supplemental figures and tables are almost impossible to navigate. Please put them in order.

      All the sections from the previous Supplementary files have been divided into individual Supplementary files so that each can be referenced without confusion from the text. All of the references in the body of the text and the author responses have been updated to reflect this change.

      (2) The question of sample sizes was partially addressed, with authors stating that cell culture work in iPSCs and PGCLCs was done in replicates of 3. Sertoli and granulosa cells were generated from pooled preps - how many individuals, were they littermates? 

      Sertoli and granulosa primary cultures were generated from littermates and each prep used 5 animals (males for Sertoli cells and females for granulosa cells). These changes have been added to the body of the text on pages 39 and 40.

      (3) Authors need to discuss the limitations of doing work in triplicates. Their PCA (Supplement Figure 9) reveals that in several cases samples from the same treatment were not discriminated by PC1 and/or PC2. This is especially true in e and f, the variance of which was explained by PC1 for cell type, but for which treatments showed poor discrimination by PC2. Some discussion of the limitations of sample size should be provided.

      Additional text has been added to what is now Supplementary file 15 to acknowledge the limitation that the small number of replicates (three) imposes on the ability to resolve treatment differences by PCA in subplots e and f. However, we also note that the differences were sufficient to identify significant DMCs/DMRs/DEGs.

      Reviewer 2 also noted a potential weakness that “exposures are more complicated in a whole organism than in an isolated cell line.”

      We note that in our revised manuscript we included wording noting that despite the advantages of using an in vitro approach to deduce underlying molecular mechanisms, results of such in vitro studies “ultimately warrant validation of results discerned from studies of in vitro models to ensure they also reflect functions ongoing in the more complex and heterogeneous environment of the intact animal in vivo.” Thus we have endeavored to acknowledge the reviewer’s point.

      Reviewer #1 (Public Review): 

      Critiques/Comments: 

      (1) A problem with in vitro work is that homogeneous cell lines/cultures are, by nature, removed from the rest of the microenvironment. The authors need to discuss this.

      [Addressed on pages: 24-25] – We have added two sentences to the second paragraph of the Discussion section in which we now acknowledge this concern, but also point out that in vitro models of this sort also provide an experimental advantage in that they facilitate a deconvolution of the extensive complexity resident within the intact animal. Nevertheless, we acknowledge that this deconvolution requires ultimate validation of findings obtained within an in vitro model system to ensure they accurately recapitulate functions that occur in the intact animal in vivo.

      In response to Reviewer 2’s stated weakness of our study that “The weakness includes the fact that exposures are more complicated in a whole organism than in an isolated cell line,” please note that this added text includes the statement that despite the advantages of using an in vitro approach to deduce underlying molecular mechanisms, results of such in vitro studies “ultimately warrant validation of results discerned from studies of in vitro models to ensure they also reflect functions ongoing in the more complex and heterogeneous environment of the intact animal in vivo.” Thus we have endeavored to acknowledge the reviewer’s point.

      (2) What are n's/replicates for each study? Were the same or different samples used to generate the data for RNA sequencing, methylation beadchip analysis, and EM-seq? This clarification is important because if the same cultures were used, this would allow comparisons and correlations within samples.  

      Addressed on pages: 39-45 and in new Supplementary file 15 – Additional text has been added in the Methods section to indicate that all samples involving cell culture models (iPSCs and PGCLCs) came from a single XY iPS cell line aliquoted into replicates, and that all primary cultures (Sertoli and granulosa cells) were generated from pooled tissue preps from mice and then aliquoted into replicates. All experiments in the study were performed on three replicates. Because this experimental design did indeed allow for comparisons among samples, we have added a new Supplementary file 15, which displays PCA plots showing clustering among control and treatment datasets, respectively, as well as distinctions between each cluster representing each experimental condition.

      (3) In Figure 1, it is interesting that the 50 uM BPS dose mainly resulted in hypermethylation whereas 100 uM appears to be mainly hypomethylation. (This is based on the subjective appearance of graphs). The authors should discuss and/or present these data more quantitatively. For example, what percentage of changes were hypo/hypermethylation for each treatment? How many DMRs did each dose induce? For the RNA-seq results, again, what were the number of up/down-regulated genes for each dose?  

      Addressed on pages: 6-7 and in new Supplementary files 1-3  – The experiment shown in Figure 1 was designed to 1) serve as proof of principle that cells maintained in culture could be susceptible to EDC-induced epimutagenesis at all, 2) determine if any response observed would be dose-dependent, and 3) identify a minimally effective dose of BPS to be used for the remaining experiments in this study (which we identified as 1 μM). We agree that it is interesting that the 50 µM dose of BPS induced predominantly hypermethylation changes whereas the 1 µM and 100 µM doses induced predominantly hypomethylation changes, but are not in a position to offer a mechanistic explanation for this outcome at this time. As the results shown satisfied our primary objectives of demonstrating that exposure of cells in culture to BPS could indeed induce DNA methylation epimutations, that this occurs in a dose-dependent manner, and that a dose of as low as 1 µM of BPS was sufficient to induce epimutagenesis, the data obtained satisfied all of the initial objectives of this experiment. That said, in response to the reviewer’s request we have now added text on pages 6-7 alluding to new Supplementary files 1-3 indicating the total number of DMCs and DMRs, as well as the number of DEGs, detected in response to exposure to each dose of BPS shown in Figure 1, as well as stratifying those results to indicate the numbers of hyper- and hypomethylation epimutations and up- and down-regulated DEGs induced in response to each dose of BPS. While, as noted above, investigating the mechanistic basis for the difference in responses induced by the 50 µM versus 1 and 100 µM doses of BPS was beyond the scope of the study presented in this manuscript, we do find this result reminiscent of the “U-shaped” response curves often observed in toxicology studies. Importantly, this result does demonstrate the elevated resolution and specificity of analysis facilitated by our in vitro cell culture model system.

      (4) Also in Figure 1, were there DMRs or genes in common across the doses? How did DMRs relate to gene expression results? This would be informative in verifying or refuting expectations that greater methylation is often associated with decreased gene expression.  

      Addressed on pages: 6-7 and new Supplementary files 1-6 – In general, we observed a coincidence between changes in DNA methylation and changes in gene expression (Supplementary files 1-3). Pertaining directly to the reviewer’s question about the extent to which we observed common DMRs and DEGs across all doses, while we only found 3 overlapping DMRs conserved across all doses tested, we did find an average of 51.25% overlap in DMCs and an average of 80.45% overlap in DEGs across iPSCs exposed to the different doses of BPS shown in Figure 1. In addition, within each dose of BPS tested in iPSCs, we also found that there was an overlap between DMCs and the promoters or gene bodies of many DEGs (Supplementary file 5). Specifically within gene promoters, we observed a correlation between hypermethylated DMCs and decreased gene expression and hypomethylated DMCs and increased gene expression, respectively (Supplementary file 6).
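
      For readers who wish to reproduce this kind of cross-dose comparison, the overlap computation reduces to simple set arithmetic. The sketch below is a minimal Python illustration with hypothetical CpG identifiers; in particular, using the union of the two sets as the denominator for the overlap percentage is our assumption here, not a detail taken from the analysis pipeline.

      ```python
      # Minimal sketch: pairwise overlap of DMC sets across BPS doses.
      # The CpG identifiers are hypothetical placeholders.
      from itertools import combinations

      dmcs_by_dose = {
          "1uM":   {"cg0001", "cg0002", "cg0003", "cg0007"},
          "50uM":  {"cg0002", "cg0003", "cg0004", "cg0008"},
          "100uM": {"cg0002", "cg0003", "cg0005", "cg0009"},
      }

      for a, b in combinations(dmcs_by_dose, 2):
          shared = dmcs_by_dose[a] & dmcs_by_dose[b]
          pct = 100 * len(shared) / len(dmcs_by_dose[a] | dmcs_by_dose[b])
          print(f"{a} vs {b}: {len(shared)} shared DMCs ({pct:.1f}% of union)")

      # DMCs conserved across every dose tested:
      conserved = set.intersection(*dmcs_by_dose.values())
      print("conserved across all doses:", sorted(conserved))
      ```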

      (5) In Figure 2, was there an overlap in the hypo- and/or hyper-methylated DMCs? Please also add more description of the data in 2b to the legend including what the dot sizes/colors mean, etc. Some readers (including me) may not be familiar with this type of data presentation. Some of this comes up in Figure 4, so perhaps allude to this earlier on, or show these data earlier.  

      Addressed on pages: 8-9 and new Supplementary file 4 – We observed an average of 11.05% overlapping DMCs between different pairs of cell types, but we did not observe any DMCs that were shared among all four cell types. Indeed, this limited overlap of DMCs among different cell types exposed to BPS was the primary motivation for the analysis described in Figure 2. Thus, instead of focusing solely on direct overlap between specific DMCs, we instead examined similarities among the different cell types tested in the occurrence of epimutations within different annotated genomic regions. To better describe this, we have now added additional text to page 9. We have also added more detail to the legend for Figure 2 on page 8 to more clearly explain the significance of the dot sizes and colors: the dot sizes indicate the relative number of differentially methylated probes detected within each specific annotated genomic region, and the dot colors indicate the calculated enrichment score reflecting the relative abundance of epimutations occurring within that region. The enrichment score is calculated by iterating down the list of DMCs, increasing a running-sum statistic when a DMC falls within the annotated genomic region of interest and decreasing the sum when it does not; the magnitude of each increment depends upon the relative occurrence of DMCs within that region (see the sketch below).
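
      In pseudocode terms, this running sum resembles a GSEA-style enrichment walk. The minimal Python sketch below illustrates the logic only; the equal-weight increments and the hypothetical inputs are simplifying assumptions, not the exact weighting used in our pipeline.

      ```python
      def enrichment_score(ranked_dmcs, region_members):
          """GSEA-style running-sum over a ranked DMC list (simplified sketch)."""
          hits = sum(1 for d in ranked_dmcs if d in region_members)
          misses = len(ranked_dmcs) - hits
          if hits == 0 or misses == 0:
              return 0.0
          running, peak = 0.0, 0.0
          for dmc in ranked_dmcs:
              if dmc in region_members:
                  running += 1.0 / hits    # step up when the DMC lies in the region
              else:
                  running -= 1.0 / misses  # step down otherwise
              peak = max(peak, abs(running))  # track the maximum deviation
          return peak

      ranked = ["cg01", "cg02", "cg03", "cg04", "cg05", "cg06"]  # hypothetical ranking
      print(enrichment_score(ranked, {"cg01", "cg02", "cg05"}))
      ```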

      (6) iPSCs were derived from male mice MEFs, and subsequently used to differentiate into PGCLCs. The only cell type from an XX female is the granulosa cells. This might be important, and should be mentioned and its potential significance discussed (briefly).  

      Addressed on page: 29 – We have added a new paragraph just before the final paragraph of the Discussion section in which we acknowledge that most of the cell types analyzed during our study were XY-bearing “male” cells and that the manner in which XX-bearing “female” cells might respond to similar exposures could differ from the responses we observed in XY cells. However, we also noted that our assessment of XX-bearing granulosa cells yielded results very similar to those seen in XY Sertoli cells suggesting that, at least for differentiated somatic cell types, there does not appear to be a significant sex-specific difference in response to exposure to a similar dose of the same EDC. That said, we also acknowledged that in cell types in which dosage compensation based on X-chromosome inactivation is not in place, differences between XY- and XX-bearing cells could accrue.

      (7) EREs are only one type of hormone response element. The authors make the point that other mechanisms of BPS action are independent of canonical endocrine signaling. Would authors please briefly speculate on the possibility that other endocrine pathways including those utilizing AREs or other HREs may play a role? In other words, it may not be endocrine signaling independent. The statement that the differences between PGCLCs and other cells are largely due to the absence of ERs is overly simplistic.  

      Addressed on page: 11 and in a new Supplementary file 8 – Previous reports have indicated that BPS does not have the capacity to bind the androgen receptor (Pelch et al., 2019; Yang et al., 2024). However, there have been reports indicating that BPS can interact with other endocrine receptors, including PPARγ and RXRα, which play a role in lipid accumulation and can potentially be linked to obesity phenotypes (Gao et al., 2020; Sharma et al., 2018). To address the reviewer’s comment we assessed the expression of a panel of hormone receptors, including PPARγ, RXRα, and AR, in each of the cell types examined in our study, and these results are now shown in a new Supplementary file 8. We show that in addition to not expressing either estrogen receptor (ERα or ERβ), germ cells also do not express any of the other endocrine receptors we tested, including AR, PPARγ, and RXRα. Thus we now note that these results support our suggestion that the induction of epimutations we observed in germ cells in response to exposure to BPS appears to reflect disruption of non-canonical endocrine signaling. We also note that non-canonical endocrine signaling is well established (Brenker et al., 2018; Ozgyin et al., 2015; Song et al., 2011; Thomas and Dong, 2006). Thus we feel the suggestion that the effects of BPS exposure could conceivably reflect disruption of either canonical or non-canonical signaling in any cell type is well justified, and our data suggest that both of these effects appear to have accrued in the cells examined in our study, as suggested in the text of our manuscript.

      (8) Interpretation of data from the GO analysis is similarly overly simplistic. The pathways identified and discussed (e.g. PI3K/AKT and ubiquitin-like protease pathways) are involved in numerous functions, both endocrine and non-endocrine. Also, are the data shown in Figure 6a from all 4 cell types? I am confused by the heatmap in 6c, which genes were significantly affected by treatment in which cell types?  

      Addressed on pages: 19-21 – Per the reviewer’s request, we have added text to indicate that Figure 6a does indeed show data from all four cell types examined. We have also modified the text to further clarify that Figure 6c displays the expression of other G protein-coupled receptors that are expressed at similar, if not higher, levels than either ER in all cell types examined, and that these have been shown to have the potential to bind either 17β-estradiol or BPA in rat models. As alluded to by the reviewer, this is indicative of a wide variety of distinct pathways and/or functions that can potentially be impacted by exposure to an EDC such as BPS. Thus, we have attempted to acknowledge the reviewer’s primary point that BPS may interact with a variety of receptors or other factors involved in a wide variety of different pathways and functions. Importantly, this illustrates the strength of our model system in that it can be used to identify potentially impacted target pathways that can then be pursued further as deemed appropriate.

      (9) In Figure 7, what were the 138 genes? Any commonalities among them? 

      Addressed on page: 22 and in new Supplementary files 13 and 14 – We have now added a new supplemental Excel file (Supplementary file 13) that lists the 138 overlapping conserved DEGs that did not become reprogrammed/corrected during the transition from iPSCs to PGCLCs. In addition, we have added new text on page 22 and a new Supplementary file 14, which displays KEGG analysis of pathways associated with these 138 retained DEGs. We find that these genes are primarily involved in cell cycle and apoptosis pathways which, interestingly, have the potential to be linked to cancer development, a process often associated with disruptions in chromatin architecture.

      (10) The Introduction is very long. The last paragraph, beginning line 105, is a long summary of results and interpretations that better fit in a Discussion section.

      Addressed on page: 6 – We have now significantly reduced the length and scope of the final paragraph of the Introduction per the reviewer’s recommendation.

      (11) Provide some details on husbandry: e.g. were they bred on-site? What food was given, and how was water treated? These questions are to get at efforts to minimize exposure to other chemicals.  

      Addressed on page: 37 – We have added additional text detailing that all mice used in the project were bred onsite, that water was non-autoclaved conventional RO water, and that we selected 5V5R extruded feed for the mice used in this study because it is highly controlled for the presence of isoflavones and has been certified for use in estrogen-sensitive animal protocols.

      Reviewer #2 (Public Review): 

      Summary: 

      This manuscript uses cell lines representative of germline cells, somatic cells, and pluripotent cells to address the question of how the endocrine-disrupting compound BPS affects these various cells with respect to gene expression and DNA methylation. They find a relationship between the presence of estrogen receptor gene expression and the number of DNA methylation and gene expression changes. Notably, PGCLCs do not express estrogen receptors and, although they do have fewer changes, changes are nevertheless detected, suggesting a noncanonical pathway for BPS-induced perturbations. Additionally, there was a significant increase in the occurrence of BPS-induced epimutations near EREs in somatic and pluripotent cell types compared to germ cells. Epimutations in the somatic and pluripotent cell types were predominantly in enhancer regions, whereas those in the germ cell type were predominantly in gene promoters.

      Strengths: 

      The strengths of the paper include the use of various cell types to address the sensitivity of the lineages to BPS as well as the observed relationship between the presence of estrogen receptors and changes in gene expression and DNA methylation. 

      Weaknesses: 

      The weaknesses include the lack of reporting of replicates, superficial bioinformatic analysis, and the fact that exposures are more complicated in a whole organism than in an isolated cell line. 

      Recommendations for the authors: please note that you control which revisions to undertake from the public reviews and recommendations for the authors. 

      Reviewer #2 (Recommendations For The Authors): 

      Overall, this is an intriguing paper but more transparency in the replicates and methods and a more rigorous bioinformatic treatment of the data are required. 

      Specific comments: 

      (1) End of abstract "These results suggest a unique mechanism by which an EDC-induced epimutated state may be propagated transgenerationally following a single exposure to the causative EDC." This is overly speculative for an abstract. There is only epigenetic inheritance following mitosis or differentiation presented in this study. There is no meiosis and therefore no ability to assess multi- or transgenerational inheritance. 

      Addressed on page: 2 – We have modified the text at the end of the abstract to more precisely reflect our intended conclusions based on our data. In our view, the ability of induced epimutations to transcend meiosis per se is not as relevant to the mechanism of transgenerational inheritance as their ability to transcend the major waves of epigenetic reprogramming that normally occur during development of the germ line. In this regard, the transition from pluripotent iPSCs to germline PGCLCs has been shown to recapitulate at least the first portion of normal germline reprogramming, and our data now provide novel insight into the fate of induced epimutations during this process. Specifically, we show that a prevalence of epimutations was conserved during the iPSC → germ cell transition, but that very few (< 5%) of the specific epimutations present in the BPS-exposed iPSCs were retained when those cells were induced to form PGCLCs. Rather, we observed apparent correction of a large majority of the initially induced epimutations during this transition, but this was accompanied by the apparent de novo generation of novel epimutations in the PGCLCs. We suggest, based on other recent reports in the literature, that this is a result of the BPS exposure inducing changes in the chromatin architecture of the exposed iPSCs, such that when the normal germline reprogramming mechanism is imposed on this disrupted chromatin template there is both correction of many existing epimutations and genesis of many novel epimutations. This observation has the potential to explain the long-standing question of why the prevalence of epimutations persists across multiple generations despite the occurrence of epigenetic reprogramming during each generation. Nevertheless, as noted above, we have modified the text at the end of the abstract to temper this interpretation given that it is still somewhat speculative at this point.

      (2) Doses used in the experiments. One needs to be careful when stating that the dose used is "below FDA's suggested safe environmental level established for BPA" because a different bisphenol is being used here (BPA vs BPS) and the safe level is that which the entire organism experiences. It is likely that cell lines experience a higher effective dose.  

      Addressed on pages: 3, 5, and 26 – We have now made a point of noting that our reference to an EPA-recommended “safe dose” of BPA was for humans and/or intact animals. Changes to this effect have been made in the second and sixth paragraphs of the Introduction section. In addition, we have added text at the end of the fourth paragraph of the Discussion section acknowledging that, as the reviewer suggests, the same dose of an EDC could exert greater effects on cells in a homogeneous culture than on the same cell type within an intact animal given the potential for mitigating metabolic effects in the latter. However, we also note that the ability we demonstrated to quantify the effects of such exposures on the basis of numbers of epimutations (DMCs or DMRs) induced could potentially be used in future studies to study this question by assessing the effects of a specific dose of a specific EDC on a specific cell type when exposed either within a homogeneous culture or within an intact animal.

      (3) Figure 1: In the dose response, what was the overlap in DMCs and DEGs among the 3 doses? Are the responses additive, synergistic, or completely non-overlapping? This is an important point that should be addressed. 

      Addressed on pages: 6-7 and in Supplementary files 1-5 – Please see our response to Reviewer 1 critique #4 above, where we address similar concerns. While we do find overlap among the different doses with respect to the DMCs, DMRs, and DEGs displayed in Figure 1, we found the effect to be only partially additive rather than synergistic in any apparent manner. The fold increase in DMCs, DMRs, and DEGs resulting from raising the dose from 1 μM to 50 μM ranged from 2.5x to 4.4x, well below the 50x increase that would have been expected if the response scaled strictly with dose, and the effect increased even less, if at all, between the 50 μM and 100 μM doses of BPS. Finally, as now noted in the Discussion section on page 25, our conclusion is that these results display a limited dose-dependent effect that was partially additive but plateaued at the highest doses tested.

      (4) Methods: How many times was each exposure performed on a given cell type? This information should be in the figure legends and methods. In the case of multiple exposures for a given line, do the biological replicates agree? 

      Addressed on pages: 39-45 and in new Supplementary file 15 –  Please see our response to Reviewer 1 critique #2 where we address similar concerns with newly added text and analysis. We now note repeatedly on pages 39-45 that each analysis was conducted on three replicate samples, and we display the similarity among those replicates graphically in a new Supplementary file 15.

      (5) DNA methylation analyses. Very little analysis is presented on the BeadChip array other than hypermethylated/hypomethylated and genomic regions of DMCs. What is the range of methylation changes? Does it vary between hypo vs. hyper DMCs? How many array experiments were performed (biological replicates) and what stats were used to determine the DMCs? Are there DMCs in common among the various cell types? As an example of more meaningful analysis, one can plot the %5mC over a given array for comparisons between control and treated cell types. For more granularity, the %5mC can be presented according to the element type (enhancers vs promoters).

      Addressed on pages: 10 and 39-45 and in new Supplementary files 1-5 and 15 – Please see our response to Reviewer 1 critique #2 above, where we address similar concerns regarding the number of biological replicates used in this study. DMCs on the Infinium array are identified using mixed linear models. This general supervised learning framework identifies CpG loci at which differential methylation is associated with known control vs. treated covariates. CpG probes on the array were defined as differentially methylated when they met both p-value and FDR (≤ 0.05) significance thresholds between treatment and control samples for each cell type analyzed (see the sketch below). The range of medians across all samples was 0.0059 to 0.0278 for hypermethylated beta values and -0.0179 to -0.0033 for hypomethylated beta values. As noted above, we did observe overlap in DMCs between cell types: an average of 11.05% of DMCs overlapped between two or more cell types, but no DMCs were shared among all four cell types. We have added additional text on page 9 and new Supplementary files 1-5 to more clearly describe that this limited direct overlap of DMCs was the underlying motivation for the analysis described in Figure 2. Finally, the enrichment dot plots shown in Figure 2 provide the information the reviewer requested regarding the %5mC observed at different annotated genomic element types.
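
      The dual-threshold logic can be illustrated with a minimal sketch. Here an ordinary least-squares fit stands in for the array pipeline’s mixed linear models, and the beta-value matrix and design are hypothetical; this is an illustration of the thresholding, not the array-specific implementation.

      ```python
      # Sketch: per-probe testing with p-value and FDR (<= 0.05) thresholds.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(0)
      betas = rng.uniform(0, 1, size=(1000, 6))   # beta values: 1000 probes x 6 samples
      treated = np.array([0, 0, 0, 1, 1, 1])      # 3 control vs. 3 BPS-treated samples

      design = sm.add_constant(treated.astype(float))
      pvals = np.array([
          sm.OLS(probe, design).fit().pvalues[1]  # p-value of the treatment term
          for probe in betas
      ])
      _, fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
      dmc_mask = (pvals <= 0.05) & (fdr <= 0.05)  # dual-threshold DMC call
      print(f"{dmc_mask.sum()} probes called as DMCs")
      ```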

      (6) The investigators correlate the number of DMCs in a given cell type with the presence of estrogen receptors. Does the correlation extend to the methylation difference (delta beta) at the statistically different probes?

      Addressed in a new Supplementary file 7 – We have added a new Supplementary file 7 in which we provide data addressing this question. In brief, we find that the delta betas of probes enriched at enhancer regions and associated with relative proximity to ERE elements in Sertoli cells, granulosa cells, and iPSCs appear very similar to those associated with DMCs not located within these enriched regions. However, when we compared the similarity of the two data sets with goodness-of-fit tests, we found these relatively small differences were, in fact, statistically significant based on a two-sample Kolmogorov-Smirnov test. These significant differences appear to indicate higher variability among the delta betas associated with hypomethylation, but not hypermethylation, changes occurring at DMCs associated with enhancers, potentially suggesting a greater tendency for BPS exposure to induce hypomethylation rather than hypermethylation changes, at least in these specific regions.
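
      For illustration, such a comparison can be run with a standard two-sample KS test; the delta-beta arrays below are hypothetical stand-ins for the values reported in Supplementary file 7.

      ```python
      # Sketch: comparing delta-beta distributions at enriched vs. non-enriched DMCs.
      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(1)
      delta_enriched = rng.normal(-0.010, 0.006, size=400)  # DMCs in enriched regions
      delta_other = rng.normal(-0.008, 0.004, size=400)     # all remaining DMCs

      stat, p = ks_2samp(delta_enriched, delta_other)
      print(f"KS statistic = {stat:.3f}, p = {p:.2e}")
      ```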

      (7) Methylation changes relative to EREs are presented in multiple figures. Are other sequences enriched in the DMCs? 

      Addressed in a new Supplementary file 11 – We profiled the genomic sequence within 500 bp of cell type-specific enriched DMCs, which were associated with enhancer regions in Sertoli, granulosa, and iPS cells or with transcription factor binding sites in PGCLCs, to identify higher-abundance motif sequences. We then compared any motifs identified against the JASPAR database to find transcription factors that could potentially bind these regions. Interestingly, we found that the two most common motifs across all cell types were associated with either the chromatin remodeling transcription factor HMG1A or the pluripotency factor KLF4.
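
      As a schematic of the windowing step only (a real analysis would score JASPAR position weight matrices with a dedicated motif-discovery tool), one could tally candidate motif matches within ±500 bp of each DMC as follows; the genome, coordinates, and motif pattern are all hypothetical.

      ```python
      # Sketch: tally a candidate motif within +/-500 bp of each DMC coordinate.
      import random
      import re

      random.seed(0)
      genome = {"chr1": "".join(random.choice("ACGT") for _ in range(20_000))}
      dmcs = [("chr1", 1_200), ("chr1", 8_400)]   # hypothetical (chromosome, position)
      motif = re.compile("GC[AT]GC")              # hypothetical candidate motif

      for chrom, pos in dmcs:
          window = genome[chrom][max(0, pos - 500): pos + 500]
          n = len(motif.findall(window))
          print(f"{chrom}:{pos} -> {n} motif matches within +/-500 bp")
      ```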

      (8) Please present a correlation plot between the methylation differences and the adjacent DEGs. Again, the absence of consideration of the absolute changes in methylation and gene expression minimizes the impact of the data. 

      Addressed on pages 6, 7, and 17 and in a new Supplementary file 6 – We analyzed the relationship between DMCs at DEG promoter regions and the corresponding change in expression of each DEG. Our data support a relationship in which up-regulated genes show decreased methylation in their promoter regions and down-regulated genes show increased methylation at their promoter regions, although there were some exceptions to this relationship.

      (9) EM-Seq is mentioned in Figure 7 and in the material and methods. Where is it used in this study? 

      Addressed on page 22 – We now note in the text on page 22 that EM-seq was used during experiments assessing the propagation of BPS-induced epimutations during the iPSC → EpiLC → PGCLC cell state transitions, to gather higher-resolution data on DNA methylation changes at the whole-epigenome level.

      References

      Brenker C, Rehfeld A, Schiffer C, Kierzek M, Kaupp UB, Skakkebæk NE, Strünker T. 2018. Synergistic activation of CatSper Ca2+ channels in human sperm by oviductal ligands and endocrine disrupting chemicals. Hum Reprod 33:1915–1923. doi:10.1093/humrep/dey275

      Gao P, Wang L, Yang N, Wen J, Zhao M, Su G, Zhang J, Weng D. 2020. Peroxisome proliferator-activated receptor gamma (PPARγ) activation and metabolism disturbance induced by bisphenol A and its replacement analog bisphenol S using in vitro macrophages and in vivo mouse models. Environ Int 134. doi:10.1016/j.envint.2019.105328

      Ozgyin L, Erdos E, Bojcsuk D, Balint BL. 2015. Nuclear receptors in transgenerational epigenetic inheritance. Prog Biophys Mol Biol. doi:10.1016/j.pbiomolbio.2015.02.012

      Pelch KE, Li Y, Perera L, Thayer KA, Korach KS. 2019. Characterization of Estrogenic and Androgenic Activities for Bisphenol A-like Chemicals (BPs): In Vitro Estrogen and Androgen Receptors Transcriptional Activation, Gene Regulation, and Binding Profiles. Toxicol Sci 172:23–37. doi:10.1093/toxsci/kfz173

      Sharma S, Ahmad S, Khan MF, Parvez S, Raisuddin S. 2018. In silico molecular interaction of bisphenol analogues with human nuclear receptors reveals their stronger affinity vs. classical bisphenol A. Toxicol Mech Methods 28:660–669. doi:10.1080/15376516.2018.1491663

      Song K-H, Lee K, Choi H-S. 2011. Endocrine Disrupter Bisphenol A Induces Orphan Nuclear Receptor Nur77 Gene Expression and Steroidogenesis in Mouse Testicular Leydig Cells. Endocrinology 143:2208–2215. doi:10.1210/endo.143.6.8847

      Thomas P, Dong J. 2006. Binding and activation of the seven-transmembrane estrogen receptor GPR30 by environmental estrogens: A potential novel mechanism of endocrine disruption. J Steroid Biochem Mol Biol 102:175–179. doi:10.1016/j.jsbmb.2006.09.017

      Yang Z, Wang L, Yang Y, Pang X, Sun Y, Liang Y, Cao H. 2024. Screening of the Antagonistic Activity of Potential Bisphenol A Alternatives toward the Androgen Receptor Using Machine Learning and Molecular Dynamics Simulation. Environ Sci Technol 58:2817–2829. doi:10.1021/acs.est.3c09779

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:  

      Reviewer #1 (Public Review):  

      Summary:  

      Heer and Sheffield used 2 photon imaging to dissect the functional contributions of convergent dopamine and noradrenaline inputs to the dorsal hippocampus CA1 in head-restrained mice running down a virtual linear path. Mice were trained to collect water rewards at the end of the track and on test days, calcium activity was recorded from dopamine (DA) axons originating in the ventral tegmental area (VTA, n=7) and noradrenaline axons from the locus coeruleus (LC, n=87) under several conditions. When mice ran laps in a familiar environment, VTA DA axons exhibited ramping activity along the track that correlated with distance to reward and velocity to some extent, while LC input activity remained constant across the track, but correlated invariantly with velocity and time to motion onset. A subset of recordings taken when the reward was removed showed diminished ramping activity in VTA DA axons, but no changes in the LC axons, confirming that DA axon activity is locked to reward availability. When mice were subsequently introduced to a new environment, the ramping to reward activity in the DA axons disappeared, while LC axons showed a dramatic increase in activity lasting 90 s (6 laps) following the environment switch. In the final analysis, the authors sought to disentangle LC axon activity induced by novelty vs. behavioral changes induced by novelty by removing periods in which animals were immobile and established that the activity observed in the first 2 laps reflected novelty-induced signal in LC axons.  

      Strengths:  

      The results presented in this manuscript provide insights into the specific contributions of catecholaminergic input to the dorsal hippocampus CA1 during spatial navigation in a rewarded virtual environment, offering a detailed analysis of the resolution of single axons. The data analysis is thorough and possible confounding variables and data interpretation are carefully considered.  

      Weaknesses:  

      Aspects of the methodology, data analysis, and interpretation diminish the overall significance of the findings, as detailed below.  

      The LC axonal recordings are well-powered, but the DA axonal recordings are severely underpowered, with recordings taken from a mere 7 axons (compared to 87 LC axons).

      Additionally, 2 different calcium indicators with differential kinetics and sensitivity to calcium changes (GCaMP6S and GCaMP7b) were used (n=3, n=4 respectively) and the data pooled. This makes it very challenging to draw any valid conclusions from the data, particularly in the novelty experiment. The surprising lack of novelty-induced DA axon activity may be a false negative. Indeed, at least 1 axon (axon 2) appears to be showing a novelty-induced rise in activity in Figure 3C. Changes in activity in 4/7 axons are also referred to as a 'majority' occurrence in the manuscript, which again is not an accurate representation of the observed data.  

      We appreciate the reviewer's detailed feedback regarding the analysis of VTA axons in our dataset. The relatively low sample size for VTA axons is due to their sparsity in the dCA1 region of the hippocampus and the inherent difficulty in recording from these axons. VTA axons are challenging to capture due to their low baseline fluorescence and long-range axon segments, resulting in a typical yield of only a single axon per field of view (FOV) per animal. In contrast, LC axons are more abundant in dCA1.

      To address the disparity in sample sizes between LC and VTA axons, we down-sampled the LC axons to match the number of VTA axons, repeating this process 1000 times to create a distribution. However, we acknowledge the reviewer's concern that the relatively low sample size for VTA axons might result in insufficient sampling of this population. Increasing the baseline expression of GCaMP to record from VTA axons requires several months, limiting our ability to quickly expand the sample size.
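
      For clarity, the down-sampling procedure can be sketched as follows; the per-axon values and the empirical p-value convention are hypothetical illustrations of the approach rather than our exact statistics.

      ```python
      # Sketch: build a reference distribution by repeatedly down-sampling LC
      # axons to the VTA sample size (1000 repeats); values are hypothetical.
      import numpy as np

      rng = np.random.default_rng(2)
      lc = rng.normal(0.5, 0.2, size=87)   # hypothetical per-axon statistics (LC)
      vta = rng.normal(0.2, 0.2, size=9)   # hypothetical per-axon statistics (VTA)

      null = np.array([
          rng.choice(lc, size=vta.size, replace=False).mean()
          for _ in range(1000)
      ])
      # Two-sided empirical p-value for the observed VTA mean (one convention):
      obs = abs(vta.mean() - null.mean())
      p = (np.sum(np.abs(null - null.mean()) >= obs) + 1) / (null.size + 1)
      print(f"VTA mean = {vta.mean():.3f}, empirical p = {p:.3f}")
      ```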

      In response to the reviewer's comments, we have added recordings from 2 additional VTA axons, increasing the sample size from 7 to 9. We re-analyzed all data from the familiar environment with n=9 VTA axons, comparing them to down-sampled LC axons as previously described. However, the additional axons were not recorded in the novel environment. We agree with the reviewer that the lack of novelty-induced DA axon activity may be a false negative. To address this, we have revised the description of our results to include the following sentence:

      “However, 1 VTA ROI showed an increase in activity immediately following exposure to novelty, indicating heterogeneity across VTA axons in CA1, and the lack of a novelty signal on average may be due to a small sample size.”

      Regarding the use of two different GCaMP constructs, we understand the reviewer's concern. We used GCaMP6s and GCaMP7b variants to determine if one would improve the success rate of recording from VTA axons. Given the long duration of these experiments and the low yield, we pooled the data from both GCaMP variants to increase statistical power. However, we recognize the importance of verifying that there are no differences in the signals recorded with these variants.

      With the addition of 2 VTA DA axons expressing GCaMP6s, we now have n=5 GCaMP6s and n=4 GCaMP7b VTA DA axons. This allowed us to compare the activity of the two sensors in the familiar environment. As shown in new Supplementary Figure 2, both sets of axons responded similarly to the variables measured: position in VR, time to motion onset, and animal velocity (although the GCaMP6s-expressing axons showed stronger correlations). Since all LC axons recorded expressed GCaMP6s, we also specifically compared VTA GCaMP6s axons to LC GCaMP6s axons (Supp Fig. 3). Our conclusions remained consistent when comparing this subset of VTA axons to LC axons.

      Overall, our paper now includes comparisons of combined VTA axons (n=9) and separately the GCaMP6s-expressing VTA axons (n=5) with LC axons. Both datasets support our initial conclusions that VTA axons signal proximity to reward, while LC axons encode velocity and motion initiation in familiar environments.

      The authors conducted analysis on recording data exclusively from periods of running in the novelty experiment to isolate the effects of novelty from novelty-induced changes in behavior. However, if the goal is to distinguish between changes in locus coeruleus (LC) axon activity induced by novelty and those induced by motion, analyzing LC axon activity during periods of immobility would enhance the robustness of the results.  

      We appreciate the reviewer's insightful suggestion to analyze LC axon activity during periods of immobility to distinguish between changes induced by novelty and those induced by motion. This additional analysis would indeed strengthen our conclusions regarding the LC novelty signal.

      In response to this suggestion, we performed the same analysis as before, but focused on periods of immobility. Our findings indicate that following exposure to novelty, there was a significant increase in LC activity specifically during immobility. This supports the idea that LC axons produce a novelty signal that is independent of novelty-induced behavioral changes. The results of this analysis are now presented in new Supplementary Figure 5b.
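
      In practice, this analysis amounts to masking imaging frames by a velocity threshold. A minimal sketch with hypothetical traces is shown below, using the <0.2 cm/s immobility definition stated in our Methods.

      ```python
      # Sketch: restrict axon activity to immobility frames via a velocity mask.
      import numpy as np

      rng = np.random.default_rng(3)
      velocity = rng.uniform(0, 30, size=10_000)  # cm/s, one value per imaging frame
      dff = rng.normal(0, 1, size=10_000)         # axon dF/F, one value per frame

      immobile = velocity < 0.2                   # immobility mask (< 0.2 cm/s)
      print(f"{immobile.sum()} immobile frames, "
            f"mean dF/F during immobility = {dff[immobile].mean():.3f}")
      ```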

      The authors attribute the ramping activity of the DA axons to the encoding of the animals' position relative to reward. However, given the extensive data implicating the dorsal CA1 in timing, and the remarkable periodicity of the behavior, the fact that DA axons could be signalling temporal information should be considered.  

      This is an insightful comment regarding the potential role of VTA DA axons in signaling temporal information. We agree that VTA DA axons could indeed be encoding temporal information, as previous work from our lab has shown that these axons exhibit ramping activity when averaged by time to reward (Krishnan et al., 2022).

      To address this, we have now examined DA axon activity relative to time to reward, as shown in new Supplementary Figure 4. Our analysis confirms that these axons ramp up in activity relative to time to reward. Given the periodicity of our mice's behavior in these experiments, as the reviewer correctly points out, we are unable to distinguish between spatial proximity to reward and time to reward. We have added a sentence to our paper highlighting this limitation and stating that further experiments are necessary to differentiate these two variables.

      Krishnan, L.S., Heer, C., Cherian, C., Sheffield, M.E. Reward expectation extinction restructures and degrades CA1 spatial maps through loss of a dopaminergic reward proximity signal. Nat Commun 13, 6662 (2022).

      The authors should explain and justify the use of a longer linear track (3m, as opposed to 2m in the DAT-cre mice) in the LC axon recording experiments.  

      We appreciate the reviewer's insightful comment regarding the use of a longer linear track (3m, as opposed to 2m in the DAT-cre mice) in the LC axon recording experiments. The choice of a 3m track for LC axon recordings was made to align with a previous experiment from our lab (Dong et al., 2021), in which mice were exposed to a novel 3m track while CA1 pyramidal cell populations were recorded. In that study, we detailed the time course of place field formation within the novel track. Our current hypothesis is that LC axons signal novelty, and we aimed to investigate whether the time course of LC axon activity aligns with the time course of place field formation. This hypothesis, and the potential role of LC axons in facilitating plasticity for new place field formation, is further discussed in the Discussion section of our paper.

      For the VTA axon recordings, we utilized a 2m track, consistent with another recent study from our lab (Krishnan et al., 2022), where reward expectation was manipulated, and CA1 pyramidal cell populations were recorded. By matching the track length to this prior study, we aimed to explore how VTA dopaminergic inputs to CA1 might influence CA1 population dynamics along the track under conditions of varying reward expectations.

      We acknowledge that using different track lengths for LC and VTA recordings introduces a variable that could potentially confound direct comparisons. To address this, we normalized the track lengths for our LC versus VTA comparison analysis. This normalization allowed us to directly compare patterns of activity across the two types of axons by adjusting the data to a common scale, thereby ensuring that any observed differences or similarities are attributable to the intrinsic properties of the axons rather than differences in track lengths. By doing so, we could assess relative changes in activity levels at matched spatial bins.
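
      Concretely, this normalization amounts to expressing position as a fraction of track length and averaging activity within matched fractional bins; the sketch below illustrates the idea with hypothetical traces and an arbitrary bin count.

      ```python
      # Sketch: compare 2 m and 3 m tracks bin-for-bin via normalized position.
      import numpy as np

      def binned_by_normalized_position(position, activity, track_len, n_bins=40):
          frac = position / track_len                          # 0..1 along the track
          edges = np.linspace(0, 1, n_bins + 1)
          idx = np.clip(np.digitize(frac, edges) - 1, 0, n_bins - 1)
          return np.array([activity[idx == b].mean() for b in range(n_bins)])

      rng = np.random.default_rng(4)
      pos2m, act2m = rng.uniform(0, 200, 5000), rng.normal(0, 1, 5000)  # 2 m track
      pos3m, act3m = rng.uniform(0, 300, 5000), rng.normal(0, 1, 5000)  # 3 m track

      vta_profile = binned_by_normalized_position(pos2m, act2m, 200)
      lc_profile = binned_by_normalized_position(pos3m, act3m, 300)
      print(vta_profile.shape, lc_profile.shape)  # both (40,), directly comparable
      ```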

      Although the experiences of the animals on the different track lengths are not identical, our observations suggest that LC and VTA axon signals are not majorly influenced by variations in track length. LC axons are associated with velocity and a pre-motion initiation signal, neither of which is affected by track length. VTA axons, which also correlate with velocity, can be compared to LC axon velocity signals because mice reach maximal velocity very quickly along the track, well before the end of the 2m track. The full range of velocities is therefore captured on both track lengths. While VTA axons exhibit ramping activity as they approach the reward zone (a signal potentially modulated by track length), LC axons do not show such ramping-to-reward signals. Thus, a comparison across different track lengths is justified for this aspect of our analysis.

      To further enhance the rigor of our comparisons between axon dynamics recorded on 2m and 3m tracks, we conducted an additional analysis plotting axon activity by time to reward and actual (un-normalized) distance from reward (Supplementary Figure 4). This analysis revealed very similar signals between the two sets of axons, supporting our initial conclusions.

      We thank the reviewer for raising this important point and hope that our detailed explanation and additional analysis address their concern.

      Krishnan, L.S., Heer, C., Cherian, C., Sheffield, M.E. Reward expectation extinction restructures and degrades CA1 spatial maps through loss of a dopaminergic reward proximity signal. Nat Commun 13, 6662 (2022).

      Dong, C., Madar, A. D. & Sheffield, M.E. Distinct place cell dynamics in CA1 and CA3 encode experience in new environments. Nat Commun 12, 2977 (2021).

      Reviewer #2 (Public Review):  

      Summary:  

      The authors used 2-photon Ca2+-imaging to study the activity of ventral tegmental area (VTA) and locus coeruleus (LC) axons in the CA1 region of the dorsal hippocampus in head-fixed male mice moving on linear paths in virtual reality (VR) environments.  

      The main findings were as follows:  

      - In a familiar environment, the activity of both VTA axons and LC axons increased with the mice's running speed on the Styrofoam wheel, with which they could move along a linear track through a VR environment.  

      - VTA, but not LC, axons showed marked reward position-related activity, showing a ramping-up of activity when mice approached a learned reward position.  

      - In contrast, the activity of LC axons ramped up before the initiation of movement on the Styrofoam wheel.  

      - In addition, exposure to a novel VR environment increased LC axon activity, but not VTA axon activity.  

      Overall, the study shows that the activity of catecholaminergic axons from VTA and LC to dorsal hippocampal CA1 can partly reflect distinct environmental, behavioral, and cognitive factors. Whereas both VTA and LC activity reflected running speed, VTA, but not LC axon activity reflected the approach of a learned reward, and LC, but not VTA, axon activity reflected initiation of running and novelty of the VR environment.  

      I have no specific expertise with respect to 2-photon imaging, so cannot evaluate the validity of the specific methods used to collect and analyse 2-photon calcium imaging data of axonal activity.  

      Strengths:  

      (1) Using a state-of-the-art approach to record separately the activity of VTA and LC axons with high temporal resolution in awake mice moving through virtual environments, the authors provide convincing evidence that the activity of VTA and LC axons projecting to dorsal CA1 reflect partly distinct environmental, behavioral and cognitive factors.  

      (2) The study will help a) to interpret previous findings on how hippocampal dopamine and norepinephrine or selective manipulations of hippocampal LC or VTA inputs modulate behavior and b) to generate specific hypotheses on the impact of selective manipulations of hippocampal LC or VTA inputs on behavior.  

      Weaknesses:  

      (1) The findings are correlational and do not allow strong conclusions on how VTA or LC inputs to dorsal CA1 affect cognition and behavior. However, as indicated above under Strengths, the findings will aid the interpretation of previous findings and help to generate new hypotheses as to how VTA or LC inputs to dorsal CA1 affect distinct cognitive and behavioral functions.  

      (2) Some aspects of the methodology would benefit from clarification.  

      First, to help others to better scrutinize, evaluate, and potentially to reproduce the research, the authors may wish to check if their reporting follows the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines for the full and transparent reporting of research involving animals (https://arriveguidelines.org/). For example, I think it would be important to include a sample size justification (e.g., based on previous studies, considerations of statistical power, practical considerations, or a combination of these factors). The authors should also include the provenance of the mice. Moreover, although I am not an expert in 2-photon imaging, I think it would be useful to provide a clearer description of exclusion criteria for imaging data.

      We thank the reviewer for helping us formalize the scientific rigor of our study. There are ten ARRIVE Guidelines and we have addressed most of them in our study already. However, there is an opportunity to add detail. We have listed below all ten points and how we have addressed each one (and point out any new additions):

      (1) Experimental design - we go into great depth explaining the experimental set-up, how we used the autofluorescent blebs as imaging controls, how we controlled for different sample sizes between the two populations, and the statistical tests used for comparisons. We also carefully accounted for animal behavior when quantifying and describing axon dynamics both in the familiar and novel environments.

      (2) Sample size - we state both the number of ROIs and mice for each analysis. We have now also added the number of mice we observed specific types of activity in. 

      (3) Inclusion/exclusion criteria - The following has now been added to the Methods section: Out of the 36 NET-Cre mice injected, 15 were never recorded from, either because they failed to reach behavioral criteria or because of a lack of visible expression in axons. Out of the 54 DAT-Cre mice injected, imaging was never conducted in 36 of them for lack of expression or failure to reach behavioral criteria. Of the remaining 21 NET-Cre mice, 5 were excluded for heat bubbles, z-drift, or bleaching, while 10 DAT-Cre mice were excluded for the same reasons. This was determined by visually assessing imaging sessions and then using the registration metrics output by suite2p, which conduct a PCA on the motion-corrected ROIs and plot the first PC. If the first PC drifted substantially, to the point where no activity was apparent, the video was excluded from analysis.

      (4) Randomization - Already included in the paper is a description of random downsampling of LC axons to make statistical comparisons with VTA axons. LC axons were selected pseudo-randomly (only one axon per imaging session) to match VTA sampling statistics. This randomization was repeated 1000 times and comparisons were made against this random distribution. 

      (5) Blinding-masking - no blinding/masking was conducted as no treatments were given that would require this. We will include this statement in the next version. 

      (6) Outcomes - We defined all outcomes measured, such as those related to animal behavior and axon signaling. 

      (7) Statistical methods - None of the reviewers had any issues regarding our description of statistical methods, which we described in great detail in this version of the paper. 

      (8) Experimental animals - We have now described that DAT-Cre mice were obtained from JAX labs, and NET-Cre mice were obtained from the Tonegawa lab (Wagatsuma et al., 2018). This was absent in the initial version of the paper.

      (9) Experimental procedure - Already listed in great detail in Methods section.

      (10) Results - Rigorously described in detail for behaviors and related axon dynamics.

      Wagatsuma, A., Okuyama, T., Sun, C., Smith, L.M., Abe, K., Tonegawa, S. Locus coeruleus input to hippocampal CA3 drives single-trial learning of a novel context. Proc Natl Acad Sci USA 115, E310–E316 (2018). https://doi.org/10.1073/pnas.1714082115

      Second, why were different linear tracks used for studies of VTA and LC axon activity (from line 362)? Could this potentially contribute to the partly distinct activity correlates that were found for VTA and LC axons?  

      We thank the reviewer for pointing this out and giving us a chance to address it directly. A detailed response to this is written above for a similar comment from reviewer 1.

      Third, the authors seem to have used two different criteria for defining immobility. Immobility was defined as moving at <5 cm/s for the behavioral analysis in Figure 3a, but as <0.2 cm/s for the imaging data analysis in Figure 4 (see legends to these figures and also see Methods, from line 447, line 469, line 498)? I do not understand why, and it would be good if the authors explained this.  

This is a typo left over from before we converted velocity from rotational units of the treadmill to cm/s. It has now been corrected.
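For context, the conversion in question is a simple scaling from treadmill rotation to linear speed; a one-line sketch follows (the wheel radius below is a placeholder, not the actual hardware specification):

```python
import math

WHEEL_RADIUS_CM = 7.5  # placeholder; the real treadmill radius is not stated here

def rotational_to_linear(rotations_per_s: float) -> float:
    """Convert treadmill velocity from rotations/s to linear cm/s."""
    return rotations_per_s * 2 * math.pi * WHEEL_RADIUS_CM
```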

(3) In the Results section (from line 182) the authors convincingly addressed the possibility that less time spent immobile in the novel environment may have contributed to the novelty-induced increase of LC axon activity in dorsal CA1 (Figure 4). In addition, initially (for the first 2-4 laps), the mice also ran more slowly in the novel environment (Figure 3aIII, top panel). Given that LC and VTA axon activity were both increasing with velocity (Figure 1F), reduced velocity in the novel environment may have reduced LC and VTA axon activity, but this possibility was not addressed. Reduced LC axon activity in the novel environment could have blunted the novelty-induced increase. More importantly, any potential novelty-induced increase in VTA axon activity could have been masked by decreases in VTA axon activity due to reduced velocity. The latter may help to explain the discrepancy between the present study and previous findings that VTA neuron firing was increased by novelty (see Discussion, from line 243). It may be useful for the authors to address these possibilities based on their data in the Results section, or to consider them in their Discussion.

      We appreciate the reviewer's insightful comment regarding the potential impact of decreased velocity on novelty responses in LC and VTA axons. The decreased velocity in the novel environment could lead to a diminished novelty response in LC axons and could mask a subtle novelty signal in VTA axons. We have now included the following points in our discussion:

“In addition, as noted above, on average we did observe a velocity-associated signal in VTA axons. When mice were exposed to the novel environment, their velocity initially decreased. This would be expected to reduce the average signal across the VTA axon population relative to the higher velocity in the familiar environment. It is possible that this decrease could somewhat mask a subtle novelty-induced signal in VTA axons. Therefore, additional experiments should be conducted to investigate the heterogeneity of these axons and their activity under different experimental conditions during tightly controlled behavior.”

“As discussed above, the slowing down of animal behavior in the novel environment could have decreased LC axon activity and reduced the magnitude of the novelty signal we detected during running. The novelty signal we report here may therefore be an underestimate of its magnitude under matched behavioral settings.”

      However, it is important to note that although VTA axons, on average, showed activity modulated by velocity in a familiar rewarded environment, this relationship was largely due to the activity of two VTA axons that were strongly modulated by velocity, indicating heterogeneity within the VTA axon population in dCA1. We have highlighted this point in the discussion. We also discuss that:

      “It is possible that some VTA DA inputs to dCA1 respond to novel environments, and the small number of axons recorded here are not representative of the whole population.”

      (4) Sensory properties of the water reward, which the mice may be able to detect, could account for reward-related activity of VTA axons (instead of an expectation of reward). Do the authors have evidence that this is not the case? Occasional probe trials, intermixed with rewarded trials, could be used to test for this possibility.  

      Mice receive their water reward through a water spout that is immobile and positioned directly in front of their mouth. Water delivery is triggered by a solenoid when the mice reach the end of the virtual track. Therefore, because the water spout is immobile and the water reward is not delivered until they reach the end of the track, there is nothing for the mice to detect during their run. We have added clarifications about the water spout to the Methods and Results sections, along with appropriate discussion points.

      Additionally, we note that the ramping activity of VTA axons is still present on the initial laps with no reward (Krishnan et al., 2022), indicating that this activity is not directly related to the presence or absence of water but is instead associated with the animal’s reward expectation.

      We thank the reviewer for raising this point and hope that these clarifications address their concern.

      Reviewer #3 (Public Review):  

      Summary:  

Heer and Sheffield provide a well-written manuscript that clearly articulates the theoretical motivation to investigate specific catecholaminergic projections to dorsal CA1 of the hippocampus during a reward-based behavior. Using 2-photon calcium imaging in two groups of cre transgenic mice, the authors examine the activity of VTA-CA1 dopamine and LC-CA1 noradrenergic axons during reward seeking in a linear track virtual reality (VR) task. The authors provide a descriptive account of VTA and LC activities during walking, approach to reward, and environment change. Their results demonstrate LC-CA1 axons are activated by walking onset, modulated by walking velocity, and heighten their activity during environment change. In contrast, VTA-CA1 axons were most activated during the approach to reward locations. Together the authors provide a functional dissociation between these catecholamine projections to CA1. A major strength of their approach is the methodological rigor of 2-photon recording, data processing, and analysis approaches. These important systems neuroscience studies provide solid evidence that will contribute to the broader field of learning and memory. The conclusions of this manuscript are mostly well supported by the data, but some additional analysis and/or experiments may be required to fully support the authors' conclusions.

      Weaknesses:  

(1) During teleportation between familiar and novel environments, the authors report a decrease in the freezing ratio when combining the mice in the two experimental groups (Figure 3aiii). A major conclusion from the manuscript is the difference in VTA and LC activity following environment change; given that VTA and LC activity were recorded in separate groups of mice, did the authors observe a similar significant reduction in freezing ratio when analyzing the behavior in the LC and VTA groups separately?

In response to the comment regarding the freezing ratios during teleportation between familiar and novel environments, we have analyzed the freezing ratios and lap velocities of DAT-Cre and NET-Cre mice separately (Fig. 3Aiii). Our analysis shows that the mean lap velocities of the two groups overlap in the familiar environment and significantly decrease on the first lap of the novel environment (Fig. 3Aiii, top). For subsequent laps, the velocities in both groups are not statistically significantly different from the familiar-environment lap velocities.

Freezing ratios also show a statistically significant decrease on the first lap of the novel environment compared to the familiar environment in both groups (Fig. 3Aiii, bottom). In the NET-Cre mice, the freezing ratios remain statistically lower in subsequent laps, while in the DAT-Cre mice, the following laps show a similar trend without reaching statistical significance. This lack of statistical significance in the DAT-Cre mice is likely due to their already lower freezing ratios in the familiar environment. Overall, the data demonstrate similar behavioral responses in the two groups of mice during the switch from the familiar to the novel environment.
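As an illustration only, a per-lap comparison of this kind could be set up as in the sketch below (Python; the choice of a paired non-parametric test and all variable names are assumptions on our part, since the exact test is specified in the paper's Methods rather than here):

```python
import numpy as np
from scipy.stats import wilcoxon

def first_lap_effect(freezing_familiar, freezing_novel_lap1):
    """Paired comparison of per-mouse freezing ratios between the familiar
    environment (mean across laps) and the first lap in the novel environment.

    freezing_familiar : array of shape (n_mice, n_laps).
    freezing_novel_lap1 : array of shape (n_mice,).
    """
    baseline = freezing_familiar.mean(axis=1)          # per-mouse familiar mean
    stat, p = wilcoxon(baseline, freezing_novel_lap1)  # paired test across mice
    return baseline.mean(), freezing_novel_lap1.mean(), p
```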

(2) The authors satisfactorily apply control analyses to account for the unequal axon numbers recorded in the LC and VTA groups (e.g. Figure 1). However, given the heterogeneity of responses observed in Figures 3c, 4b and the relatively low number of VTA axons recorded (compared to LC), there are some possible limitations to the authors' conclusions. A conclusion that LC-CA1 axons, as a general principle, heighten their activity during novel environment presentation would require this activity profile to be observed in some of the axons recorded in most, if not all, LC-CA1 mice.

      We agree with the reviewer’s point. To address this issue, when downsampling LC axons to compare to VTA axons, we matched the sampling statistics of the VTA axons/mice by only selecting one LC axon from each mouse to match the VTA dataset.

      Additionally, we have now included the number of recording sessions and the number of mice in which we observed each type of activity. This information has been added to further clarify and support our conclusions.

Additionally, if the general conclusion is that VTA-CA1 axons ramp up their activity during the approach to reward, it would be expected that this activity profile was recorded in the axons of most, if not all, VTA-CA1 mice. Can the authors include an analysis to demonstrate that each LC-CA1 mouse contained axons that were activated during novel environments and that each VTA-CA1 mouse contained axons that ramped during the approach to reward?

As above, we have now added the number of mice that showed each activity type reported in the paper.

      (3) A primary claim is that LC axons projecting to CA1 become activated during novel VR environment presentation. However, the experimental design did not control for the presentation of a familiar environment. As I understand, the presentation order of environments was always familiar, then novel. For this reason, it is unknown whether LC axons are responding to novel environments or environmental change. Did the authors re-present the familiar environment after the novel environment while recording LC-CA1 activity?  

      While we did not vary the presentation order of familiar and novel environments, we recorded the activity of LC axons in some mice when exposed to a dark environment (no VR cues) prior to exposure to the familiar environment. Our analysis of this data demonstrates that LC axons are also active following abrupt exposure to the familiar environment.

      We have added a new figure showing this response (Supplementary Figure 5A) and expanded on our original discussion point that LC axon activity generally correlates with arousal, as this result also supports that interpretation.

      We thank the reviewer for highlighting this important consideration. It certainly helps with the interpretation regarding what LC axons generally encode.  

Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):  

      In addition to what has been described in the public review, I have the following recommendations:  

The sample size of DA axon recordings should be increased with the use of a single GCaMP for valid conclusions to be made about the lack of novelty-induced activity in these axons.

We have increased the n of VTA GCaMP6s axons in the familiar environment by including two axons that were recorded in the familiar rewarded condition. We have also conducted an analysis comparing GCaMP6s versus GCaMP7b, which is discussed in detail above.

      Regarding the concerns about valid conclusions of novelty-induced activity in VTA axons, we have added a comment in the discussion to tone down our conclusions regarding the lack of a novelty signal in the VTA axons. This valid concern is discussed in detail above.  

The title is currently very generic and non-informative. I recommend the use of more specific language in describing the type of behavior under investigation. It is not clear to the reviewer why 'learning' is included here.

      Original title: “Distinct catecholaminergic pathways projecting to hippocampal CA1 transmit contrasting signals during behavior and learning”

      To make it more specific to the experiments conducted here, we have changed the title to this:

      New title: “Distinct catecholaminergic pathways projecting to hippocampal CA1 transmit contrasting signals during navigation in familiar and novel environments”

      Error noted in Figure 4C legend - remove reference to VTA ROIs.  

The reference to VTA ROIs has been removed from the figure legend.

      Reviewer #2 (Recommendations For The Authors):  

      (1) The concluding sentence of the Abstract could be more specific: which distinct types of information are reflected/'signaled'/'encoded' by LC and VTA inputs to dorsal CA1?  

      The abstract has been adjusted accordingly. The new sentence is more specific: “These inputs encode unique information, with reward information in VTA inputs and novelty and kinematic information in LC inputs, likely contributing to differential modulation of hippocampal activity during behavior and learning.”

      (2) Line 46/47: The study by Mamad et al. (2017) did not quite show that VTA dopamine input to dorsal CA1 'drives place preference'. To my understanding, the study showed that suppression of VTA dopamine signaling in a specific place caused avoidance of this place and that VTA dopamine signaling modulated hippocampal place-related firing. So, please consider rephrasing.  

      Corrected, thanks for pointing this out.

(3) Legend to Figure 3AIII: 'Each lap was compared to the first lap in F . . .' Could you clarify if 'F' refers to the 'familiar environment'?

The figure legend has been changed accordingly.

      (4) Line 176: '36 LC neurons' - should this not be '36 imaged axon terminals in dorsal CA1' or something along these lines?  

      This reference has been changed to “LC axon ROIs”

      (5) Line 353: Why was water restriction started before the hippocampal window implant, if behavioral training to run for water reward only started after the implant? Please clarify.

      A sentence was added to the methods to explain that this was done to reduce bleeding and swelling during the hippocampal window implantation.  

      (6) Line 377: '. . . which took 10-14 days (although some mice never reached this threshold).' How many mice did not reach the criterion within 14 days? I think it is not accurate to say the mice 'never' reached the threshold, as they were only tested for a limited period of time.  

      We have added details of how many mice were excluded from each group and the reason why they were excluded.

      (7) Exclusion criteria for imaging data: The authors state (from line 402): 'Imaging sessions with large amounts of drift or bleaching were excluded from analysis (8 sessions for NET mice, 6 sessions for LC Mice).' What exactly were the quantitative exclusion criteria? Were these defined before the onset of the study or throughout the study?  

Imaging sessions were first qualitatively assessed by looking for disappearance or movement of structures in the Z-plane throughout the imaging FOV. Additionally, following motion correction in suite2p, we used the registration metrics, which plot the first principal component (PC) of the motion-corrected images, to assess drift, bleaching, or heat bubbles. If this variable increased or decreased greatly throughout a session, to the point where apparent activity was no longer visible in the first PC, the dataset was excluded. We have added these exclusion criteria to the Methods section.
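As a rough illustration of this criterion (not suite2p's actual registration-metric output, whose format we do not reproduce here), one can project the motion-corrected movie onto its first principal component and flag sessions whose slow drift dominates the projection:

```python
import numpy as np
from sklearn.decomposition import PCA

def session_drifts(movie, rel_tol=0.5, smooth_frames=100):
    """Flag a session as drifting when the slow component of the first-PC
    projection wanders by more than `rel_tol` of the projection's full range.

    movie : array of shape (n_frames, n_pixels), motion-corrected frames
        flattened per frame. The thresholds here are illustrative, not the
        values used in the paper.
    """
    pc1 = PCA(n_components=1).fit_transform(movie)[:, 0]
    kernel = np.ones(smooth_frames) / smooth_frames
    slow = np.convolve(pc1, kernel, mode="valid")  # keep slow drift, drop fast activity
    return (slow.max() - slow.min()) > rel_tol * (pc1.max() - pc1.min())
```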

      Reviewer #3 (Recommendations For The Authors):  

Please provide a justification or rationale for having two different criteria for immobility (<5 cm/s) and freezing (<0.2 cm/s). If VTA and LC axon activities are different between these two velocities, please provide some commentary on this difference.

This is a typo left over from before we converted velocity from rotational units to cm/s; it has now been corrected.

    2. eLife assessment

      This manuscript provides important results that assessed the contribution of two catecholaminergic projections to the hippocampus during environment-guided reward behavior. The authors use 2-photon imaging in the hippocampus of behaving mice to provide solid evidence that there are dissociable roles of dopamine and norepinephrine in this structure. Although of great interest to the field of learning and memory, the results would be strengthened by additional data collected from dopaminergic projections to the hippocampus.

    3. Reviewer #1 (Public Review):

      Summary:

Heer and Sheffield used 2-photon imaging to dissect the functional contributions of convergent dopamine and noradrenaline inputs to dorsal hippocampal CA1 in head-restrained mice running down a virtual linear path. Mice were trained to collect a water reward at the end of the track, and on test days, calcium activity was recorded from dopamine (DA) axons originating in the ventral tegmental area (VTA, n=7) and noradrenaline axons from the locus coeruleus (LC, n=87) under several conditions. When mice ran laps in a familiar environment, VTA DA axons exhibited ramping activity along the track that correlated with distance to reward and, to some extent, velocity, while LC input activity remained constant across the track but correlated invariantly with velocity and time to motion onset. A subset of recordings taken when the reward was removed showed diminished ramping activity in VTA DA axons but no changes in the LC axons, confirming that DA axon activity is locked to reward availability. When mice were subsequently introduced to a new environment, the ramping-to-reward activity in the DA axons disappeared, while LC axons showed a dramatic increase in activity lasting 90 s (6 laps) following the environment switch. In the final analysis, the authors sought to disentangle LC axon activity induced by novelty from behavioral changes induced by novelty by removing periods in which animals were immobile, and established that the activity observed in the first 2 laps reflected a novelty-induced signal in LC axons.

The revised manuscript included additional evidence of an increased (but transient) signal in LC axons after a transition to a novel environment during periods of immobility, and also that a change from a dark to a familiar environment induces a peak in LC axon activity, showing that LC input to dCA1 may not solely signal novelty.

      Strengths:

      The results presented in this manuscript provide insights into the specific contributions of catecholaminergic input to the dorsal hippocampus CA1 during spatial navigation in a rewarded virtual environment, offering a detailed analysis at the resolution of single axons. The data analysis is thorough and possible confounding variables and data interpretation are carefully considered.

      The authors have addressed my concerns in a thorough manner. The reviewer also appreciates the increased transparency of reporting in the revised manuscript.

      Weaknesses:

Listed below are some remaining comments.

The increase in LC activity with any change in environment (from familiar to novel, or from dark to familiar) suggests that LC input acts not solely as a novelty signal, but as a general arousal or salience signal in response to environmental changes. Based on this, I have a couple of questions:

• Is the overall claim that LC input to the dHC signals novelty still valid based on the observed findings, as claimed throughout the manuscript?

• Would the omission of a reward be considered a salient change in the environment that activates LC signals, or is the LC not involved with processing reward-related information? Has the activity of LC and VTA axons been analysed in the seconds following reward presentation and/or omission?

    4. Reviewer #2 (Public Review):

      Summary:

      The authors used 2-photon Ca2+-imaging to study the activity of ventral tegmental area (VTA) and locus coeruleus (LC) axons in the CA1 region of the dorsal hippocampus in head-fixed male mice moving on linear paths in virtual reality (VR) environments.

The main findings were as follows:

- In a familiar environment, activity of both VTA axons and LC axons increased with the mice's running speed on the Styrofoam wheel, with which they could move along a linear track through a VR environment.
- VTA, but not LC, axons showed marked reward position-related activity, showing a ramping-up of activity when mice approached a learned reward position.
- In contrast, activity of LC axons ramped up before initiation of movement on the Styrofoam wheel.
- In addition, exposure to a novel VR environment increased LC axon activity, but not VTA axon activity.

      Overall, the study shows that the activity of catecholaminergic axons from VTA and LC to dorsal hippocampal CA1 can partly reflect distinct environmental, behavioral and cognitive factors. Whereas both VTA and LC activity reflected running speed, VTA, but not LC axon activity reflected the approach of a learned reward and LC, but not VTA, axon activity reflected initiation of running and novelty of the VR environment.

      I have no specific expertise with respect to 2-photon imaging, so cannot evaluate the validity of the specific methods used to collect and analyse 2-photon calcium imaging data of axonal activity.

      Strengths:

      (1) Using a state-of-the-art approach to record separately the activity of VTA and LC axons with high temporal resolution in awake mice moving through virtual environments, the authors provide convincing evidence that activity of VTA and LC axons projecting to dorsal CA1 reflect partly distinct environmental, behavioral and cognitive factors.

      (2) The study will help a) to interpret previous findings on how hippocampal dopamine and norepinephrine or selective manipulations of hippocampal LC or VTA inputs modulate behavior and b) to generate specific hypotheses on the impact of selective manipulations of hippocampal LC or VTA inputs on behavior.

      Weaknesses:

      (1) The findings are correlational and do not allow strong conclusions on how VTA or LC inputs to dorsal CA1 affect cognition and behavior. However, as indicated above under Strengths, the findings will aid the interpretation of previous findings and help to generate new hypotheses as to how VTA or LC inputs to dorsal CA1 affect distinct cognitive and behavioral functions.

(2) Some aspects of the methodology would benefit from clarification.

First, to help others to better scrutinize, evaluate and potentially to reproduce the research, the authors may wish to check if their reporting follows the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines for the full and transparent reporting of research involving animals (https://arriveguidelines.org/). For example, I think it would be important to include a sample size justification (e.g., based on previous studies, considerations of statistical power, practical considerations or a combination of these factors). The authors should also include the provenance of the mice. Moreover, although I am not an expert in 2-photon imaging, I think it would be useful to provide a clearer description of exclusion criteria for imaging data (see below, Recommendations for the authors).

Second, why were different linear tracks used for studies of VTA and LC axon activity (from line 362)? Could this potentially contribute to the partly distinct activity correlates that were found for VTA and LC axons?

Third, the authors seem to have used two different criteria for defining immobility. Immobility was defined as moving at <5 cm/s for the behavioral analysis in Fig. 3a, but as <0.2 cm/s for the imaging data analysis in Fig. 4 (see legends to these figures and also see Methods, from line 447, line 469, line 498). I do not understand why, and it would be good if the authors explained this.

      (3) In the Results section (from line 182) the authors convincingly addressed the possibility that less time spent immobile in the novel environment may have contributed to the novelty-induced increase of LC axon activity in dorsal CA1 (Fig. 4). In addition, initially (for the first 2-4 laps), the mice also ran more slowly in the novel environment (Fig. 3aIII, top panel). Given that LC and VTA axon activity were both increasing with velocity (Fig. 1F), reduced velocity in the novel environment may have reduced LC and VTA axon activity, but this possibility was not addressed. Reduced LC axon activity in the novel environment could have blunted the novelty-induced increase. More importantly, any potential novelty-induced increase in VTA axon activity could have been masked by decreases in VTA axon activity due to reduced velocity. The latter may help to explain the discrepancy between the present study and previous findings that VTA neuron firing was increased by novelty (see Discussion, from line 243). It may be useful for the authors to address these possibilities based on their data in the Results section, or to consider them in their Discussion.

      (4) Sensory properties of the water reward, which the mice may be able to detect, could account for reward-related activity of VTA axons (instead of an expectation of reward). Do the authors have evidence that this is not the case? Occasional probe trials, intermixed with rewarded trials, could be used to test for this possibility.

REVIEW OF THE REVISED MANUSCRIPT

I thank the authors for their responses addressing some of the weaknesses I raised in my original comments.

Regarding their clarification of some methodological issues [Point 2) above], I have a few additional comments:

- I appreciate that the authors clearly state the sample sizes contributing to the data. However, sample size justifications (e.g., based on previous studies, considerations of statistical power, practical considerations or a combination of these factors) are still lacking.
- It is good that the authors have now clearly indicated how many mice they excluded due to lack of GCaMP expression or due to failure to reach the behavioral criteria. They also indicated that they discarded some of the collected datasets, based on the visual assessment of imaging sessions and the registration metrics output by suite2p. I appreciate that this may be common practice (although I am not using 2-photon imaging myself). However, I note that to minimize the risk of experimenter bias and improve reproducibility, it would be preferable to have more clearly defined quantitative criteria for such exclusions.
- The authors clarified in their response why they used two different linear tracks for their studies of VTA and LC axon activity. I would encourage them to include this clarification in the manuscript. From the authors' response, I understand that they chose the different track lengths to facilitate comparison to previous studies involving LC and VTA axon recordings. However, given that the present paper aimed to compare LC and VTA axon recordings, the use of different track lengths remains a limitation of the present paper.

    5. Reviewer #3 (Public Review):

      Summary:

Heer and Sheffield provide a well-written manuscript that clearly articulates the theoretical motivation to investigate specific catecholaminergic projections to dorsal CA1 of the hippocampus during a reward-based behavior. Using 2-photon calcium imaging in two groups of cre transgenic mice, the authors examine the activity of VTA-CA1 dopamine and LC-CA1 noradrenergic axons during reward seeking in a linear track virtual reality (VR) task. The authors provide a descriptive account of VTA and LC activities during walking, approach to reward, and environment change. Their results demonstrate LC-CA1 axons are activated by walking onset, modulated by walking velocity, and heighten their activity during environment change. In contrast, VTA-CA1 axons were most activated during the approach to reward locations. Together the authors provide a functional dissociation between these catecholamine projections to CA1. A major strength of their approach is the methodological rigor of the 2-photon recording, data processing, and analysis approaches used to accommodate their unequal LC-CA1 and VTA-CA1 sample sizes. These important systems neuroscience studies provide solid evidence that will contribute to the broader field of navigation and memory.

      Weaknesses:

The conclusions of this manuscript are mostly well supported by the data. However, increasing the sample size of the VTA-CA1 group and using experimental methods that are identical between the LC-CA1 and VTA-CA1 groups would help to fully support the authors' conclusions.

1. It really depends on how the organization's legal counsel interprets the laws and how risk-averse they are. Some organizations might say only Germany requires double opt-in, while others also include Austria and Switzerland. Some organizations might say the US operates under "everyone is opted in until they opt out," while others might say everyone needs to opt in, regardless of country.


    1. And everything that I learn, I learn for a particular task, and once it’s done, I immediately forget it

      Terribly relatable :-)

    2. Cohesion and coherence may not exist in his notes for us as distant viewers of them, but this doesn’t mean that they do not exist for him while using his box of notes.

Internal models/schemas of slipbox use take precedence over external models/schemas of slipbox use (indexing, addressing, etc.). This implies Luhmann may have had tacit "handling" experiences with his slipbox that are unknown to us even today.

    3. They neither require internal “cohesion nor coherence” in their systems which are direct extensions of their minds where that cohesion and coherence are stored.

      General definition of Zettelkasten as a collection of notes that serves to extend/reflect one's thinking and/or memory, networked or not

    4. What’s the intention behind finding these people?

      Less about finding people, more about finding their work -- namely whether the Zettelkasten does or does not aid productivity. Many people have goals for their slipbox -- personal learning, theory-crafting, information organization, etc. -- but to what extent have they actually attained these goals?

1. What had our mothers been doing then that they had no wealth to leave us?

I doubt they had the opportunity to.

2. , if anything, a little faster than before, because it was now evening (seven twenty-three to be precise) and a breeze (from the southwest to be exact) had risen

Stop, I love adding precise details when I write too. But it's also so interesting to think about how she's seen and experienced all of this, yet the exact scenery is gone and she's dead, and here I am reading about it.

3. back into the past, before the war indeed, and to set before my eyes the model of another luncheon party held in rooms not very far distant from these; but different.

      another anecdote


    1. Presidential campaigns increasingly are conducted as performances before a sympathetic audience, one that is invited to watch and listen but not to question or respond.

      broadly true, especially for Donald J. Trump

    2. Americans deserve a campaign that tests the strengths and weaknesses of the candidates; that highlights their differences and allows scrutiny of their plans; that motivates people to vote by giving them a clear account of how their choice in this election will affect their lives.

      Definitely this, but the majority of the right doesn't care about plans, choices, or strengths and weaknesses. They've bought into a cult of personality that washes out the ability to make informed decisions.

    1. application of computational approaches to support archival practice for the creation and preservation of reliable and authentic records and archives, investigating the use of such methods for (partially) automating or assisting archival processes such as appraisal, description, and more.
2. some theoretical and practical issues around infrastructure; new needs in the education and training of future (digital) archivists

Translate the following graphic.

    1. the functions

How to discern correlation vs. causation?

    2. farm-level data

Need to measure farm-specific data?

* Existing microbiome(s)
* Existing distribution of "locked up" nutrients (P, N, \(\mathrm{Mg}^{2+}\), etc.)
* Weather forecasting?

1. These strategies are intended to correct chronic attendance problems that are leading to poor student performance

We have students who have chronic health forms which permit them to miss a set number of days a month. Unfortunately, some families abuse this. We have families who will have doctors sign off for 5 days a month. If you think about that, it is at least one day a week, or an entire week each month. They are missing so much instruction.

2. Attendance is a shared responsibility among the schools, parents and students

Ever since we started having Communities in Schools, the attendance rate has increased. They work consistently to get students to school. If a student is refusing to come to school, they will go to the house to get them. They work alongside our Truancy Probation Officer.

    3. Because of changes in assessment vendors,

Since I started teaching twelve years ago, there have been a couple of different testing formats. This does not help keep scores consistent or help students learn to navigate the system.

4. English learners (EL) make up less than 1% of the West Virginia public school population and are geographically dispersed

      We have never had English Learners until this school year. We have a family who speaks Spanish and one that speaks Arabic. This has been a huge adjustment for our teachers.

5. it is expected that improvements beyond the near-term 2020 target of 90% will be more hard-fought and incremental.

I am wondering if this would have been true if COVID had not happened. I know we have had a difficult time getting students to come back in person since COVID. We have also seen an increase in social anxiety, which has also contributed to students just dropping off.

    6. Students with disabilities 13.9% 57.0% 43.1% 3.3%

      The performance gap will continue to be high until the state finally creates a standardized test for students who are in between the GSA and ASA.

7. the WVDE must sharpen its focus on its role in developing a knowledgeable, skilled, and credentialed workforce capable of attracting and retaining businesses to grow the State’s economy

Bringing jobs to the state is beneficial for increasing school enrollment. Tax revenues also increase, which helps the school district financially.

8. forty percent (40%) will require at least a high school diploma or General Education Diploma (GED)

      We have seen an increase in students dropping out. We encourage them to take the GED but some of them are just done. I feel as if students who were out during COVID are the ones who are dropping out.

    9. vocational associates

I know our vocational program has grown in both offerings and enrollment. The gas and oil industry has brought a lot of vocational jobs, and students are seeing the benefits of working in that industry.

10. The Pre-K through grade 12 student population of West Virginia consists of slightly more than 270,000 students and has been declining slightly for five years

      This does not surprise me. As a state we are not helping enrollment. The Hope Scholarship is only taking away from the number of students enrolled in public schools.

    1. Ask phase, every student is asked about their interests, strengths, and aspirations by advisors or admissions counselors.

      Week 1 Discussion or writing-to-learn activity: In the Message field, include any academic or personal information (experiences, interests, talents, accomplishments, goals, etc.) that would be important to a classmate or instructor meeting you for the first time. Write a minimum of two full paragraphs (6-10 sentences each). Pay special attention in your message to specific short- and long-term goals (or "...to your interests, strengths, and aspirations"?). What do you hope to achieve this semester? What do you hope to achieve by the time you complete your degree at MC (or beyond)? Your paragraph(s) should be well developed (with rich descriptions and details to help your readers get a sense of who you are).

    2. Ask-Connect-Inspire-Plan framework will benefit students generally, but it is likely to be especially beneficial to students who have not been served well by the education system in the past and may not have had guidance and support to explore their interest and develop a plan from family and peers. Thus, rethinking the new student experience using the program onboarding model is important not only to increase student persistence, but also to ensure more equitable outcomes.

      Equity a key component of onboarding at MC.

    3. The plan should show students what courses they need to take and in what timeframe to complete a program aligned with their goals for employment and further education.

      End-of-semester assignment: Complete the SAPC and submit

    4. In our program onboarding model and as a follow-up to the “ask” stage of the process, students are introduced to faculty, students, and others with whom they share interests. At the same time, faculty and program area personnel actively reach out to students beginning in orientation through activities designed to help expand students’ understanding of academic and career opportunities in their fields (including liberal arts and sciences) and to recruit students who share interests.

      See STSU 100 assignment. How many students take STSU? Who takes it? How should these activities align with ENGL 101+ SAPC assignment?

    5. conversations with advisors and faculty about what programs would enable them to pursue their interests and goals. Typically, students are encouraged to consult the college website, take a career assessment, and generally find their own way into a major

      Advising Day assignment: Consult the Montgomery College website and links on SAPC form (program advisors, etc.) to help you find a major.

    6. We conceive of “onboarding” as a process that may take students’ entire first year: It starts before a student even enrolls and extends until they have chosen a program direction, passed program foundation courses, and created an educational plan. The goal is to help students choose a program of study; connect with a community of faculty, students, and others with similar interests; take a course that “lights their fire” for learning; and build a full-program educational plan that shows the courses and timeline for completion.

      Align this onboarding process with existing SAPC?

    1. Eventually, there will be different ways of paying for different levels of quality. But today there some things we can do to make better use of the bandwidth we have, such as using compression and enabling many overlapping asynchronous requests. There is also the ability to guess ahead and push out what a user may want next, so that the user does not have to request and then wait. Taken to one extreme, this becomes subscription-based distribution, which works more like email or newsgroups. One crazy thing is that the user has to decide whether to use mailing lists, newsgroups, or the Web to publish something. The best choice depends on the demand and the readership pattern. A mistake can be costly. Today, it is not always easy for a person to anticipate the demand for a page. For example, the pictures of the Schoemaker-Levy comet hitting Jupiter taken on a mountain top and just put on the nearest Mac server or the decision Judge Zobel put onto the Web - both these generated so much demand that their servers were swamped, and in fact, these items would have been better delivered as messages via newsgroups. It would be better if the ‘system’, the collaborating servers and clients together, could adapt to differing demands, and use pre-emptive or reactive retrieval as necessary.

      It's hard to make sense of these comments in light of TBL's frequent claims that the Web is foremost about URLs. (Indeed, he starts out this piece describing the Web as a universal information space.) It can really only be reconciled if you ignore that and understand "the Web" here to mean HTML over HTTP.

      (In any case, the remarks and specific examples are now pretty stale and out of date.)

    1. species in our inoculant products are selected specifically for their ability to form close interactions with the plant’s roots and thrive in a hydroponic based environment.

      Any fungi? Don't need because no distance to transport substances? If any, mostly glomus hyphal networks?

    2. Any pathogen/disease resistance assistance to plant?

    3. nitrogen fixation

      any root nodules?

    1. ABSTRACT

      Very exciting technology, great work to the authors!

    2. PERC is gentle on cells, permitting sequential editing of multiple loci. As previously reported, this is one way to minimize chromosomal translocations1

      This is a really exciting implication that I hadn't considered before!

    3. Fig. 2 and Fig. 3

I found these figures a little difficult to fully understand. Here are my small notes about what would improve the presentation; again, please take it or leave it:

1. Colors: It would be helpful to have a legend that explains the different colors used (light blue, dark blue, white, etc.). It took me a while to see the triangle vs. circle for washed and unwashed, but I'm not sure how the colors connect.

2. Some stats would be helpful here! It can be difficult to assess the differences just by eye. It seems that sometimes washed vs. unwashed are different in terms of edited cell yield (like in the HSPCs) but the same for other metrics like % editing and % NHEJ. It would be useful to include some comparisons in the figure and indicate whether the differences are statistically significant.

3. This might just be a bioRxiv figure display issue, but in 2c, 2f, 3f, 4c, and 4f the white NT bars are missing some of their outlines.

    4. INTRODUCTION

This is a really helpful introduction to the technology. I appreciate the level of detail provided here; it's clear the authors are being very thoughtful about enabling others to use this approach. Overall I found the description of the technology to be clear and rigorous. I left some comments about details I would wonder about as a non-expert user if I were trying to get something similar off the ground; please take it or leave it.

    5. The peptide-only condition can be used to test a given cell type’s sensitivity to the peptide, although we note that peptide-mediated toxicity can be exacerbated by the absence of RNP cargo.

      How do you measure cell sensitivity? Is this overall viability, or are there other important metrics to consider?

    6. Assessing editing efficiency

      I really appreciate your step-by-step breakdown of how to evaluate editing success! It could be useful to also explain how to evaluate off-target editing. Do you have a routine approach for this?

    7. For synthesis, we recommend ≥95% purity as assessed by HPLC, which is also used for purification. It has been suggested that an acid exchange step (using HCl or acetate to displace trifluoroacetic acid)

So helpful! Thanks for including this detail.

    8. Two PERC peptides are commercially available: INF7TAT-A5K (A5K)15 and INF7TAT-P55 (P55).

      Where are they commercially available from?

    9. T cells and HSPCs are relatively fragile and generally resistant to transfection

I can't tell if you are including LNPs under the transfection umbrella or not. I naively would assume yes, but am not an expert. Below you talk about LNPs being a viable option for T cell/HSPC delivery, but this sentence up top suggests otherwise. If you are referring to a different type of transfection reagent here being non-ideal for T cell/HSPC delivery, can you specify?

    10. Spacing PERC delivery steps by ≥ 2 d allows each RNP to be metabolized by the cell26,

      Can you clarify if the cells are dividing during this time frame? Or if this is due to the overall stability of the RNP in the cell itself.

11. INF7TAT peptide had served as the basis for a prior screen for lytic activity in red blood cells as a proxy for endosomal escape20. Our screen of INF7TAT variants in T cells15 identified A5K as well as three additional activating INF7TAT substitutions (G1K, G20L and Y22N) that we have now incorporated in a single peptide: INF7TAT-P55 (henceforth P55)

      Could you briefly explain a little bit more about your peptide reagent? How do the mutations impact activity?

      Also, why aren't the peptides lytic in this context? Do the mutations reduce this activity?

    1. on

      Throughout the paper text, add comments with specific critiques and takeaways from the paper. Some questions to consider: What is the gap in the literature? Are there related papers not included? Did you learn something useful from the way the problem was framed? What is the research question and how novel is it? Is the overall research design appropriate for the research questions? Are there specific limitations in the methods? What important assumptions are embedded in the results? What results did you find informative or surprising? Are there sections of text you think the authors didn't present clearly? Is each conclusion supported by the analysis? How do the key takeaways change the state-of-the-art in this field? What follow on studies would you like to see?

    2. d, UK

      No highlighting without annotations, please!

    3. Abs

      Put comments with overall strengths, weaknesses, and takeaways in the abstract


    1. If

We will read this article. Provide as much information as we can, like the name of the OER. Same link, descriptive.

    1. the Perseverance rover had found a rock with compelling evidence of organic molecules and with intriguing markings that, if they were seen on Earth, would be consistent with biological activity in the past.

      Cool!

      • Overview of Graphs in Computation:

        • Graphs have been successful in domains like shader programming and signal processing.
        • Computation in these systems is usually expressed on nodes with edges representing information flow.
        • Traditional models often have a closed-world environment where node and edge types are pre-defined.
      • Introduction to Scoped Propagators (SPs):

        • SPs are a programming model embedded within existing environments and interfaces.
        • They represent computation as mappings between nodes along edges.
        • SPs reduce the need for a closed environment and add behavior and interactivity to otherwise static systems.
      • Definition and Mechanics:

        • A scoped propagator consists of a function taking a source and target node, returning a partial update to the target.
        • Propagation is triggered by specific events within a defined scope.
        • Four event scopes implemented: change (default), click, tick, and geo.
        • Syntax: scope { property1: value1, property2: value2 }.
      • Event Scopes and Syntax:

• Example: click {x: from.x + 10, rotation: to.rotation + 1} updates target properties when the source is clicked (see the runnable sketch after this summary).
      • Demonstration and Practical Uses:

        • SPs enable the creation of toggles and counters by mapping nodes to themselves.
        • Layout management is simplified as arrows move with nodes.
        • Useful for constraint-based layouts and debugging by transforming node properties.
        • Dynamic behaviors can be created using scopes like tick, which utilize time-based transformations.
      • Behavior Encoding and Side Effects:

        • All behavior is encoded in arrow text, allowing for easy reconstruction from static diagrams.
        • Supports arbitrary JavaScript for side effects, enabling creation of utilities or tools within the environment.
      • Cross-System Integration:

        • SPs can cross boundaries of siloed systems without editing source code.
        • Example: mapping a Petri Net to a chart, demonstrating flexibility in creating mappings between unrelated systems.
      • Complex Example:

        • A small game created with SPs includes joystick control, fish movement, shark behavior, toggle switch, death state, and score counter.
        • The game uses nine arrows to propagate behavior between different node types.
      • Comparison to Prior Work:

        • Differences from Propagator Networks: propagation along edges, scope conditions, arbitrary stateful nodes.
        • Previous work like Holograph influenced the use of the term "propagator."
      • Open Questions and Future Work:

        • Unanswered questions include function reuse, modeling side effects, multi-input-multi-output propagation, and applications to other domains.
        • Formalization of the model and examination of real-world usage are pending tasks.

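To make the mechanics above concrete, here is a minimal sketch of the propagator update step (Python used for illustration; the real system embeds JavaScript in a host environment, and the node/edge representation here is my own assumption):

```python
from dataclasses import dataclass
from typing import Callable

Node = dict  # a node is just a property bag, e.g. {"x": 0, "rotation": 0}

@dataclass
class Propagator:
    scope: str                        # "change", "click", "tick", or "geo"
    fn: Callable[[Node, Node], dict]  # (source, target) -> partial update

@dataclass
class Edge:
    source: Node
    target: Node
    propagator: Propagator

def fire(event_scope: str, edges: list[Edge]) -> None:
    """Run every propagator whose scope matches the event,
    merging the returned partial update into the target node."""
    for edge in edges:
        if edge.propagator.scope == event_scope:
            edge.target.update(edge.propagator.fn(edge.source, edge.target))

# The paper's example: click {x: from.x + 10, rotation: to.rotation + 1}
src, tgt = {"x": 0}, {"x": 0, "rotation": 0}
edges = [Edge(src, tgt, Propagator("click",
        lambda f, t: {"x": f["x"] + 10, "rotation": t["rotation"] + 1}))]
fire("click", edges)  # tgt is now {"x": 10, "rotation": 1}
```

A toggle or counter falls out of the same step by pointing an edge from a node back to itself, as the summary notes.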

    1. Renaissance

Renaissance was not just fine arts but also other fields, e.g. science, literature, civic affairs, theology, medicine. Also not the first intellectual revival after the fall of Rome: cf. the Carolingian renaissance and the 12th-century renaissance.

2. The Middle Ages and Renaissance.

Importance of SR background in the Middle Ages and Renaissance

    3. Specific change

Specific changes - intellectual, technological, political - and modern developments, e.g. engineering, art, literature, took place differently in different places, e.g. in Italy before outskirts like England

    4. Early modern

Early modern intellectuals reworked a lot of medieval questions and ideas but loved to disparage the period

    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.



      Reply to the reviewers

      The authors do not wish to provide a response at this time.

    2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #3

      Evidence, reproducibility and clarity

      Summary:

Parkin, an E3 ubiquitin ligase, is involved in the clearance of damaged mitochondria via mitophagy. Upon mitochondrial damage, activated Parkin ubiquitinates many mitochondrial substrates, leading to the recruitment of mitophagy effectors. However, the mechanism of substrate recognition by Parkin is still not known.

      In this manuscript, Koszela et al. utilized diverse biochemical assays and biophysical approaches, combined with AlphaFold prediction, to identify a conserved region in the flexible linker between the Ubl and RING0 domains of Parkin that recognizes mitochondrial GTPase Miro1 via a stretch of hydrophobic residues and is critical for its ubiquitination activity on Miro1. This manuscript reveals the mechanisms by which Parkin recognizes and ubiquitinates substrate Miro1, providing a biochemical explanation for the presence of Parkin at the mitochondrial membrane prior to activation by mitochondrial damage. This study also provides insights into mitochondrial homeostasis and may facilitate new therapeutic approaches for Parkinson's disease.

      Major Comments:

• The authors should expand the background introduction to include the biological function of Miro1, the domain architecture of Miro1, and more context on Miro1 K572 ubiquitination in mitophagy.
• Figure 1B is confusing. Due to the presence of various bands, it is hard to assign specific bands in each lane. In addition, there are various unlabeled bands that make things unclear. The authors should include loading controls to clearly discern pParkin, Ube1, Ube2L3, and all substrates.
• In Figure 1B, it was not possible to identify the ubiquitination bands of the E2 enzyme UBE2L3 and the E1 enzyme UBE1. Please indicate these bands on the gel.
• Since ubiquitinated Miro1 and Mfn1 are similar in molecular weight (Fig. 1b), the authors should show a western blot against the Miro1 and Mfn1 tags, as done in the supplementary information, at least for the competition assays involving both Miro1 and Mfn1.
• The conclusion that Miro1 is pParkin's preferred substrate is not convincing. In the competition assay used to show substrate preference, Miro1 is at a five-fold higher concentration than the other substrates and 25-fold higher than FANCI/D2. This would ultimately drive pParkin's interaction with Miro1. This is further highlighted by the fact that adding Mfn1 in excess has a similar effect. The competition assay should be done at equimolar concentrations of Miro1 and substrate. More convincing would be a competition assay where substrate ubiquitination is quantified at several different concentrations of Miro1.
• In Figure 1F, it is unclear what is statistically defined as "high" or "low" ubiquitination levels. Some of the changes in ubiquitination levels are extremely subtle (e.g., mitoNEET and FancI/D2 in the presence and absence of Miro1 and Mfn1). In some cases, I find it extremely difficult to tell if there is any change in the ubiquitination levels when comparing lanes containing an excess of different substrates. I would like to see band quantifications of this experiment in triplicate to support the conclusions drawn from the competition assay.
• The authors used both unmodified and phosphorylated Parkin for the crosslinking experiments and observed no difference in the intensity of the bands. However, this is not sufficient to draw any conclusion about the affinity between phosphorylated Parkin and Miro1 (which was done in lines 341-343). The authors should comment on why they did not test pParkin binding with Miro1, especially given the statement:

      "In our assays in the absence of pUb, pParkin must interact with its substrates without the action of pUb, likely through 158 transient, low affinity interactions" - The reference to Parkin115-124 as a "Substrate Targeting Region (STR)" is misleading. This would imply that this motif in Parkin is responsible for general substrate recognition when there is no direct evidence of this. In Figure 5F, the authors create a synthetic peptide based off the STR sequence. Although this sequence was effective in inhibiting the ubiquitination of Miro1, it was ineffective against Mfn1. This would indicate that Mfn1 relies on a completely different set of interactions for ubiquitination by Parkin. I suggest that the authors tone down the language in describing this region and rename this region (perhaps "Miro1 Targeting Region (MTR)"?). - The authors appear to confuse plDDT and PAE scores in Figure 5B. The PAE describes the expected positional error of each residue in the model. The plot should be colored in terms of Expected Position Error (Ångstrom), not plDDT scores.

      Minor Comments:

      • Figure 1A would benefit from a schematic showing the domain architecture. If the goal is to appreciate the length of the linker, then showing the actual amino acid length would be beneficial.
      • In Supplementary Figure 2D, the authors performed the MST experiment with His6-Smt3-tagged Parkin. The group had previously shown that the presence of the tag artificially interferes with autoubiquitination, potentially by forming intramolecular interactions. The SEC, Native Page, and ITC data of untagged Parkin with Miro1 provide sufficient evidence that the interaction between the two are weak. The authors should consider removing the MST data, since they are not congruent with the other experiments.
      • The ITC data in Supplementary Figure 2C look promising. It would be nice if the authors could try to quantify the Kd of their STR peptides for Miro1.
      • Are STR peptides 1 and/or 2 unable to inhibit ubiquitination of other Parkin substrates besides Mfn1? Do these other substrates utilize the STR for recognition? AlphaFold modeling may provide some insight into Parkin recognition of other substrates.
      • The authors should consider using AlphaFold3 to model the interaction of pParkin with Miro1 compared to unmodified Parkin.
      • Please label the protein names in Figure 4A for a better presentation.
      • Page 2, line 37. "...by a 65-residue flexible region (linker) to a unique to Parkin RING0 domain..." should be "...by a 65-residue flexible region (linker) to a unique Parkin RING0 domain...". The second "to" should be omitted.
      • Page 3, Line 48: "fulfill", not "fulfil"
      • Page 5, line 110. In sentence, "...phosphorylation at Ser65 of Parkin...", it is better to explicitly state that this phosphorylation happens on the Parkin Ubl domain.
      • Page 7, line 151. Figure 1F should be Figure 1G.
      • Page 11, line 241. In sentence "...Miro1 residues R263, R265 and D228...", do the authors mean R261 and not R265?

      Significance

      Parkin is an E3 ubiquitin ligase that is activated to ubiquitinate diverse substrates on the mitochondrial membrane in response to mitochondrial damage, thereby recruiting mitophagy effectors. This study reveals the mechanisms by which Parkin recognizes and ubiquitinates Miro1, providing insights into mitochondrial homeostasis and facilitating new therapeutic approaches for Parkinson's disease.

      Readers with a background in protein ubiquitination and mitochondrial homeostasis might be interested in this study. My expertise includes protein ubiquitination and structural biology. However, I do not have sufficient expertise to evaluate the NMR experiments in this manuscript.

    3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #2

      Evidence, reproducibility and clarity

      Koszela et al. have submitted this manuscript demonstrating the molecular mechanism of interaction between Parkin and one of its known substrates, Miro1. While the interaction and ubiquitination of Miro1 by Parkin (and its role in mitochondrial quality control) has been known since 2011, as demonstrated by the Schwarz group and others, the mechanism of action has remained unknown. The ability of Parkin to ubiquitinate multiple proteins upon mitochondrial damage has indeed led many groups to speculate that Parkin is a promiscuous E3 ligase upon activation; this manuscript tries to provide a rationale for the interaction with one of its known substrates through a combination of biochemical and biophysical studies.

      The authors demonstrate that Miro1 is efficiently ubiquitinated in in vitro biochemical assays in comparison to a few mitochondrial and non-mitochondrial proteins in an attempt to show that Miro1 is a preferred substrate for Parkin. Cross-linking coupled with mass spectrometry, SAXS and NMR experiments were used to provide compelling evidence for a direct and specific interaction between Parkin and Miro1. Molecular modelling using Colabfold and biochemical assays with mutants of the proposed interaction site were then used to provide further proof for the specificity of the interaction. This interaction is shown to occur between the conserved a.a. 115-122 (referred to in this study as STR; located in the linker connecting the Ubl to RING0) and the EF domain of Miro1. Interestingly, the authors show that peptides corresponding to 115-122 competitively inhibit ubiquitination of Miro1 by Parkin. Overall, this article constitutes an important addition to our understanding of Parkin's mechanism of action. However, some of the key claims remain unsubstantiated, as described below.

      Major issues:

      1. In line 151 the authors claim, 'these data strongly support the hypothesis that Miro1 is the preferred substrate of pParkin...'. Arguably, the biggest issue with this study is the lack of substantial proof that Miro1 is the preferred parkin substrate in a cellular or physiological context. This claim cannot be made based on a biochemical assay with three other proteins. The Harper group has performed in-depth proteomics studies on the kinetics of Parkin-mediated ubiquitination and proposed that VDACs and Mfn2 (among a few others) are most efficiently ubiquitinated upon mitochondrial damage in induced neurons (Ordureau et al, 2018, 2020). Interestingly, neither of these papers has been mentioned by the authors in this manuscript. The Trempe group has shown that Mfn2 is efficiently targeted by Parkin through mitochondrial reconstitution assays and proximity ligation assays (Vranas et al, 2022). The authors need to substantiate their claim through cellular or mitochondrial assays to prove that Miro1 is the preferred physiological substrate of Parkin. Cellular experiments also account for cellular abundance and proximity of Parkin to the substrate, which is not possible in biochemical assays of the kind presented here. In the absence of strong experimental proof for this claim, these claims should be tempered to Miro1 being "the preferred substrate compared to the other proteins in this assay", and the manuscript should focus more on the molecular mechanism of interaction between Miro1 and Parkin.
      2. In addition to the point above, the authors do not describe the rationale for specifically choosing Mfn1 and MitoNEET for their comparison with Miro1 as substrates. Interestingly, Miro1, MitoNEET and Mfn1 are not among the most efficiently ubiquitinated substrates of Parkin (Ordureau et al, 2018). Additionally, the authors have used a construct of Mfn1 that lacks the full HR1 domain for their assays. Previously, it has been shown that the HR1 of mitofusins is targeted by Parkin (McLelland et al. 2018). Can the authors prove that their Mfn1 construct is as efficiently ubiquitinated as full-length Mfn1 by Parkin? If it is not possible to obtain soluble full-length Mfn1 or other membrane proteins for these assays, then I strongly recommend that the authors perform mitochondrial reconstitution assays, as others have done previously (Vranas et al, 2022), and use this opportunity to also report the ubiquitination kinetics of multiple mitochondrial substrates compared to Miro1 to make a more compelling case for substrate preference.
      3. The authors show that both pParkin-Miro1 and Parkin-Miro1 complexes can be captured by chemical cross-linking. It is well-established in the field that pUbl binds to RING0 (Gladkova et al, 2018) (Sauve et al, 2018) while non-phosphorylated Ubl binds RING1 (Trempe et al, 2013). The Komander group has also shown that the ACT (adjacent to the STR) element binds RING2 in the activated Parkin structure (Gladkova et al, 2018). This suggests that the STR could occupy different positions in Parkin and pParkin. The authors have only reported the cross-link/MS data and model of the Parkin-Miro1 complex. Arguably, the pParkin-Miro1 data is just as, if not more, relevant given that pParkin represents the activated form of the ligase. The authors need to robustly establish that Miro1 binds to the STR element in both cases by demonstrating the following:

      A. Mass spectrometry data from cross-linked pParkin-Miro1 complex suggesting the same interaction site.

      B. Colabfold modelling with the pParkin structure to show that Miro1 would bind to the same element.

      4. Does Parkin only bind to Miro1, or can it bind to Miro2 as well? Are there differences between the binding site and Ub target sites between the two proteins? The authors should also show experimentally whether both proteins get ubiquitinated as efficiently by Parkin and whether the STR element is involved in recognizing both proteins. Interestingly, the Harper group reports that Miro2 gets more efficiently ubiquitinated than Miro1 (Ordureau et al, 2018).

      5. In Figure 5D, the level of unmodified Miro1 seems to be similar in assays with WT or I122Y Parkin, though the former seems to form longer chains while the latter forms shorter chains. Is there an explanation for this? Perhaps the authors need to perform this assay at shorter time points to show that there is more unmodified Miro1 remaining when treated with I122Y Parkin (and similarly for the L221R mutant of Miro1)? Also, why is the effect of Miro1 L221R and Parkin I122Y not additive?

      Minor comments:

      1. The authors should report the full cross-linking/MS data report from Merox including the full peptide table and decoy analysis report.
      2. The authors should report statistics for the fit of the Colabfold model to the experimental SAXS curve.
      3. Why is the Parkin-Miro1 interaction only captured by NMR and not by ITC? The authors should at least attempt to show the interaction of the STR peptide with Miro1 by an orthogonal technique like ITC.
      4. The authors should report the NMR line-broadening data quantitatively, i.e., reporting the reduction in signal intensity of the peaks upon Miro1 binding, to quantitatively demonstrate that the 115-122 peak intensity reduction is more significant than in other regions.
      5. Figures 4A (structure figure) and 4B (PAE plot) should be annotated with the names of domains and elements in Parkin and Miro1 to make these figures clearer and more informative.

      Referees cross-commenting

      I am in agreement with reviewers 1 and 3. Both of them raise valid and interesting points in their reviews.

      Specifically, I would like to highlight the following:

      1. Reviewer 1 makes a very good point (5/6) highlighting that L119A does not impair Parkin recruitment in the previously reported study. I second this concern and believe that the authors need to re-frame their discussion and make it much more nuanced with regard to the role of the Miro1-Parkin interaction in mitophagy (if any at all). Additionally, the authors should also note that previous studies in the field from the Youle group (Narendra et al, 2008) and multiple other groups have shown a complete absence of Parkin recruitment to healthy mitochondria. Parkin recruitment to healthy mitochondria hence remains a controversial idea at best, with no evidence for it outside of Parkin overexpression systems (Safiulina et al, 2018), which can also lead to artifacts. The discussion should take all major studies/observations into account to propose a more nuanced picture of the role of the Parkin-Miro1 interaction. Perhaps this interaction plays more of a role in mitochondrial quarantine (Wang et al. 2011), as suggested by the Schwarz group, than in Parkin recruitment?
      2. Reviewer 3 raises a valid concern about the lack of quantification in ubiquitination assays and alludes to the difficulty in visualizing ubiquitination of multiple proteins. That was a concern I also had but did not include in my review. Perhaps the authors should also show western blots for each of the proteins (in a time-course experiment) demonstrating the difference in ubiquitination kinetics of each protein instead of busy SDS-PAGE gels for the assay.

      Significance

      The key strength of this study is the strong biophysical evidence of a direct interaction between Parkin and Miro1 and the discovery of the Miro1 binding site on Parkin. The biophysical and biochemical experiments in this study have been well-designed and executed. The evidence for a specific interaction between Parkin and Miro1 has been provided through multiple approaches. The authors should be commended for this effort. The biggest limitation of this study is the lack of proof that Miro1 is the preferred Parkin substrate in a cellular/physiological context since in biochemical assays Parkin can ubiquitinate multiple proteins non-specifically. Substrate preference claims need to be established in more physiologically relevant experimental settings.

      Overall, the study represents a mechanistic advance in terms of our understanding of the interaction between Parkin and one of its substrates, i.e. Miro1, showing that Parkin can indeed specifically bind its substrates before targeting them for ubiquitination. This might also inspire others to investigate the molecular mechanism of action of Parkin with other substrates. This paper would likely appeal to specialized audiences, i.e. biochemists and structural biologists studying Parkin in mitochondrial quality control.

      Reviewer expertise: Expert biochemist and biophysicist with a number of highly cited works in the field of mitochondrial quality control and Parkin.

    4. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.



      Referee #1

      Evidence, reproducibility and clarity

      The manuscript by Koszela et al. explores the substrate preference of the Parkinson's disease associated ubiquitin E3-ligase, Parkin. They conclude that Miro1 is a preferred substrate of Parkin and go on to further characterize a binding site of Parkin to Miro1 using a range of biochemical approaches. This site is identical to one previously reported (see point 2). The experimental work is strong with many high-quality assays supporting their ideas; however, there are several major points that should be considered:

      1. The majority (perhaps all) of their biochemical work on Miro1 uses a truncated form of Miro1 lacking the first GTPase domain. It isn't at all clear why this is the case, as no justification is given. Moreover, functional full-length Miro1 has been purified in several papers (e.g. PMID: 33132189). If ubiquitination kinetics are different between the full-length and truncated form of Miro1, this would call into question the significance of the findings in vivo, where the truncation does not exist.
      2. As the manuscript is currently written, there are areas which do not do justice to previous work. Firstly, the authors state throughout the manuscript that no previous work has identified a binding interface between Parkin and one of its substrates, e.g., in the abstract "no substrate interaction site in Parkin has been reported". This is not true, as a recent paper already described the binding interface (DOI: 10.1038/s44318-024-00028-1). "we identify a conserved region in the flexible linker", again this interface is identical to that identified previously. Therefore, this study does not "identify" this interface. Given the timing, it is likely that this discovery has been "scooped" by the previous study, but since the present study goes much further in the biochemical characterization of the interface, it would not diminish the paper's importance to rewrite it, giving proper credit where due. Secondly, the authors spend a large part of their discussion speculating on the significance of non-activated Parkin being able to bind Miro, e.g., "Importantly, our results suggest that Parkin can interact with Miro1 independently of its activation state, as Parkin phosphorylation does not detectably increase its interaction with Miro1...". Again, this was already known, as Parkin has been shown to be recruited to mitochondria upon Miro1 overexpression in the absence of PINK1 (DOI: 10.15252/embj.201899384 and DOI: 10.15252/embj.2018100715). The further biochemical characterisation of the Parkin-Miro1 interaction is important and therefore, in both cases, the work contained within the manuscript is still a significant contribution, which should, however, be properly discussed in the light of published work.
      3. The Miro L221R mutation is used to disrupt Miro-Parkin interaction. Yet, this non-conservative mutation in the midst of a folded domain might have other effects, like affecting calcium binding or preventing the folding of the domain. This is not tested. The complementary Parkin-I122Y used for the same purpose decreases but does not abolish Parkin-Miro1 binding. Parkin-L119A is proposed to abolish the Parkin-Miro1 interaction. The inclusion of this mutant might be important to fully ascertain the role of Parkin-Miro1 binding in Miro1 ubiquitination.
      4. The effect of Miro competition on other substrates' ubiquitylation is marginal and its reproducibility is questionable (whether mitoNEET ubiquitylation is affected at all in Figure 1G is unclear; this blot is in any case overprocessed, with an unnaturally uniform grey background). If the authors wish to make a point about it, these experiments should be repeated and quantified. Moreover, since the model is that the specific Miro-Parkin interaction is involved, the mutants above should be used in the same competition experiments and shown to be unable to compete.
      5. Related to the previous point, one important factor about the kinetics that the authors do not discuss is how any of it relates to mitophagy in vivo. There very well might be a slight intrinsic preference at a given concentration of substrate and Parkin; however, how this plays out in the cell is not clear, e.g., Miro1 may be many times more, or less, abundant than Mfn1, and so a preference might not have much of an effect. So ubiquitination kinetics would need to be considered in a broader cellular context.
      6. Related to the above point, the authors state "Parkin translocation was diminished upon L119A mutation, supporting the importance of the Parkin Miro1-interacting site in mitophagy.". However, the study (not cited but which this reviewer assumes to be DOI: 10.1038/s44318-024-00028-1 since the L119A mutation has only ever been used here) finds no change in Parkin recruitment upon damage. So, it cannot be used to support "the importance of the Parkin Miro1-interacting site in mitophagy".

      Referees cross-commenting

      The reviews align well together, with many overlapping points and a similar assessment of the significance. Reviewer 2 brings in interesting points pertaining to literature that we were not aware of, explaining why we didn't make these points.

      One comment on reviewer 3's last major point:

      It does not appear that there is a confusion between pLDDT and PAE scores. The plot is coloured according to PAE (which is a residue x residue 2D matrix, figure 4B), while the protein ribbon is coloured according to pLDDT, which is a 1D per-residue confidence score.
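
      For readers less familiar with the two scores, the dimensionality difference is easy to see by plotting both. The sketch below is an editorial illustration (not from the manuscript), assuming a ColabFold-style scores JSON with "plddt" and "pae" keys; the file name and exact key names are assumptions that can vary between versions.

      ```python
      # Plot the 1D pLDDT trace and the 2D PAE matrix side by side.
      import json
      import matplotlib.pyplot as plt

      with open("model_scores.json") as fh:  # hypothetical file name
          scores = json.load(fh)

      fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
      ax1.plot(scores["plddt"])                       # 1D: per-residue confidence
      ax1.set(xlabel="residue", ylabel="pLDDT")
      im = ax2.imshow(scores["pae"], cmap="viridis")  # 2D: residue x residue matrix
      ax2.set(xlabel="residue", ylabel="residue")
      fig.colorbar(im, ax=ax2, label="Expected Position Error (Å)")
      plt.show()
      ```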

      Significance

      This study provides an in-depth in vitro assessment of a specific binding interface between the E3-ligase Parkin and one of its substrates, Miro1. Although this interface has been recently described, this study goes well beyond previous knowledge by showing that the interface is important for complete Miro ubiquitylation by Parkin, therefore showing that interactions involving unstructured linkers participate in substrate recognition by the E3-ligase. The importance of this interaction remains to be assessed in vivo. This study is of interest to researchers in basic mitochondrial dynamics, quality control and mitophagy, as well as to translational Parkinson's Disease researchers.

      The reviewer's expertise is in mitochondrial membrane dynamics.

    1. eLife assessment

      This study presents a valuable framework and findings that add to our understanding of the brain as a fractal object by observing the stability of its shape property within 11 primate species and by highlighting an application to the effects of aging on the human brain. The evidence provided is solid, but the link between brain shape and the underlying anatomy remains unclear. This study will be of interest to neuroscientists interested in brain morphology, whether from an evolutionary, fundamental or pathological point of view, and to physicists and mathematicians interested in modeling the shapes of complex objects.

    2. Reviewer #2 (Public Review):

      In this manuscript, the authors analyze the shapes of cerebral cortices from several primate species, including subgroups of young and old humans, to characterize commonalities in patterns of gyrification, cortical thickness, and cortical surface area. The authors state that the observed scaling law shares properties with fractals, where shape properties are similar across several spatial scales. One way the authors assess this is to perform a "cortical melting" operation that they have devised on surface models obtained from several primate species. The authors also explore differences in shape properties between brains of young (~20 year old) and old (~80) humans. A challenge the authors acknowledge struggling with in reviewing the manuscript is merging "complex mathematical concepts and a perplexing biological phenomenon." This reviewer remains a bit skeptical about whether the complexity of the mathematical concepts being drawn from is justified by the advances made in our ability to infer new things about the shape of the cerebral cortex.

      (1) The series of operations to coarse-grain the cortex illustrated in Figure 1 produces image segmentations that do not resemble real brains. The process to assign voxels in downsampled images to cortex and white matter is biased towards the former, as only 4 corners of a given voxel are needed to intersect the original pial surface, but all 8 corners are needed to be assigned a white matter voxel. The reason for introducing this bias (and to the extent that it is present in the authors' implementation) is not provided. The authors provide an intuitive explanation of why thickness relates to folding characteristics, but ultimately an issue for this reviewer is, e.g., for the right-most panel in Figure 2b, the cortex consists of several 4.9 mm-sided voxels and thus a >2 cm thick cortex. A structure with these morphological properties is not consistent with the anatomical organization of typical mammalian neocortex.

      (2) For the comparison between 20-year-old and 80-year-old brains, a well-documented difference is that the older age group possesses more cerebral spinal fluid due to tissue atrophy, and the distances between the walls of gyri become greater. This difference is borne out in the left column of Figure 4b. It seems this additional spacing between gyri in 80-year-olds requires more extensive down-sampling (larger scale values in Figure 4a) to achieve a similar shape parameter K as for the 20-year-olds. The authors assert that K provides a more sensitive measure (associated with a large effect size) than currently used ones for distinguishing brains of young vs. old people. A more explicit, or elaborate, interpretation of the numbers produced in this manuscript, in terms of brain shape, might make this analysis more appealing to researchers in the aging field.

      (3) In the Discussion, it is stated that self-similarity, operating on all length scales, should be used as a test for existing and future models of gyrification mechanisms. Given the lack of association between the abstract mathematical parameters described in this study and explicit properties of brain tissue and its constituents, it is difficult to envision how the coarse-graining operation can be used to guide development of "models of cortical gyrification."

      (4) There are several who advocate for analyzing cortical mid-thickness surfaces, as the pial surface over-represents gyral tips compared to the bottoms of sulci in the surface area. The authors indicate that analyses of mid-thickness representations will be taken on in future work, but this seems to be a relevant control for accepting the conclusions of this manuscript.

    3. Reviewer #3 (Public Review):

      Summary: Through a rigorous methodology, the authors demonstrated that within 11 different primates, the shape of the brain followed a universal scaling law with fractal properties. They enhanced the universality of this result by showing the concordance of their results with a previous study investigating 70 mammalian brains, and the discordance of their results with other folded objects that are not brains. They incidentally illustrated potential applications of this fractal property of the brain by observing a scale-dependent effect of aging on the human brain.

      Strengths:
      - New hierarchical way of expressing cortical shapes at different scales derived from previous report through implementation of a coarse-graining procedure
      - Investigation of 11 primate brains and contextualisation with other mammals based on prior literature
      - Proposition of tool to analyse cortical morphology requiring no fine tuning and computationally achievable
      - Positioning of results in comparison to previous works reinforcing the validity of the observation.
      - Illustration of scale-dependence of effects of brain aging in the human.

      Weaknesses:
      - The notion of cortical shape, while being central to the article, is not really defined, leaving some interpretation to the reader
      - The organization of the manuscript is unconventional, leading to mixed contents in different sections (sections mixing introduction and method, methods and results, results and discussion...). As a result, the reader discovers the content of the article along the way; it is not obvious at what stages the methods are introduced, and the results are sometimes presented and argued in the same section, hindering objectivity.

      To improve the document, I would suggest a modification and restructuring of the article such that: 1) by the end of the introduction the reader understands clearly what question is addressed and the value it holds for the community, 2) by the end of the methods the reader understands clearly all the tools that will be used to answer that question (not just the new method), 3) by the end of the results the reader holds the objective results obtained by applying these tools on the available data (without subjective interpretations and justifications), and 4) by the end of the discussion the reader understands the interpretation and contextualisation of the study, and clearly grasps the potential of the method depicted for the better understanding of brain folding mechanisms and properties.

    4. Author response:

      The following is the authors’ response to the previous reviews.

      eLife assessment:

      This study presents a valuable framework and findings that add to our understanding of the brain as a fractal object by observing the stability of its shape property within 11 primate species and by highlighting an application to the effects of aging on the human brain. The evidence provided is solid, but the link between brain shape and the underlying anatomy remains unclear. This study will be of interest to neuroscientists interested in brain morphology, whether from an evolutionary, fundamental or pathological point of view, and to physicists and mathematicians interested in modeling the shapes of complex objects.

      We have now clarified the outstanding questions regarding whether our model outputs can be related to actual primate brain anatomy, which we believe were mainly based on comments questioning the validity of outputs that appear to show thicker cortices than nature can produce.

      We address this point in more detail in the point-by-point response below, but want to address this misunderstanding directly here: Our algorithm does not produce thicker cortices with increasing coarse-graining scales; in fact, the cortical thickness never exceeds the actual cortical thickness in our outputs, but rather thins with each coarse-graining scale. In other words, we believe that our outputs are fully in line with neuroanatomy across species.

      Reviewer #2 (Public Review): 

      In this manuscript, the authors analyze the shapes of cerebral cortices from several primate species, including subgroups of young and old humans, to characterize commonalities in patterns of gyrification, cortical thickness, and cortical surface area. The authors state that the observed scaling law shares properties with fractals, where shape properties are similar across several spatial scales. One way the authors assess this is to perform a "cortical melting" operation that they have devised on surface models obtained from several primate species. The authors also explore differences in shape properties between brains of young (~20 year old) and old (~80) humans. A challenge the authors acknowledge struggling with in reviewing the manuscript is merging "complex mathematical concepts and a perplexing biological phenomenon." This reviewer remains a bit skeptical about whether the complexity of the mathematical concepts being drawn from is justified by the advances made in our ability to infer new things about the shape of the cerebral cortex.

      To allow scientists from all backgrounds to adopt these complex ideas, we have made the code for “melting” the brains and for further downstream analysis publicly available. We have now also provided a graphical user interface to allow users without substantial coding experience to run the analysis. We also believe that the algorithmic concepts are easy to understand due to the similarity to the coarse-graining procedures found in long-standing and well-accepted box-counting algorithms.

      Beyond the theoretical insight into the fractal nature of cortices, and providing an explicit and crucial link between vastly different brains that are gyrified and those that are not, we believe that the advance gained by our methods for future applications is clearly demonstrated in our proof-of-principle with a four-fold increase in effect size. For reference, an effect size of 8 would translate to an almost perfect separation of groups, i.e. an ideal biomarker with near 100% sensitivity and specificity.
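
      As a back-of-the-envelope check of that last claim (an editorial illustration, not the authors' calculation): for two equal-variance normal distributions whose means differ by Cohen's d, a midpoint decision threshold misclassifies each group with probability Φ(-d/2).

      ```python
      # Per-group misclassification rate at Cohen's d = 8 for two unit-variance
      # normal distributions, thresholding midway between the group means.
      from scipy.stats import norm

      d = 8.0
      error_rate = norm.cdf(-d / 2)  # Phi(-4)
      print(f"per-group misclassification rate: {error_rate:.2e}")  # ~3.2e-05
      ```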

      (1) The series of operations to coarse-grain the cortex illustrated in Figure 1 produces image segmentations that do not resemble real brains.

      As reiterated in our Methods and Discussion: “Note, of course, that the coarse-grained brain surfaces are an output of our algorithm alone and are not to be directly/naively likened to actual brain surfaces, e.g. in terms of the location or shape of the folds. Our comparisons here between coarse-grained brains and actual brains are purely on the level of morphometrics across the whole cortex.”

      Fig. 1 therefore serves as an explanation to the reader of the algorithmic outputs, but each melted brain is not supposed to be directly/visually compared to actual brains. Similar to algorithms measuring the fractal dimension, or the exposed surface area of a given brain, the intermediate outputs of these algorithms are not supposed to represent any biologically observed brain structures, but rather serve as an abstraction to obtain meaningful morphometrics.

      We additionally added a note to the caption of Fig. 1 to clarify this point:

      “Note that the actual sizes of the brains used for analysis are rescaled (see Methods and Fig. 3); we display all brains at an equal size here for ease of visualisation of the method.”

      Finally, we also edited the entire paper for terminology to clearly distinguish the terms of (1) the cortex as a 3D object, (2) coarse-grained and voxelised versions thereof, and (3) summary morphological measures derived from the former. When we invite comparisons in our paper between real brains and coarse-grained brains, this is always at the level of summary morphological measures, not at the level of the 3D objects/voxelisations themselves.

      The process to assign voxels in downsampled images to cortex and white matter is biased towards the former, as only 4 corners of a given voxel are needed to intersect the original pial surface, but all 8 corners are needed to be assigned a white matter voxel. The reason for introducing this bias (and to the extent that it is present in the authors' implementation) is not provided.

      This detail was in the Supplementary material, and we have now added further clarification on this specific point:

      “In detail, we assign all voxels in the grid with at least four corners inside the original pial surface to the pial voxelization. This process allows the exposed surface to remain approximately constant with increasing voxel sizes. A constant exposed surface is desirable, as we only want to gradually ‘melt’ and fuse the gyri, but not grow the bounding/exposed surface as well. We want the extrinsic area to remain approximately constant as we decrease the intrinsic area via coarse-graining; it is like generating iterates of a Koch curve in reverse, from more to less detailed, by increasing the length of the smallest line segment.

      We then assign voxels with all eight corners inside the original white matter surface to the white matter voxelization. This is to ensure integrity of the white matter, as otherwise white matter voxels in gyri may become detached from the core white matter, and thus artificially increase white matter surface area. Indeed, the main results of the paper are not very sensitive to this decision using all eight corners, vs. e.g. only four corners, as we do not directly use white matter surface area for the scaling law measurements. However, we still maintained this choice in case future work wants to make use of the white matter voxelisations or derivative measures.”

      Note, on the point of white matter integrity, that if both grey and white matter voxelisations required all 8 corners to be inside the respective mesh, there would be voxels at the grey/white matter interface not assigned to either, causing potential downstream issues.
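
      To make the corner-counting rule concrete, here is a minimal sketch of the assignment logic as we read it from the quoted passage; it is an editorial paraphrase, not the authors' code. It assumes watertight `pial` and `white` triangle meshes handled with the trimesh library, and all names are illustrative.

      ```python
      import numpy as np
      import trimesh  # pip install trimesh; .contains needs watertight meshes

      def voxelise(pial: trimesh.Trimesh, white: trimesh.Trimesh, voxel_size: float):
          lo, hi = pial.bounds  # axis-aligned bounding box of the pial surface
          origins = np.stack(np.meshgrid(
              *[np.arange(a, b, voxel_size) for a, b in zip(lo, hi)],
              indexing="ij"), axis=-1).reshape(-1, 3)
          # The 8 corner offsets of a cube of side voxel_size.
          corners = np.array([[i, j, k] for i in (0, 1)
                              for j in (0, 1) for k in (0, 1)]) * voxel_size
          pts = (origins[:, None, :] + corners[None, :, :]).reshape(-1, 3)
          n_in_pial = pial.contains(pts).reshape(-1, 8).sum(axis=1)
          n_in_white = white.contains(pts).reshape(-1, 8).sum(axis=1)
          pial_vox = origins[n_in_pial >= 4]    # >= 4 of 8 corners inside pial
          white_vox = origins[n_in_white == 8]  # all 8 corners inside white matter
          return pial_vox, white_vox
      ```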

      We further acknowledge:

      “Of course, our proposed procedure is not the only conceivable way to erase shape details below a given scale; and we are actively working on related algorithms that are also computationally cheaper. Nevertheless, the current version requires no fine-tuning, is computationally feasible and conceptually simple, thus making it a natural choice for introducing the methodology and approach.”

      The authors provide an intuitive explanation of why thickness relates to folding characteristics, but ultimately an issue for this reviewer is, e.g., for the right-most panel in Figure 2b, the cortex consists of several 4.9 mm-sided voxels and thus a >2 cm thick cortex. A structure with these morphological properties is not consistent with the anatomical organization of typical mammalian neocortex. 

      We assume the reviewer refers to Fig. 1B, with the panel at scale = 4.9 mm. We would like to point out that Fig. 1 serves as an explanation of the voxelisation method. For the actual analysis and Results, we use re-scaled brains (see Fig. 2 with the ever-decreasing brain sizes). The rescaling procedure is now described in more detail below:

      “Morphological properties, such as cortical thicknesses measured in our ‘melted’ brains are to be understood as a thickness relative to the size of the brain. Therefore, to analyse the scaling behaviour of the different coarse-grained realisations of the same brain, we apply an isometric rescaling process that leaves all dimensionless shape properties unaffected (more details in Suppl. S3.1). Conceptually, this process fixes the voxel size, and instead resizes the surfaces relative to the voxel size, which ensures that we can compare the coarse-grained realisations to the original cortices, and test if the former, like the latter, also scale according to Eqn. (1). Resizing, or more precisely, shrinking the cortical surface is mathematically equivalent to increasing the box size in our coarse-graining method. Both achieve an erasure of folding details below a certain threshold. After rescaling, as an example, the cortical thickness also shrinks with increasing levels of coarse-graining, and never exceeds the thickness measured at native scale.”
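
      The stated equivalence between shrinking the surface and enlarging the boxes can be checked numerically. The toy example below is an editorial illustration with arbitrary stand-in points, not the authors' pipeline.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      points = rng.random((10_000, 3)) * 100.0  # stand-in for surface vertices

      def box_count(pts: np.ndarray, s: float) -> int:
          # Number of distinct axis-aligned boxes of side s holding >= 1 point.
          return len(np.unique(np.floor(pts / s), axis=0))

      f, s = 0.5, 4.9  # shrink factor and fixed voxel size
      print(box_count(points * f, s))  # shrink the object, keep the boxes fixed
      print(box_count(points, s / f))  # keep the object, enlarge the boxes
      # The two counts agree (up to floating-point edge cases), mirroring the
      # equivalence described in the quoted passage.
      ```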

      We additionally added a note to the caption of Fig. 1 to clarify this point:

      “Note that the actual sizes of the brains used for analysis are rescaled (see Methods and Fig. 3); we display all brains at an equal size here for ease of visualisation of the method.”

      Finally, we also edited the entire paper for terminology to clearly distinguish the terms of (1) the cortex as a 3D object, (2) coarse-grained versions thereof, and (3) summary morphological measures derived from the former. When we invite comparisons in our paper between real brains and coarse-grained brains, this is always at the level of summary morphological measures, not at the level of the 3D objects themselves and their detailed anatomical features.

      (2) For the comparison between 20-year-old and 80-year-old brains, a well-documented difference is that the older age group possesses more cerebral spinal fluid due to tissue atrophy, and the distances between the walls of gyri become greater. This difference is borne out in the left column of Figure 4b. It seems this additional spacing between gyri in 80-year-olds requires more extensive down-sampling (larger scale values in Figure 4a) to achieve a similar shape parameter K as for the 20-year-olds. The authors assert that K provides a more sensitive measure (associated with a large effect size) than currently used ones for distinguishing brains of young vs. old people. A more explicit, or elaborate, interpretation of the numbers produced in this manuscript, in terms of brain shape, might make this analysis more appealing to researchers in the aging field.

      We already removed the main results relating to K and aging in our last revision to avoid confusion. This analysis is now only in the supplementary material, and our claim of K being a more sensitive measure for age and ageing – whilst still true – will be presented in more detail in a series of upcoming papers.

      (3) In the Discussion, it is stated that self-similarity, operating on all length scales, should be used as a test for existing and future models of gyrification mechanisms. Given the lack of association between the abstract mathematical parameters described in this study and explicit properties of brain tissue and its constituents, it is difficult to envision how the coarse-graining operation can be used to guide development of "models of cortical gyrification."

      We have clarified in more detail what we meant originally in the Discussion:

      “Finally, this dual universality is also a more stringent test for existing and future models of cortical gyrification mechanisms at relevant scales, and one that moreover is applicable to individual cortices. For example, any model that explicitly simulates a cortical surface as an output could be directly coarse-grained with our method, and the morphological trajectories can be compared with those of actual human and primate cortices. The simulated cortices would only be ‘valid’ in terms of the dual universality if they also produce the same morphological trajectories.”

      However, we agree with the reviewer that our paper could be misread as demanding direct comparisons of each coarse-grained brain with an actual brain, and we have now added the following text to clarify that this is not our intention for the proposed method or outputs.

      “Note, we do not suggest directly comparing coarse-grained brain surfaces with actual biological brain surfaces. As we noted earlier, the coarse-grained brain surfaces are an output of our algorithm alone and not to be directly/naively likened to actual brain surfaces, e.g. in terms of the location or shape of the folds. Our comparisons here between coarse-grained brains and actual brains are purely on the level of morphometrics across the whole cortex.”

      Indeed, the dual universality imposes restrictive constraints on the possible shapes of real cortices, but does not fully specify them. Presumably, the location of individual folds in different individuals and species will depend on their respective evolutionary histories, so there is no reason to expect a match in fold location between the ‘melted’ cortices of more gyrified species, on one hand, and the cortex of a less-gyrified one, on the other, even if their global morphological parameters and global mechanism of folding coincide.

      (4) There are several who advocate for analyzing cortical mid-thickness surfaces, as the pial surface over-represents gyral tips compared to the bottoms of sulci in the surface area. The authors indicate that analyses of mid-thickness representations will be taken on in future work, but this seems to be a relevant control for accepting the conclusions of this manuscript.

      In the context of some applications and methods, we agree that the mid-surface is a meaningful surface to analyse. However, in our work, it is not. The fractal estimation rests on the assumption that the exposed area hugs the object of interest (hence the convex hull of the pial surface), as the relationship between the extrinsic and intrinsic areas across scales determines the fractal relationship (Eq. 2). If we used the mid-surface instead of the pial surface for all estimation, it would not represent the actual object of interest, and it is separated from the convex hull. Estimating a new convex hull based on the mid-surface would be the equivalent of asking for the fractal dimension of the mid-surface, not of the cortical ribbon. In other words, it would be a different question, bound to yield a different answer.

      Hence, we indicated in our original response that we only have a provisional answer; more work beyond the scope of this paper is required, as this is a separate question. The mid-surface, as a morphological structure in its own right, will have its own scaling properties, and our provisional understanding is that these also yield a scaling law parallel to that of the cortical ribbon, with the same or a similar fractal dimension. But more systematic work is required to investigate this question at native scale and across scales.

      Reviewer #3 (Public Review):

      Summary: Through a rigorous methodology, the authors demonstrated that within 11 different primates, the shape of the brain followed a universal scaling law with fractal properties. They enhanced the universality of this result by showing the concordance of their results with a previous study investigating 70 mammalian brains, and the discordance of their results with other folded objects that are not brains. They incidentally illustrated potential applications of this fractal property of the brain by observing a scale-dependent effect of aging on the human brain. 

      Strengths: 

      - New hierarchical way of expressing cortical shapes at different scales derived from previous report through implementation of a coarse-graining procedure 

      - Investigation of 11 primate brains and contextualisation with other mammals based on prior literature 

      - Proposition of tool to analyse cortical morphology requiring no fine tuning and computationally achievable 

      - Positioning of results in comparison to previous works reinforcing the validity of the observation. 

      - Illustration of scale-dependence of effects of brain aging in the human. 

      Weaknesses: 

      - The notion of cortical shape, while being central to the article, is not really defined, leaving some interpretation to the reader 

      - The organization of the manuscript is unconventional, leading to mixed contents in different sections (sections mixing introduction and method, methods and results, results and discussion...). As a result, the reader discovers the content of the article along the way; it is not obvious at what stages the methods are introduced, and the results are sometimes presented and argued in the same section, hindering objectivity. 

      To improve the document, I would suggest a modification and restructuring of the article such that: 1) by the end of the introduction the reader understands clearly what question is addressed and the value it holds for the community, 2) by the end of the methods the reader understands clearly all the tools that will be used to answer that question (not just the new method), 3) by the end of the results the reader holds the objective results obtained by applying these tools on the available data (without subjective interpretations and justifications), and 4) by the end of the discussion the reader understands the interpretation and contextualisation of the study, and clearly grasps the potential of the method depicted for the better understanding of brain folding mechanisms and properties. 

      We thank this reviewer again for their attention to detail and constructive comments. We have followed the detailed suggestions provided in the Recommendations For The Authors, and summarise the main changes here:

      We have restructured all sections to follow the conventional Introduction, Methods, Results, and Discussion order more clearly; by using subsections, we believe the structure is now more accessible to readers.

      - We have now clarified the concept of “cortical shape”, as we use it in our paper in several places, by distinguishing clearly the object of study, and the morphological properties measured from it.

      Recommendations for the authors: 

      Reviewer #2 (Recommendations For The Authors): None 

      Reviewer #3 (Recommendations For The Authors): 

      I once again compliment the authors for their elegant work. I am happy with the way they covered my first feedback. My second review takes into account some comments made by other reviewers with which I agree. 

      We thank this reviewer again for their attention to detail and constructive comments.

      Recommendations for clarifications: 

      General comments: The purpose of the article could be made clearer in the introduction. When I differentiate results from discussion, I think of results as objective measures or observations, while discussion will relate to the interpretation of these results (including comparison with previous literature, in most cases). 

      We have restructured all sections to follow the conventional Introduction, Methods, Results, and Discussion order more clearly; by using subsections, we believe the structure is now more accessible to readers.

      - l.39: define or discuss "cortical shape" 

      We have gone through the entire paper and corrected any ambiguities. We specifically distinguish between the cortex as a structure overall, shape measures derived from this structure, and coarse-grained versions of the structure.

      - l.48-74: this would match either an introduction or a discussion rather than a methods section. 

      Done

      - l.98-106: this would match a discussion rather than a methods section. 

      Done

      - l.111: here could be a good spot to discuss the 4 vs 8 corners for inclusion of pial vs white matter voxelization 

      We have discussed this in the more detailed Supplementary section now, as after restructuring, this appears to be the more suitable place.

      - l.140-180: it feels that this section mixes methods, results and discussion of the results 

      We agree and we have resolved this by removing sentences and re-arranging sections.

      - l.183-217: mix of results and discussion 

      We agree and we have resolved this by removing sentences and re-arranging sections.

      Small cosmetic suggestions: 

      - l.44: conservation of 'some' quantities: vague 

      Changed to “conservation of morphological relationships across evolution”

      - l.66: order of citations ([24, 22,23]) 

      Will be fixed at proof stage depending on format of references.

      - l.77: delete space between citation and period 

      Done

      - l.77: I would delete 'say' 

      Done

      - l.86: 'but to also analyse' -> 'to analyse' 

      Done

      - l.105: remove 'we are encouraged that' 

      Done

      - l.111: 'also see' -> 'see also' 

      Done

      - l.164: 'remarkable': subjective 

      Done

      - l.189: define approx. abbreviation 

      Done

      - l.190: 'approx' -> 'approx.' 

      Revised

      - l.195: 'dramatic': subjective 

      Removed

      -l. 246: 'much' -> vague 

      Explained

    1. eLife assessment

      This study presents a valuable finding on predator threat detection in C. elegans and the role of neuropeptide systems in defensive behavioral strategies. The evidence supporting the conclusions is solid, although additional analyses and control experiments would strengthen the claims of the study. Overall, the work is of interest to the C. elegans community as well as neuroethologists and ecologists studying predator-prey interactions.

    2. Reviewer #1 (Public Review):

      Summary:

      In this manuscript, Quach et al. report a detailed investigation into the defense mechanisms of Caenorhabditis elegans in response to predatory threats from Pristionchus pacificus. Based on principles from predatory imminence and prey refuge theories, the authors delineate three defense modes (pre-encounter, post-encounter, and circa-strike) corresponding to increasing levels of threat proximity. These modes are observed in a controlled but naturalistic setup and are quantified by multiple behavioral outputs defined in time and/or space domains, allowing nuanced phenotypic assays. The authors demonstrate that C. elegans displays graded defense behavioral responses toward varied lethality of threats and that only life-threatening predators trigger all three defense modes. The study also offers a narrative on the behavioral strategies and underlying molecular regulation, focusing on the roles of SEB-3 receptors and NLP-49 peptides in mediating responses in these defense modes. They found that the interplay between SEB-3 and NLP-49 peptides appears complex, as evidenced by the diverse outcomes when either or both genes are manipulated in various behavioral modes.

      Strengths:

      The paper presents an interesting story, with carefully designed experiments and necessary controls, and novel findings and implications about predator-induced defensive behaviors and underlying molecular regulation in this important model organism. The design of experiments and description of findings are easy to follow and well-motivated. The findings contribute to our understanding of stress response systems and offer broader implications for neuroethological studies across species.

      Weaknesses:

      Although overall the study is well designed and motivated, the paper could benefit from further improvements to some of the method descriptions and experiment interpretations.

    3. Reviewer #2 (Public Review):

      In this study, the authors characterize the defensive responses of C. elegans to the predatory Pristionchus species. Drawing parallels to ecological models of predatory imminence and prey refuge theory, they outline various behaviors exhibited by C. elegans when faced with predator threats. They also find that these behaviors can be modulated to varying degrees by the peptide NLP-49 and its receptor SEB-3.

      The conclusions of this paper are mostly well-supported, and the writing and figures are clear and easy to interpret. However, some of the claims need to be better supported, and the unique findings of this work should be clarified better in the text.

      (1) Previous work by the group (Quach, 2022) showed that Pristionchus adopt a "patrolling strategy" on a lawn with adult C. elegans and this depends on bacterial lawn thickness. Consequently, it may be hypothesized that C. elegans themselves will adopt different predator avoidance strategies depending on predator tactics differing due to lawn variations. The authors have not shown why they selected a particular size and density of bacterial lawn for the experiments in this paper, and should run control experiments with thinner and denser lawns with differing edge densities to make broad arguments about predator avoidance strategies for C. elegans. In addition, C. elegans leaving behavior from bacterial lawns (without predators) is also heavily dependent on the density of bacteria, especially at the edges, where it affects oxygen gradients (Bendesky, 2011), and this might alter the baseline leaving rates irrespective of predation threats. The authors also do not mention if all strains or conditions in each figure panel were run as day-matched controls. Given that bacterial densities and ambient conditions can affect C. elegans behavior, especially that of lawn-leaving, it is important to run day-matched controls.

      (2) Both the patch-leaving and feeding in outstretched posture behaviors described here in this study were reported in an earlier paper by the same group (Quach, 2022) as mentioned by the authors in the first section of the results. While they do characterize these further in this study, these are not novel findings of this work.

      (3) For Figures 1F-H, given that animals can reside on the lawn edges as well as the center, bins explored are not a definitive metric of exploration since the animals can decide to patrol the lawn boundary (especially since the lawns have thick edges). The authors should also quantify tracks along the edge from videographic evidence as they have done previously in Figure 5 of Quach, 2022 to get a total measure of distance explored.

      (4) Where were the animals placed in the wide-arena predator-free patch post encounter? It is mentioned that the animal was placed at the center of the arena in lines 220-221. While this makes sense for the narrow-arena, it is unclear how far from the patch animals were positioned for the wide exit arena. Is it the same distance away as the distance of the patch from the center of the narrow exit arena? Please make this clear in the text or in the methods.

      (5) Do exit decisions from the bacterial patch scale with number of bites or is one bite sufficient? Do all bites lead to bite-induced aversive response? This would be important to quantify especially if contextualizing to predatory imminence.

      (6) Why are the threats posed by aversive but non-lethal JU1051 and lethal PS312 evaluated similarly? Did the authors characterize if the number of bites are different for these strains? Can the authors speculate on why this would happen in the discussion?

      (7) The authors indicate that bites from the non-aversive TU445 led to a low number of exits and thus it was consequently excluded from further analysis. If anything, this strain would have provided a good negative control and baseline metrics for other circa-strike and post-encounter behaviors.

      (8) For Figures 3G and H, the reduction in bins explored (bins_none - bins_RS5194) due to the presence of predators should be compared between wildtype and mutants, instead of the difference between none and RS5194 for each strain.

      (9) While the authors argue that baseline speeds of seb-3 are similar to wild type (Figure S3), previous work (Jee, 2012) has shown that seb-3 affects not only speed but also roaming/dwelling states, which will significantly affect the exploration metric (bins explored) which the authors use in Figs 3G-H and 4E-F. Control experiments are necessary to avoid this conundrum. The authors should either visualize and quantify tracks (as suggested in 3) or quantify roaming/dwelling in the seb-3 animals in the absence of predator threat.

      (10) While it might be beyond the scope of the study, it would be nice if the authors could speculate on potential sites of action of NLP-49 in the discussion, especially since it is expressed in a distinct group of neurons.

    1. eLife assessment

      A combination of molecular dynamics simulation and state-of-the-art statistical post-processing techniques provided valuable insight into GPCR-ligand dynamics. This manuscript provides solid evidence for differences between classical cannabinoid drugs and new psychoactive substances in their binding/unbinding behavior. The results could aid in mitigating the public health threat these drugs pose.

    1. Welcome back.

      I spent the last few lessons going through DNS, helping you, I hope, understand how the system works at an architectural level. In this lesson, I want to finish off and talk about the types of records which can be stored in DNS, and I'll try to keep it quick, so let's get started.

      The first record type that I want to touch on are nameserver records, or NS records. I've mentioned these in the previous lessons in this section on DNS. These are the record types which allow delegation to occur in DNS. So we've got the dot com zone, and that's managed by Verisign. This zone will have multiple nameserver records inside it for amazon.com. These nameserver records are how the dot com delegation happens for amazon.com, and they point at servers managed by the amazon.com team. These servers host the amazon.com zone. Inside this zone are DNS records such as www, which is how you can access those records as part of DNS.

      Now, of course, the same is true on the other side. The root zone has delegated management of dot com by having nameservers in the root zone point at the servers that host the dot com zone. So nameserver records are how delegation works end-to-end in DNS. Nameservers are hugely important.

      Next up, we have a pair of record types that you will use a lot more often in DNS, and they're A records and AAAA records, which actually do the same thing. Given a DNS zone, in this example, google.com, these types of records map host names to IP addresses. The difference is the type of IP address. For a given host, let's say www, an A record maps this onto an IP version four address. An AAAA record type is the same, but this maps the host onto an IP version six address. Generally, as an admin or a solutions architect, you will create two records with the same name. One will be an A record, and one will be an AAAA record. The client operating system and DNS software on that client can then pick the correct type of address that it wants: the AAAA record if it's capable of IP version six, or just the normal A record if it's not.
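
      To see both record types side by side, a small dnspython sketch like this queries A and AAAA for the same host (the host name is just an example):

      ```python
      import dns.resolver

      # Query both address record types for the same host name:
      # A returns an IPv4 address, AAAA returns an IPv6 address.
      for rtype in ("A", "AAAA"):
          try:
              answer = dns.resolver.resolve("www.google.com", rtype)
              for record in answer:
                  print(rtype, record.address)
          except dns.resolver.NoAnswer:
              print(rtype, "no record of this type")
      ```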

      Now next up is the CNAME record type, which stands for canonical name. For a given zone, the CNAME record type lets you create the equivalent of DNS shortcuts, so host-to-host records. Let's say that we have an A record called server, which points at an IP version four address. It's fairly common that a given server performs multiple tasks. Maybe in this case, it provides ftp, mail, and web services. Creating three CNAMEs and pointing them all at the A record called server means that they will all resolve to the same IP version four address. CNAMEs are used to reduce admin overhead. In this case, if the IP version four address of the server changes, it's just the single record to update, the A record; because the three CNAMEs reference that A record, they'll automatically get updated. Now, CNAMEs cannot point directly at an IP address, only at other names, and you can expect to see that feature in the exam as a trick question.
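
      As a sketch, resolving one of those shortcuts with dnspython follows the chain back to the canonical name; the example.com names here are hypothetical stand-ins for the server example above:

      ```python
      import dns.resolver

      # ftp is assumed to be a CNAME shortcut pointing at the host
      # "server"; the query returns the canonical name it points to.
      answer = dns.resolver.resolve("ftp.example.com", "CNAME")
      for record in answer:
          print(record.target)  # e.g. server.example.com.
      ```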

      Next is the MX record type, and this is hugely important for how the internet works, specifically how email on the internet works. Imagine you're using your laptop to send an email via your email server to hi@google.com. MX records are used as part of this process. Your email server needs to know which server to pass the email onto. So we start with the google.com zone. Inside this zone, we have an A record with the name mail, and this is pointing at an IP address. Now it's important to know from the outset that this could be called rabbits or apple or fluffy; the name isn't important to how email works using MX records. In this case, the A record is just called mail, but it doesn't matter.

      Now also inside the google.com zone is a collection of MX records, in this example, two records. MX records have two main parts, a priority and a value, and I'll revisit the priority soon. For now, let's focus on the values. The value can be just a host, as with the top example. So mail here is just mail; that's just a host. If it's just a host, and we can tell that by the fact that it's got no dot on the right, it's assumed to be part of the same zone that it's in. So mail here actually means mail.google.com. It's the mail host inside the google.com zone. If you include a dot on the right, this means it's a fully qualified domain name, and so it can either point to a host inside the same zone or something outside that zone, maybe Office 365 if Google decided Microsoft's mail product was better.

      The way that MX records are used is that our email server looks at the "to" address on the mail, so hi@google.com, and it focuses on the domain, so google.com. It then does an MX query using DNS on google.com. This is the same process as any other record type, so it talks to the root first, then dot com, then google.com, and then it retrieves any MX records, in this case, two different records. Now, this is where the priority value is used to choose which record to use. Lower values for the priority field are actually higher priority. So in this example, mail is used first, and mail.other.domain is only used if mail isn't functional. If the priority is the same, then any of them could be selected. Whichever is used, the server gets the result of the query back, uses it to connect to the mail server for google.com via SMTP, and uses that protocol to deliver the mail. So in summary, an MX record is how a server can find the mail server for a specific domain. MX records are used constantly. Whenever you send an email to a domain, the server that is sending the email on your behalf is using DNS to do an MX lookup and locate the mail server to use.
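
      The MX lookup that an email server performs can be sketched with dnspython like this; notice that each record comes back as a priority and value pair, and lower values win:

      ```python
      import dns.resolver

      # An MX query returns (priority, host) pairs; lower priority
      # values are preferred, so sort before picking a server.
      answer = dns.resolver.resolve("google.com", "MX")
      for record in sorted(answer, key=lambda r: r.preference):
          print(record.preference, record.exchange)
      ```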

      The last record type that I want to talk about is a TXT record, also known as a text record. Text records allow you to add arbitrary text to a domain. It's a way in which the DNS system can provide additional functionality. One common usage for a TXT record type is to prove domain ownership. Let's say for the Animals for Life domain, we want to add it to an email system, maybe Google Mail or Office 365 or Amazon WorkMail. Whatever system we use to host our email might ask us to add a text record to the domain, containing a certain piece of text data. So let's say that the random text that we need to add is "cats are the best." Then our administrator would add a record inside this domain with that text data. And once our admin has done that, the external party, so the Google email system, would query that text data and make sure that it matches the value that they're expecting. And if it does, that proves that we own the domain and can manage it. So text records are fairly important in proving domain ownership, and that's one of the most common use cases for the text record type. There are other uses for the text record type. It can be used to fight spam. You can add certain information to a domain indicating which entities are authorized to send email on its behalf. If an email server receives email from a server that isn't on that list, that's a good indication that the email is spam and not authorized.
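
      Here's a quick sketch of reading a domain's TXT records with dnspython; this is essentially the query a verifying party, or a receiving mail server checking a spam policy, would run:

      ```python
      import dns.resolver

      # Fetch all TXT records for a domain; ownership-verification
      # strings and anti-spam policies both use this record type.
      answer = dns.resolver.resolve("google.com", "TXT")
      for record in answer:
          # each TXT record holds a tuple of byte strings
          print(b"".join(record.strings).decode())
      ```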

      So those are the record types that I want to cover. But there's one more concept that I need to discuss before we finish up, and that is DNS TTL, or Time To Live. A TTL value is something that can be set on DNS records. It's a numeric value in seconds. Let's look at a visual example. We have a client looking to connect to amazon.com, and so it queries DNS using a resolver server that's hosted at its internet provider. That resolver server talks to the DNS root, which points at the dot com registry authoritative servers, and so the resolver queries those servers. Those authoritative servers for dot com provide the nameservers for the amazon.com zone, and so the resolver goes ahead and queries those. That server hosts and is authoritative for the amazon.com zone, which has a record for www, and so the resolver uses this record to get the IP address and connect to the server.

      This process takes time. This walking-the-tree process, talking to the root and then all of the levels to get the eventual result that you need, is a lengthy process. Getting a result from the authoritative source, so the source that is trusted by DNS, is known as an authoritative answer. You get an authoritative answer by talking to a nameserver which is authoritative for that particular domain. So if I query the nameserver for amazon.com, and I'm querying the www record in amazon.com, then I get back what's known as an authoritative answer. And that is always preferred because it's always going to be accurate. It's the single source of truth.

      But using TTL values, the administrator of amazon.com can indicate to others how long records can be cached for, what amount of time is appropriate. In this example, because the admin of amazon.com has set a 3,600 TTL value, which is in seconds, it means that the results of the query are stored at the resolver server for 3,600 seconds, which is one hour. If another client queries the same thing, which is pretty likely for amazon.com, then they will get back a non-authoritative answer. But that answer will be retrieved immediately because it's cached on the resolver server. The resolver server, remember, is hosted probably at our internet provider, and so it's much quicker to access that data.

      So non-authoritative answers are often the same as authoritative answers. Normally things in DNS don't change, and when they don't change, non-authoritative and authoritative answers are the same thing. But TTL is important for when things change. If you migrate your email service to a provider with a different IP address and you have a high TTL value on your MX records, then email delivery might be delayed, because the old IP addresses for those MX records will be cached and will be used. TTLs are a balance. Low values mean more queries against your nameservers. High values mean fewer queries, but also less control if you need to change records. You can lower TTL values before projects and upgrades, or you can leave them permanently low. Also, keep in mind that the resolver should obey TTL values, but that's not always the case; it could ignore them, since that configuration can be changed locally by the admin at the resolver server. DNS is often the cause of project failures because of TTL values. If you're doing any work that involves changing any DNS records, it's always recommended to lower the TTL value well in advance of the work, sometimes days or weeks in advance, and this will make sure that you have fewer caching issues when you finally do change those records.
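
      You can see the TTL for yourself, because it travels with every answer. A minimal dnspython sketch:

      ```python
      import dns.resolver

      # The TTL arrives with the answer; a caching resolver counts it
      # down and re-queries the authoritative servers once it expires.
      answer = dns.resolver.resolve("www.amazon.com", "A")
      print("TTL in seconds:", answer.rrset.ttl)
      ```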

      Okay, that's it. That's everything I wanted to cover in this lesson. I've covered the different DNS record types, as well as introduced you to the TTL concept, which is essential to understand if you want to avoid any DNS-related problems. Thanks for listening. Go ahead, complete this video and when you're ready, join me in the next.

  2. Local file

      Welcome back, and in this demo lesson I'm going to step through how you can register a domain using Route 53. Now this is an optional step within the course. At the very least you should know how to perform the domain registration process within AWS, and optionally you can use this domain within certain demos in the course to get a more real-world-like experience.

      To get started, as always, just make sure that you're logged in to the IAM admin user of the general AWS account which is the management account of the organization. Now make sure that you have the Northern Virginia region selected. While Route 53 is a global service, I want you to get into the habit of using the Northern Virginia region. Now we're going to be using the Route 53 product, so click in the search box at the top of the screen, type Route 53 and then click to move to the Route 53 console.

      Now Route 53, at least in the context of this demo lesson, has two major areas. First is hosted zones, and this is where you create or manage DNS zones within the product. Now DNS zones, as you'll learn elsewhere in the course, can be thought of as databases which store your DNS records. When you create a hosted zone within Route 53, Route 53 will allocate four name servers to host that hosted zone. And that's important: you need to understand that every time you create a new hosted zone, Route 53 will allocate four different name servers to host that zone. Now the second area of Route 53 is registered domains, and it's in the registered domains area of the console where you can register a domain or transfer a domain into Route 53.

      Now we're going to register a domain, but before we do that, if you do see any notifications about trying out new versions of the console, then go ahead and click to try out that new version. Where possible, I always like to teach using the latest version of the console UI because it's going to be what you'll be using long-term. So in my case, I'm going to go ahead and click on "try out the new console". Depending on when you're doing this demo, you may or may not see this; in either case, you want to be using this version of the console UI. So if you are going to register a domain for this course, you need to go ahead and click register domains.

      The first step is to type the domain that you want into this box. Now, a case study that I use throughout the course is Animals for Life. So I'm going to go ahead and register a domain related to this case study. So if I type animalsforlife.com and press enter, it will search for the domain and tell us whether it's available. In this case, animalsforlife.com is not available; it's already been registered. In my case, I'm going to use an alternative, so I'm going to try to register animalsforlife.io. Now, .io domains are among the most expensive, so if you are registering a domain yourself, I would tend to advise you to look for one of the cheaper ones. I'm going to register this one, and it is available.
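
      As an aside, the same availability check can be scripted. This is a minimal sketch using boto3, assuming AWS credentials are already configured; note that the Route 53 Domains API is only available in the us-east-1 region:

      ```python
      import boto3

      # The Route 53 Domains API lives in us-east-1 regardless of
      # where the rest of your infrastructure runs.
      client = boto3.client("route53domains", region_name="us-east-1")
      response = client.check_domain_availability(DomainName="animalsforlife.io")
      print(response["Availability"])  # e.g. AVAILABLE or UNAVAILABLE
      ```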

      Once I've verified that it is available and it's the one I want, we're going to go ahead and click on select. We can verify the price of this domain for one year, in this case 71 US dollars, and then go ahead and click on proceed to checkout. Now it's here where you can specify a duration for the domain registration. You can use the default of one year, or alternatively you can go ahead and pick a longer registration period. For this domain I'm going to choose one year, and then you can choose whether you want to auto-renew the domain after that initial period. In my case I'm going to leave this selected. You'll see a subtotal with the price, and then you can click next to move on to the next step.

      Now at this point you need to specify the contact type. In most cases you'll be putting a person or a company, but there's also association, public body, or reseller. You need to go ahead and fill in all of these details, and they do need to be valid details; that's really important. If you are worried about privacy, most domains will allow you to turn on privacy protection, so any details that you enter here cannot be seen externally. Now obviously, to keep my privacy intact, I'm going to go ahead and fill in all of these details and hide the specifics, and once I've entered them all, I'm going to go ahead and click on 'Next' and you should do the same. Again, I've hidden my details at the bottom of the screen.

      Route 53 does tell you that, in addition to the domain registration cost, there is a monthly cost for the hosted zone which will be created as part of this registration. So there is a small monthly cost for every hosted zone which you have hosted using Route 53, and every domain that you have will need one hosted zone. So I'm going to scroll down. Everything looks good; you'll need to agree to the terms and conditions and then click on submit. Now at this point the domain is registering, and it will take some time to complete. You may receive a registration email which may include something that you need to do, clicking on a link or some other form of identity verification. You might not get that, but if you do, it's important that you follow all of the steps contained within that email. And if you don't receive an email, you should check your spam folder, because if there are any actions to perform and you don't perform them, it could result in the domain being disabled.

      You can see the status of the domain registration by clicking on "requests" directly below "registered domains". The status will initially be listed as "in progress", and we need this to change to "successful". So pause the video, wait for this status to change, and then you're good to continue. Welcome back; in my case this took about 20 minutes to complete, but as you can see, my domain is now registered. So if we go to registered domains, you'll be able to see the domain name listed together with the expiration date, the auto-renew status, and the status of the transfer lock. Now transfer lock is a security feature; it means the domain cannot be transferred away from Route 53 without you disabling this lock.

      Now we're able to see additional details on the domain if we click on the domain name. Now obviously I've hidden my contact information. If you click on the DNSSEC keys tab, it's here where you can configure DNSSEC on the domain. We won't be doing anything with that at this stage. One of the important points I want to draw your attention to is the name servers. So I've registered animalsforlife.io, and it's these name servers that will be entered into the animalsforlife.io record within the .io top-level domain zone. So these servers are the ones that the DNS system will point at. These are currently set to four Route 53 name servers. And because we've registered the domain inside Route 53, this process is automatic: a hosted zone is created, four name servers are allocated to host this hosted zone, and then those four name servers are entered into our domain records in the .io top-level domain zone.

      This process is all automatic end-to-end. The four name servers for the animalsforlife.io hosted zone are entered into the animalsforlife.io record within the .io top-level domain zone. So if we move to the hosted zones area of the console, go inside animalsforlife.io, and then expand the hosted zone details at the top, these are the four name servers which are hosting this hosted zone. And if you're paying attention, you'll note these are the same four servers that are contained within the registered domains area of the console, and the same four servers which have been entered into the .io top-level domain zone. Now, if you ever delete and then recreate a hosted zone, it's going to be allocated four brand new name servers, different from the name servers for the zone which you deleted. In order to stop any DNS problems, you'll need to take these brand new name servers and update the entries within the registered domains area of the console. But again, because you've registered the domain within Route 53, this process has been handled for you end-to-end; you won't need to worry about any of this unless you delete and recreate the hosted zone.
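
      If you'd like to verify those four name servers programmatically, a boto3 sketch along these lines (again assuming configured credentials, and that the zone exists) pulls the delegation set for the hosted zone:

      ```python
      import boto3

      route53 = boto3.client("route53")

      # Find the hosted zone by name, then print the four name servers
      # Route 53 allocated to host it; these should match the ones shown
      # in the registered domains area of the console.
      zones = route53.list_hosted_zones_by_name(DNSName="animalsforlife.io")
      zone_id = zones["HostedZones"][0]["Id"]
      zone = route53.get_hosted_zone(Id=zone_id)
      for name_server in zone["DelegationSet"]["NameServers"]:
          print(name_server)
      ```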

      Now that's everything you need to do at this point. If you followed this process throughout this demo lesson, you now have an operational domain within the global DNS infrastructure that's manageable within Route 53. Now as I mentioned earlier, this is an optional step for the course. If you do have a domain registered, then you will have the opportunity to use it within various demo lessons within the course. If you don't, don't worry; none of this is mandatory, and you can do the rest of the course without having a domain. At this point though, that is everything I wanted you to do in this demo lesson. Go ahead and complete the video, and when you're ready, I'll look forward to you joining me in the next.

    1. eLife assessment

      This useful study reports on the discovery of an antimicrobial agent that kills Neisseria gonorrhoeae. Sensitivity is attributed to a combination of DedA-assisted uptake of oxydifficidin into the cytoplasm and the presence of an oxydifficidin-sensitive RplL ribosomal protein. Due to the narrow scope, the broader antibacterial spectrum remains unclear, and therefore the evidence supporting the conclusions is incomplete, with key methods and data lacking. This work will be of interest to microbiologists and synthetic biologists.

    2. Reviewer #1 (Public Review):

      Summary:

      Kan et al. report the serendipitous discovery of a Bacillus amyloliquefaciens strain that kills N. gonorrhoeae. They use TnSeq to identify that the anti-gonococcal agent is oxydifficidin and show that it acts at the ribosome and that one of the dedA gene products in N. gonorrhoeae MS11 is important for moving the oxydifficidin across the membrane.

      Strengths:

      This is an impressive amount of work, moving from a serendipitous observation through TnSeq to characterize the mechanism by which Oxydifficidin works.

      Weaknesses:

      (1) There are important gaps in the manuscript's methods.

      (2) The work should evaluate antibiotics relevant to N. gonorrhoeae.

      (3) The genetic diversity of dedA and rplL in N. gonorrhoeae is not clear, nor is it clear whether oxydifficidin is active against more relevant strains and species than tested so far.

    3. Reviewer #2 (Public Review):

      Summary:

      Kan et al. present the discovery of oxydifficidin as a potential antimicrobial against N. gonorrhoeae, including multi-drug resistant strains. The authors show the role of DedA flippase-assisted uptake and the specificity of RplL in the mechanism of action for oxydifficidin. This novel mode of action could potentially offer a new therapeutic avenue, providing a critical addition to the limited arsenal of antibiotics effective against gonorrhea.

      Strengths:

      This study underscores the potential of revisiting natural products for antibiotic discovery against pathogens of modern-day concern and highlights a new target mechanism that could inform future drug development. Indeed, there is a recent growing body of research utilizing AI and predictive computational informatics to revisit potential antimicrobial agents and metabolites from cultured bacterial species. The discovery of oxydifficidin's interaction with RplL and its DedA-assisted uptake mechanism opens new research directions in understanding and combating antibiotic-resistant N. gonorrhoeae. Methodologically, the study is rigorous, employing various experimental techniques such as genome sequencing, bioassay-guided fractionation, LCMS, NMR, and Tn-mutagenesis.

      Weaknesses:

      The scope is somewhat narrow, focusing primarily on N. gonorrhoeae. This limits the generalizability of the findings and leaves questions about its broader antibacterial spectrum. Moreover, while the study demonstrates the in vitro effectiveness of oxydifficidin, there is a lack of in vivo validation (i.e., animal models) for assessing pre-clinical potential of oxydifficidin. Potential SNPs within dedA or RplL raise concerns about how quickly resistance could emerge in clinical settings.

    4. Reviewer #3 (Public Review):

      Summary:

      The authors have shown that oxydifficidin is a potent inhibitor of Neisseria gonorrhoeae. They were able to identify the target of action as rplL and showed that resistance could occur via mutation in the DedA flippase and RplL.

      Strengths:

      This was a very thorough and clearly argued set of experiments that supported their conclusions.

      Weaknesses:

      There was no obvious weakness in the experimental design. Although it is promising that the DedA mutations resulted in attenuation of fitness, it remains an open question, untested in this study, whether secondary rounds of mutation could overcome this selective disadvantage.

    5. Author response:

      eLife assessment

      This useful study reports on the discovery of an antimicrobial agent that kills Neisseria gonorrhoeae. Sensitivity is attributed to a combination of DedA-assisted uptake of oxydifficidin into the cytoplasm and the presence of an oxydifficidin-sensitive RplL ribosomal protein. Due to the narrow scope, the broader antibacterial spectrum remains unclear, and therefore the evidence supporting the conclusions is incomplete, with key methods and data lacking. This work will be of interest to microbiologists and synthetic biologists.

      General comment about narrow scope: The broader antibacterial spectrum of oxydifficidin has been reported previously (S B Zimmerman et al., 1987). The main focus of this study is on its previously unreported potent anti-gonococcal activity and mode of action. While it is true that broad-spectrum antibiotics have historically played a role in effectively controlling a wide range of infections, we and others believe that narrow-spectrum antibiotics have an overlooked importance in addressing bacterial infections. Their advantage lies in their ability to target specific pathogens without markedly disrupting the human microbiota.

      We are troubled by the statement that our paper is narrow in scope and that evidence supporting our conclusions is incomplete. We do not feel the reviews as presented substantiate drawing this conclusion about our work.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Kan et al. report the serendipitous discovery of a Bacillus amyloliquefaciens strain that kills N. gonorrhoeae. They use TnSeq to identify that the anti-gonococcal agent is oxydifficidin and show that it acts at the ribosome and that one of the dedA gene products in N. gonorrhoeae MS11 is important for moving the oxydifficidin across the membrane.

      Strengths:

      This is an impressive amount of work, moving from a serendipitous observation through TnSeq to characterize the mechanism by which Oxydifficidin works.

      Weaknesses:

      (1) There are important gaps in the manuscript's methods.

      The requested additions to the method describing bacterial sequencing and anti-gonococcal activity screening will be made. However, we do not think the absence of these generic methods reduces the significance of our findings.

      (2) The work should evaluate antibiotics relevant to N. gonorrhoeae.

      (1) It is not clear to us why reevaluating the activity of well-characterized antibiotics against known N. gonorrhoeae clinical strains would add value to this manuscript. The activity of clinically relevant antibiotics against antibiotic-resistant N. gonorrhoeae clinical isolates is well described in the literature. Our use of antibiotics in this study was intended to aid in the identification of oxydifficidin’s mode of action. This is true for both Tables 1 and 2.

      (2) If the reviewer insists, we would be happy to include MIC data for the following clinically relevant antibiotics: ceftriaxone (cephalosporin/beta-lactam), gentamicin (aminoglycoside), azithromycin (macrolide), and ciprofloxacin (fluoroquinolone).

      (3) The genetic diversity of dedA and rplL in N. gonorrhoeae is not clear, nor is it clear whether oxydifficidin is active against more relevant strains and species than tested so far.

      (1) We thank the reviewer for this suggestion. We aligned the DedA sequence from strain MS11 with DedA proteins from 220 N. gonorrhoeae strains that have high-quality assemblies in NCBI. The result showed that there are no amino acid changes in this protein. Using the same method, we observed several single amino acid changes in RplL. This included changes at A64, G25 and S82 in 4 strains with one change per strain. These sites differ from R76 and K84, where we identified changes that provide resistance to oxydifficidin. Notably, in a similar search of representative Escherichia, Chlamydia, Vibrio, and Pseudomonas NCBI deposited genomes, we did not identify changes in RplL at position R76 or K84.

      (2) While the usefulness of screening more clinically relevant antibiotics against clinical isolates as suggested in comment 2 was not clear to us, we agree that screening these strains for oxydifficidin activity would be beneficial. We have ordered Neisseria gonorrhoeae strains AR1280 and AR1281 (CDC), and Neisseria meningitidis ATCC 13090. They will be tested when they arrive.

      Reviewer #2 (Public Review):

      Summary:

      Kan et al. present the discovery of oxydifficidin as a potential antimicrobial against N. gonorrhoeae, including multi-drug resistant strains. The authors show the role of DedA flippase-assisted uptake and the specificity of RplL in the mechanism of action for oxydifficidin. This novel mode of action could potentially offer a new therapeutic avenue, providing a critical addition to the limited arsenal of antibiotics effective against gonorrhea.

      Strengths:

      This study underscores the potential of revisiting natural products for antibiotic discovery against pathogens of modern-day concern and highlights a new target mechanism that could inform future drug development. Indeed, there is a recent growing body of research utilizing AI and predictive computational informatics to revisit potential antimicrobial agents and metabolites from cultured bacterial species. The discovery of oxydifficidin's interaction with RplL and its DedA-assisted uptake mechanism opens new research directions in understanding and combating antibiotic-resistant N. gonorrhoeae. Methodologically, the study is rigorous, employing various experimental techniques such as genome sequencing, bioassay-guided fractionation, LCMS, NMR, and Tn-mutagenesis.

      Weaknesses:

      The scope is somewhat narrow, focusing primarily on N. gonorrhoeae. This limits the generalizability of the findings and leaves questions about its broader antibacterial spectrum. Moreover, while the study demonstrates the in vitro effectiveness of oxydifficidin, there is a lack of in vivo validation (i.e., animal models) for assessing pre-clinical potential of oxydifficidin. Potential SNPs within dedA or RplL raise concerns about how quickly resistance could emerge in clinical settings.

      (1) Spectrum/narrow scope: The broader antibacterial spectrum of oxydifficidin has been reported previously (S B Zimmerman et al., 1987). The focus of this study is on its previously unreported potent anti-gonococcal activity and its mode of action. While it is true that broad-spectrum antibiotics have historically played a role in effectively controlling a wide range of infections, we and others believe that narrow-spectrum antibiotics have an overlooked importance in addressing bacterial infections. Their advantage lies in their ability to target specific pathogens without markedly disrupting the human microbiota.

      (2) Animal models: We acknowledge the reviewer’s insight regarding the importance of in vivo validation to enhance oxydifficidin’s pre-clinical potential. However, due to the labor-intensive process needed to isolate oxydifficidin, obtaining a sufficient quantity for animal studies is beyond the scope of this study. Our future work will focus on optimizing the yield of oxydifficidin and developing a topical mouse model for subsequent investigations.

      (3) Potential SNPs: Please see our response to Reviewer #1’s comment 3. We acknowledge that potential SNPs within dedA and rplL raise concerns regarding clinical resistance, which is a common issue for protein-targeting antibiotics. Yet, as pointed out in the manuscript, obtaining mutants in the lab was a very low yield endeavor.

      Reviewer #3 (Public Review):

      Summary:

      The authors have shown that oxydifficidin is a potent inhibitor of Neisseria gonorrhoeae. They were able to identify the target of action as rplL and showed that resistance could occur via mutation in the DedA flippase and RplL.

      Strengths:

      This was a very thorough and clearly argued set of experiments that supported their conclusions.

      Weaknesses:

      There was no obvious weakness in the experimental design. Although it is promising that the DedA mutations resulted in attenuation of fitness, it remains an open question, untested in this study, whether secondary rounds of mutation could overcome this selective disadvantage.

    1. eLife assessment

      This study convincingly shows that aquaporins play a key role in blood vessel formation during zebrafish development. In particular, the paper implicates hydrostatic pressure and water flow as mechanisms controlling endothelial cell migration during angiogenic sprouting. This important study significantly advances our understanding of cell migration during morphogenesis. As such, this work will be of great interest to developmental and cell biologists working on organogenesis, angiogenesis, and cell migration.

    2. Reviewer #1 (Public Review):

      Summary:

      This paper details a study of endothelial cell vessel formation during zebrafish development. The results focus on the role of aquaporins, which mediate the flow of water across the cell membrane, leading to cell movement. The authors show that actin and water flow together drive endothelial cell migration and vessel formation. If either of these two elements is perturbed, vessel defects are observed. Overall, the paper significantly improves our understanding of cell migration during morphogenesis in organisms.

      Strengths:

      The data are extensive and are of high quality. There is a good amount of quantification with convincing statistical significance. The overall conclusion is justified given the evidence.

      Weaknesses:

      There are two weaknesses, which if addressed, would improve the paper.

      (1) The paper focuses on aquaporins, which, while mediating water flow, cannot drive directional water flow. If the osmotic engine model is correct, then ion channels such as NHE1 are the driving force for water flow. Indeed, this has been shown in previous studies. Moreover, NHE1 can drive water intake because the export of H+ leads to increased HCO3- due to the reaction between CO2 and H2O, which increases the cytoplasmic osmolarity (see Li, Zhou and Sun, Frontiers in Cell Dev. Bio. 2021). If NHE cannot be easily perturbed in zebrafish, it might be of interest to perturb Cl- channels such as SWELL1, which was recently shown to work together with NHE (see Zhang et al., Nat. Comm. 2022).

      (2) In some places the discussion seems a little confusing, where the text jumps from hydrostatic pressure to osmotic gradient. It might improve the paper if some background were given: for example, mention that water flow follows osmotic gradients, which in turn build up hydrostatic pressure, and that the osmotic gradients across the membrane are generated by active ion exchangers. This point is often confused in the literature, and somewhere in the intro this could be made clearer.

    3. Reviewer #2 (Public Review):

      Summary:

      Directional migration is an integral aspect of sprouting angiogenesis and requires a cell to change its shape and sense a chemotactic or growth factor stimulus. Kondrychyn I. et al. provide data that indicate a requirement for zebrafish aquaporins 1 and 8, in cellular water inflow and sprouting angiogenesis. Zebrafish mutants lacking aqp1a.1 and aqp8a.1 have significantly lower tip cell volume and migration velocity, which delays vascular development. Inhibition of actin formation and filopodia dynamics further aggravates this phenotype. The link between water inflow, hydrostatic pressure, and actin dynamics driving endothelial cell sprouting and migration during angiogenesis is highly novel.

      Strengths:

      The zebrafish genetics, microscopy imaging, and measurements performed are of very high quality. The study data and interpretations are very well-presented in this manuscript.

      Weaknesses:

      Some of the findings and interpretations could be strengthened by additional measurements and further discussion. Also, a better comparison and integration of the authors' findings, with other previously published findings in mice and zebrafish would strengthen the paper.

    4. Reviewer #3 (Public Review):

      Summary:

      Kondrychyn and colleagues describe the contribution of two aquaporins, Aqp1a.1 and Aqp8a.1, towards angiogenic sprouting in the zebrafish embryo. By whole-mount in situ hybridization, RNAscope, and scRNA-seq, they show that both genes are expressed in endothelial cells in partly overlapping spatiotemporal patterns. Pharmacological inhibition experiments indicate a requirement for VEGFR2 signaling (but not Notch) in transcriptional activation.

      To assess the role of both genes during vascular development the authors generate genetic mutations. While homozygous single mutants appear less affected, aqp1a.1;aqp8a.1 double mutants exhibit severe defects in EC sprouting and ISV formation.

      At the cellular level, the aquaporin mutants display a reduction in filopodia number and length. Furthermore, a reduction in cell volume is observed, indicating a defect in water uptake.

      The authors conclude that polarized water uptake mediated by aquaporins is required for the initiation of endothelial sprouting and (tip) cell migration during ISV formation. They further propose that water influx increases hydrostatic pressure within the cells, which may facilitate actin polymerization and the formation of membrane protrusions.

      Strengths:

      The authors provide a detailed analysis of Aqp1a.1 and Aqp8a.1 during blood vessel formation in vivo, using zebrafish intersomitic vessels as a model. State-of-the-art imaging demonstrates an essential role for aquaporins in different aspects of endothelial cell activation and migration during angiogenesis.

      Weaknesses:

      With respect to the connection between Aqp1/8 and actin polymerization/filopodia formation, the evidence appears preliminary and the authors' interpretation is guided by evidence from other experimental systems.

    1. Welcome back. And now that I've talked about the fundamentals of DNS from an abstract perspective, I want to bring this back to an AWS focus and talk about Route 53, which is AWS's managed DNS product.

      Okay, let's jump in and get started with the high-level product basics, and then I'll talk about the architecture. Route 53 provides two main services. First, it's a service in AWS which allows you to register domains. And second, it can host zone files for you on managed name servers which it provides. Now Route 53 is a global service with a single database. It's one of very few AWS services which operates as a single global service, and as such, you don't need to pick a region when using it from the console UI. The data that Route 53 stores or manages is distributed globally as a single set and replicated between regions, and so it's a globally resilient service. Route 53 can tolerate the failure of one or more regions and continue to operate without any problems. Now it's one of the most important AWS products. It needs to be able to scale and stay highly performant, whilst remaining reliable and continuing to work through failure.

      So let's look at exactly how Route 53 is architected and exactly what it does to provide these two main services. So the first service that I mentioned at the start of this lesson is that Route 53 allows you to register domains. And to do that, it has relationships with all of the major domain registries. Remember from the last lesson that these are the companies which manage the top-level domains. They've been delegated this ability by IANA, who manage the root zone for DNS. Now each of these registries manages one or more specific zones. One of them manages the .com and .net zones, another the .io zone, and so on.

      In the next lesson, I'll be demoing how to register a domain that I'll be using for the course scenario. And that domain will be a .org domain. And so one of these relationships is with the .org registry, an organization called PIR. Now, when a domain is registered, a few things happen. First, Route 53 checks with the registry for that top-level domain whether the domain is available. For this example, to keep it simple, let's just assume it is. Then Route 53 creates a zone file for the domain being registered. And remember, a zone file is just a database which contains all of the DNS information for a particular domain, in this case, animals4life.org. As well as creating the zone file, Route 53 also allocates name servers for this zone. So these are servers which Route 53 creates and manages, which are distributed globally, and there are generally four of these for one individual zone.

      So it takes this zone file that it's created, known as a hosted zone in Route 53 terminology, and it puts that zone file onto these four managed name servers. And then, as part of registering the domain, it communicates with the .org registry, which is PIR in this case, and liaising with that registry, it adds name server records into the zone file for the .org top-level domain. These name server records are how PIR delegates the administration of the domain to us. By adding the name server records to the .org zone, they indicate that our four name servers are all authoritative for the domain. And that's how a domain is registered using Route 53.

      It's not a complicated process when you simplify it right down. It's simply the process of creating a zone file, creating a number of managed name servers, putting that zone file on those servers, and then liaising with the registry for the top-level domain and getting name server records added to the top-level domain zone which point back at those servers. Remember, DNS is just a system of delegation.

      So next, let's quickly take a look at zones inside Route 53. So Route 53 provides DNS zones as well as hosting for those zones. It's basically DNS as a service. So it lets you create and manage zone files. And these zone files are called hosted zones in Route 53 terminology, because they're hosted on AWS managed name servers. So when a hosted zone is created, a number of servers are allocated and linked to that hosted zone, so they're essentially one and the same. From Route 53's perspective, every hosted zone also has a number of allocated managed name servers. Now a hosted zone can be public, which means that the data is accessible on the public internet. The name servers for a public hosted zone live logically in the AWS public zone, and this is accessible anywhere with a public internet connection. So they're part of the public DNS system.

      A hosted zone could also be private, which means that it's linked to one or more VPCs and only accessible from within those VPCs. And you might use this type of zone if you want to host sensitive DNS records that you don't want to be publicly accessible. A hosted zone hosts DNS records, which I'll be talking about in an upcoming lesson in much more detail, because there are many different types of records. Inside Route 53, you'll see records referred to as record sets. Now there is a tiny difference, but for now you can think of them as the same thing.

      Okay, so now it's time for a demo. I know that DNS has been a lot of theory. And so I wanted to show you a domain being registered and the domain that will be registered is the domain that I'll be using for the course scenario which is animals4life.org. So when you're ready to see that, go ahead, complete this video, and join me in the next.

    1. Welcome to this lesson where I'm going to be talking about high availability (HA), fault tolerance (FT), and disaster recovery (DR). It's essential that you understand all three of these to be an effective solutions architect and I want to make sure that you understand all of them correctly. Many of the best architects and consultants that I've worked with have misunderstood exactly what HA and FT mean. The best outcome of this misunderstanding is that you waste business funds and put a project at risk. Worst case, you can literally put lives at risk. So, let's jump in and get started and I promise to keep it as brief as possible, but this really is something you need to fully understand.

      Let's start with high availability. This is a term that most people think that they understand. Formally, the definition is that high availability aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period and I've highlighted the key parts of that definition. Most students that I initially teach have an assumption that making a system highly available means ensuring that the system never fails or that the user of a system never experiences any outages and that is not true. HA isn't aiming to stop failure, and it definitely doesn't mean that customers won't experience outages. A highly available system is one designed to be online and providing services as often as possible. It's a system designed so that when it fails, its components can be replaced or fixed as quickly as possible, often using automation to bring systems back into service. High availability is not about the user experience. If a system fails and a component is replaced and that disrupts service for a few seconds, that's okay. It's still highly available. High availability is about maximizing a system's online time and that's it.

      Let me give you an example. Let's say we have a system which has a customer, Winnie. Winnie is a data scientist and uses a bespoke application to identify complex data trends. Now, this application runs on a single server, let's say inside AWS. The application probably has other users in addition to Winnie. It's an important application to the business. If it's down, the staff can't work. If they can't work, they don't generate value for the business, and of course, this costs the business money. If we have a failure, it means that the system is now suffering an outage; it's not available. System availability is generally expressed as a percentage of uptime. So we might have 99.9%, or three nines, and this means that we can only have 8.77 hours of downtime per year. Imagine only being able to take a system down for 8.77 hours a year; that's less than one hour per month. It gets worse though: some systems need even higher levels of availability. We've got 99.999% availability, or five nines, and this only allows for 5.26 minutes per year of downtime. That means for all outages during a year, you have 5.26 minutes. That includes identifying that there's an outage, identifying the cause, devising a solution, and implementing a fix. An outage in this context is defined as something which impacts that service, so impacts your users.
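
      To make those numbers concrete, here's the arithmetic as a short Python sketch; it just converts an availability percentage into the downtime it allows per year:

      ```python
      # Availability percentage -> allowed downtime per year,
      # using 365.25 days, i.e. roughly 8,766 hours in a year.
      HOURS_PER_YEAR = 365.25 * 24

      for availability in (99.9, 99.99, 99.999):
          downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
          print(f"{availability}% -> {downtime_hours:.2f} hours "
                f"({downtime_hours * 60:.2f} minutes) per year")

      # 99.9%   -> ~8.77 hours per year (three nines)
      # 99.999% -> ~5.26 minutes per year (five nines)
      ```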

      Now, fixing Winnie's application quickly can be done by swapping out the compute resource, probably a virtual server. Rather than using time to diagnose the issue, if you have a process ready to replace it, it can be fixed quickly and probably in an automated way, or you might improve this further by having two servers online constantly, one active and one standby. In the event of a failure, customers would move to the standby server with very close to zero downtime. But, and this is a key factor about high availability, when they migrate from the active server to the standby server, they might have to re-login or might have some small disruption. For high availability, user disruption, while not being ideal, is okay. It can happen because high availability is just about minimizing any outages.

      Now, this might explain it a little better. This is a real-world example of something which has high availability built in: a four-by-four. If you were driving in the desert with a normal urban-grade car and it got a flat tire, would you have a spare? Would you have the tools ready to repair it as quickly as possible? In a desert, an outage or delay could have major impacts. It's risky, and it could impact getting to your destination. So an example of high availability is to carry a spare wheel and the tools required to fit it. You would, of course, need to spend time changing the tire, which is a disruption, but it could be done, and it minimizes the time that you're out of action. If you don't have a spare tire, then you'd need to call for assistance, which would substantially increase the time you're out of action. So, high availability is about keeping a system operational. It's about fast or automatic recovery from issues. It's not about preventing user disruption. While that's a bonus, a highly available system can still have disruption to your user base when there is a failure.

      Now, high availability has costs required to implement it. It needs some design decisions to be made in advance and it requires a certain level of automation. Sometimes, high availability needs redundant servers or redundant infrastructure to be in place ready to switch customers over to in the event of a disaster to minimize downtime.

      Now, let's take this a step further and talk about fault tolerance and how it differs from high availability. When most people think of high availability, they're actually mixing it up with fault tolerance. Fault tolerance in some ways is very similar to high availability, but it is much more. Fault tolerance is defined as the property that enables a system to continue operating properly in the event of a failure of some of its components, so one or more faults within the system. Fault tolerance means that if a system has faults, and this could be one fault or multiple faults, then it should continue to operate properly, even while those faults are present and being fixed. It means it has to continue operating through a failure without impacting customers.

      Imagine a scenario where we have somebody injured. We've got Dr. Abbie, and she's been told that she has an urgent case, an injured patient, who we'll call Mike. Mike has been rushed to the hospital after injuring himself running. He's currently being prepped for a surgical procedure, is in the operating room, and is currently under general anesthetic. While he's unconscious, he's being monitored, and this monitoring system indicates when to reduce or increase the levels of anesthetic that Mike gets. It's critical that this system is never interrupted. The system uses underlying infrastructure on-premises at the hospital. Now, in the event of a system failure, if it was just a highly available system, the server could be replaced, or another server could be included in an active-standby architecture. In either case, the swap between the servers would cause a system error, a disruption. However quick the fix, however small that disruption, in certain situations like this, any disruption can be life-threatening. This is an example of a situation where high availability isn't enough. Fault tolerant systems are designed to work through failure with no disruption. In this example, we might have the system's monitor communicating with two servers at the same time in an active-active configuration. The monitor is connected to both servers all of the time, so this is not just a simple fail-over configuration. If a server failed, it would drop down to just communicating with the remaining server, and as long as one server remains active, the system is fully functional. Now, we could take this further, adding a second monitoring system, itself with connections to both servers. That way, one monitor can fail and one server can fail, and still the service would continue uninterrupted. We could even eliminate the human dependency in the system and add an extra surgeon, Dr. Abbie's twin.

      Most people think that HA means operating through failure; it doesn't. HA is just about maximizing uptime. Fault tolerance is what it means to operate through failure. Fault tolerance can be expensive because it's much more complex to implement versus high availability. High availability can be accomplished by having spare equipment, so standby physical or virtual components. As long as you automate things and have these spare components ready to go, you can minimize outages. With fault tolerance, it's about more than that. You first need to minimize outages, which is the same as HA, but then you also need to design the system to be able to tolerate the failure, which means levels of redundancy and system components which can route traffic and sessions around any failed components.

      Now remember the example I used for high availability, the four-by-four in the desert. There are situations where we can't pull over to the side of the road and change a component. An example of this is a plane which is in the air. A plane needs to operate through system failure, so through an engine failure, for example. If an engine fails, the plane can't simply stop and effect repairs. So, a plane comes with more engines than it needs. It comes with duplicate electronic systems and duplicate hydraulic systems, so that when it has a problem, it just carries on running until it can safely land and effect repairs. AWS is no exception to this. Systems can be designed to only maximize uptime, which is high availability, or they can be designed for mission or life critical situations and so designed to operate through failure, which is fault tolerance.

      As a solutions architect, you need to understand what your customer requires. A customer might say that they need HA or fault tolerance while not understanding the difference. Fault tolerance is harder to design, harder to implement and costs much more. Implementing fault tolerance when you really needed high availability simply means you're wasting money. It costs more, and it takes longer to implement. But the reverse, implementing high availability when you need fault tolerance, means that you're potentially putting life at risk. A highly available plane is less than ideal. Understand the difference, if you don't, it can be disastrous.

      So, let's move on to the final concept, which is disaster recovery. The definition of disaster recovery is a set of policies, tools, and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster. So, while high availability and fault tolerance are about designing systems to cope or operate through disaster, disaster recovery is about what to plan for and do when disaster occurs, which knocks out a system. So, if high availability and fault tolerance don't work, what then? What if your building catches fire, is flooded or explodes? Disaster recovery is a multiple-stage set of processes. So given a disaster, it's about what happens before, so the pre-planning and what happens afterwards, the DR process itself.

      The worst time for any business is recovering in the event of a major disaster. In that type of environment, bad decisions are made, decisions based on shock, lack of sleep, and fear of how to recover. So, a good set of DR processes needs to preplan for everything in advance: build a set of processes and documentation, and plan for staffing and physical issues when a disaster happens. If you have a business premises with some staff, then part of a good DR plan might be to have a standby premises ready, and this standby premises can be used in the event of a disaster. That way, because it's arranged in advance, your staff, unaffected by the disaster, know exactly where to go. You might need space for IT systems, or you might use a cloud platform such as AWS as a backup location, but in any case, you need the idea of a backup premises or a backup location that's ready to go in the event of a disaster.

      If you have local infrastructure, then make sure you have resilience. Make sure you have plans in place and ready during a disaster. This might be extra hardware sitting at the backup site ready to go, or it might be virtual machines or instances operating in a cloud environment ready when you need them. A good DR plan means taking regular backups, so this is essential. But the worst thing you can do is to store these backups at the same site as your systems, it's dangerous. If your main site is damaged, your primary data and your backups are damaged at the same time and that's a huge problem. You need to have plans in place for offsite backup storage. So, in the event of a disaster, the backups can be restored at the standby location. So, have the backups of your primary data offsite and ready to go and make sure that all of the staff know the location and the access requirements for these backups.

      Effective DR planning isn't just about the tech though, it's about knowledge. Make sure that you have copies of all your processes available. All your logins to key systems need to be available for the staff to use when they're at this standby site. Do this in advance, and it won't be a chaotic process when an issue inevitably occurs. Ideally, you want to run periodic DR testing to make sure that you have everything you need, and then if you identify anything missing, you can refine your processes and run the test again. If high availability is a four-by-four, and fault tolerance is the set of resilient systems on a large plane, then effective DR processes are pilot or passenger ejection systems. DR is designed to keep the crucial and non-replaceable parts of your system safe, so that when a disaster occurs, you don't lose anything irreplaceable and can rebuild after the disaster. Historically, disaster recovery was very manual. Because of cloud and automation, DR can now be largely automated, reducing the time for recovery and the potential for any errors.

      As you go through the course, I'm going to help you understand how to implement high availability and fault tolerance systems in AWS using AWS products and services, so you need to understand both of these terms really well, along with disaster recovery. In summary, high availability is about minimizing outages, so maximizing system availability. Fault tolerance extends this, building systems which operate through faults and failures. Don't confuse the two. Fault tolerance is much more complex and expensive; it takes a lot more time and effort to implement and manage. I'll help you as we go through the course by identifying how to implement systems which are highly available and how to implement systems which are fault tolerant. AWS provides products and services which help with both, or with just one or the other, and you need to know the difference. Disaster recovery is how we recover; it's what we do when high availability and fault tolerance don't work, and AWS also has many systems and features which help with disaster recovery. One of the things the exam tests is your knowledge of how quickly you can recover and how best to recover, given the various products and services, and I'll highlight all of this as we go through the course. At this point, that's everything I wanted to cover, so thanks for listening. Go ahead, complete this video and when you're ready, I'll see you in the next.

    1. Reviewer #3 (Public Review):

      Nitta et al. use a fly model of autosomal dominant optic atrophy to provide mechanistic insights into distinct disease-causing OPA1 variants. It has long been hypothesized that missense OPA1 mutations affecting the GTPase domain, which are associated with more severe optic atrophy and extra-ophthalmic neurologic conditions such as sensorineural hearing loss (DOA plus), impart their effects through a dominant negative mechanism, but no clear direct evidence for this exists, particularly in an animal model. The authors execute a well-designed study to establish their model, demonstrating a mitochondrial phenotype and optic atrophy measured as axonal degeneration. They leverage this model to provide the first direct evidence for a dominant negative mechanism for two mutations causing DOA plus by expressing these variants in the background of a full hOPA1 complement.

      Strengths of the paper include well-motivated objectives and hypotheses, and overall solid design and execution. There is a thorough discussion of the interpretation and context of the findings. The results technically support their primary conclusions with minor limitations. First, while only partial rescue of the most clinically relevant metric for optic atrophy in this model is now acknowledged, the result nevertheless hamstrings the mechanistic experiments that follow. Second, the results statistically support a dominant negative effect of DOA plus-associated variants, yet the data show a marginal impact on axonal degeneration for these variants. In added experiments, the ability of WT hOPA1 and I382M but not 2708del, D438V or R445H to rescue ROS levels or mitophagy in the context of dOPA1 knockdown serves to support axonal number as a valid measure of mitochondrial function in this context. However, the critical experiment demonstrating a dominant negative effect was performed in the context of expressing WT hOPA1 along with a pathogenic variant, in which no differences in ROS, COXII expression or mitophagy were seen. This makes it difficult to conclude that the dominant negative effect of D438V and R445H on axon number is related to mitochondrial function.

      As an animal model of DOA that may serve for rapid assessment of suspected OPA1 variants, the results overall support utility of this model in identifying pathogenic variants but not in distinguishing haploinsufficiency from dominant negative mechanisms among those variants. The impact of this work in providing the first direct evidence of a dominant negative mechanism is under-stated considering how important this question is in development of genetic treatments for dominant optic atrophy.

      Comments on revised version:

      The authors have addressed the comments in my initial review. Through these modifications and those related to the comments from the other reviewers, the manuscript is strengthened.

      Comments on author responses to each of the reviews:

      Reviewer 1:

      Interpretation of data has been appropriately reorganized in the discussion.

      Quantified mitochondria in the model show no difference in number. There is reduced size and structural abnormalities on electron microscopy.

      Application of mito-QC revealed increased mitophagy.

      Regarding partial rescue of axonal number in the mutant model, statistical significance between control and rescue is still not depicted in Figure 4D. Detailing possible explanations for this has been addressed in the discussion. However, only partial rescue of the most clinically relevant metric for optic atrophy in this model hamstrings subsequent mechanistic experiments that follow.

      Discussion regarding variant I382M has been improved.

      While reviewer 1's concerns about axonal number as a biomarker for OPA1 function are valid, it is worth noting that this is the most clinically relevant marker in the context of DOA. That said, I agree that the mechanistic DN/HI studies needed support using other measures of mitochondrial function, and the authors have done this. The ability of WT hOPA1 and I382M but not 2708del, D438V or R445H to rescue ROS levels or mitophagy in the context of dOPA1 knockdown serves to support axonal number as a valid measure of mitochondrial function in this context. However, the critical experiment demonstrating a dominant negative effect was performed in the context of expressing WT hOPA1 along with a pathogenic variant, in which no differences in ROS, COXII expression or mitophagy were seen. This makes it difficult to conclude that the (marginal) DN effect of D438V and R445H on axon number is related to mitochondrial function, and serves as a minor weakness of the paper.

      Which exons are included in the transcript, and therefore, which isoforms are expressed in the model, has been addressed.

      Reviewer 2:

      The authors have addressed the need to include greater methodological details.

      Language concerning the clinical utility of the model in informing treatment decisions has been appropriately modified. As pointed out by Reviewer 1, additional studies were needed to better establish the potential clinical utility of this model in screening DOA variants. The authors have completed those experiments, and the results overall support utility of this model in identifying pathogenic variants but not in distinguishing HI/DN mechanisms among those variants.

      Reviewer 3:

      The author has addressed the partial rescue effect as above.

      The authors have not modified the text to acknowledge the marginal effect sizes in the critical experiment of the study that demonstrates a DN effect. Statistically, the results indeed support a dominant negative effect of DOA plus-associated variants, yet the data show a marginal impact on axonal degeneration for these variants. This remains a weakness of the study.

    1. Welcome back. In this lesson, I'm going to be covering something that will make complete sense by the end of the course. I'm introducing it now because I want you to be thinking about it whenever we're talking about AWS products and services. The topic is the shared responsibility model. The easiest way to explain this is visually, so let's jump in.

      Remember earlier in the course when I talked about the various different cloud service models? In each of these models, there were parts of the infrastructure stack that you were responsible for as the customer, and parts that the vendor or provider was responsible for. With IaaS, for example, the company providing the IaaS product, so AWS in the case of EC2, is responsible for the facilities, the AWS data centers, the infrastructure, so storage and networking, the servers, so EC2 hosts, and the hypervisor that allows physical hardware to be carved up into independent virtual machines. You as the customer manage the operating system, any containers, any runtimes, the data on the instance, the application, and any ways in which it interfaces with its customers. This is an example of a set of shared responsibilities. Part of the responsibilities lie with the vendor, and part lie with you as the customer.

      The AWS shared responsibility model is like that, only applying to the wider cloud platform from a security perspective. It's AWS' way of making sure that it's clear and that you understand fully which elements you manage and which elements it manages. At a high level, AWS are responsible for the security of the cloud. You as a customer are responsible for the security in the cloud. Now let's explore this in a little bit more detail because it will help you throughout the course and definitely for the exam.

      Now I've covered the AWS infrastructure at a high level in a previous lesson. AWS provides it to you as a service that you consume. So AWS are responsible for managing the security of the AWS regions, the Availability Zones, and the edge locations, so the hardware and security of the global infrastructure. You have no control over any of that and you don't need to worry about it. It's the "of the cloud" part, and so it's AWS' responsibility. The same holds true for the compute, storage, databases, and networking which AWS also provide to you. AWS manage the security of those components, in addition to any software which supports those services. So the hardware, the regions, the global network, the compute, storage, database, and networking services, and any software used to provide those services: AWS manage all of that end-to-end.

      If you consume a service from AWS, they handle the provisioning and the security of that thing. So take EC2 as an example. The region and the Availability Zone that the instance runs in, that's AWS' responsibility. The compute, the storage, the underlying databases and networking for that service, from a security perspective, that's AWS' responsibility. The software, so the user interface and the hypervisor, that's handled by AWS. Now you accept responsibility for the operating system upwards. What does that include? It means things like client-side data encryption, integrity, and authentication; server-side encryption; and network traffic protection. If your application encrypts its data, you manage that. If your server uses SSL certificates, you manage those. If you encrypt server-to-server communications, then you also handle that. You're also responsible for the operating system, networking, and any local firewall configuration. Applications and identity and access management are things that you will need to implement, manage, and control. And then there's any customer data: any data that lives in this stack, you need to manage it, secure it, and ensure that it's backed up.
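
      To make the "in the cloud" side feel less abstract, here's a minimal sketch of one customer-side task, enabling default server-side encryption on an S3 bucket with the AWS CLI. The bucket name is a hypothetical placeholder, and this is just one illustration of a responsibility that sits on your side of the model.

        # Server-side encryption configuration is a customer responsibility.
        # "my-example-bucket" is a placeholder bucket name.
        aws s3api put-bucket-encryption \
            --bucket my-example-bucket \
            --server-side-encryption-configuration \
            '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'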

      This might seem like a pretty abstract concept. You might be wondering, does it actually benefit you in the exam? I'd agree with you to a point. When I was doing my AWS studies, I didn't spend much time on the shared responsibility model. But when I sat the exam, I felt it would have benefited me to have started learning about it early on in my studies. If you keep the shared responsibility model in mind as we're going through the various different AWS products, you'll start building up an idea of which elements of each product AWS manage, and which elements you're responsible for. When it comes to deploying an EC2 instance into a VPC, or using the Relational Database Service to deploy and manage a database inside a VPC, you need to know which elements of that you manage and which elements AWS manage.

      I'll be referring back to this shared responsibility model fairly often as we go through the course, so you build up this overview of which elements you need to worry about and which are managed by AWS. If possible, I would suggest that you either print out the shared responsibility model and put it on your desk as you're studying, or just make sure you've got a copy that you can refer back to. It becomes important to understand it at this high level. I'm not going to use any more of your time on this topic. I just wanted to introduce it. I promise you that I'm not going to be wasting your time by talking about things which don't matter. This will come in handy. This is definitely something that will help you answer some questions.

      That's all I wanted to cover for now. It's just a foundation, and I don't want to bore you with too much isolated theory. Try to keep this in mind as you go through the rest of the course. For now, this is the level of detail that you need. That's everything I wanted to cover. Go ahead, complete this lesson. When you're ready, move on to the next.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      [...] Strengths:

      The authors have generated a novel transgenic mouse line to specifically label mature differentiated oligodendrocytes, which is very useful for tracing the final destiny of mature myelinating oligodendrocytes. Also, the authors carefully compared the distribution of three progenitor Cre mouse lines and suggested that Gsh-cre also labels dorsal OLs, contrary to the previous suggestion that it only marks LGE-derived OPCs. In addition, the authors also analyzed the relative contributions of OLs derived from three distinct progenitor domains in other forebrain regions (e.g. Pir, ac). Finally, the new transgenic mouse lines and the multiple combinatorial genetic models established here will facilitate future investigations of the developmental origins of distinct OL populations and their functional and molecular heterogeneity.

      Weaknesses:

      Since OpalinP2A-Flpo-T2A-tTA2 only labels mature oligodendrocytes but not OPCs, the authors cannot conclude that the lack of LGE/CGE-derived OLs in the neocortex is less likely caused by competitive postnatal elimination, but more likely due to limited production and/or allocation (line 118-9). It remains possible that LGE/CGE-derived OPCs migrate into the cortex but are later eliminated.

      We are glad that the reviewer appreciates our work and are grateful for the positive comments and the constructive suggestion. We agree with the reviewer that our methodology by itself cannot indicate whether the lack of LGE/CGE-derived OLs in the neocortex is caused by competitive postnatal elimination or not. That is why we cited a parallel work by Li et al. (ref [17] in the original manuscript; ref [19] in the revised manuscript), in which in utero electroporation (IUE) failed to label LGE-derived OL lineage cells in both embryonic and early postnatal brains. Although they did not directly explore the CGE using IUE, their fate mapping results using Emx1-Cre; Nkx2.1-Cre; H2B-GFP at P0 and P10 revealed a very low percentage of LGE/CGE-derived OL lineage cells. The lack of adult labeling in our study, together with the lack of developmental labeling in the other study, prompted us to hypothesize that the lack of LGE/CGE-derived OLs in the neocortex is less likely caused by competitive postnatal elimination and more likely due to limited production and/or allocation. In the revised manuscript, we have expanded the discussion to explain this point more clearly.

      Reviewer #2 (Public Review):

      [...] Strengths:

      The strength and novelty of the manuscript lies in the elegant tools generated and used and which have the potential to elegantly and accurately resolve the issue of the contribution of different progenitor zones to telencephalic regions.

      We are glad that the reviewer appreciates our work and are grateful for the overall positive comments.

      Weaknesses:

      (1) Throughout the manuscript (with one exception, lines 76-78), the authors quantified OL densities instead of contributions to the total OL population (as a % of ASPA for example). This means that the reader is left with only a rough estimation of the different contributions.

      We thank the reviewer for this constructive suggestion. We have replaced the density quantification (Figure 2F and 3D in the original manuscript) with contributions to the total OL population (% of ASPA) (Figure 2J and 2N in the revised manuscript).

      (2) All images and quantifications have been confined to one level of the cortex and the potential of the MGE and the LGE/CGE to produce oligodendrocytes for more anterior and more posterior cortical regions remains unexplored.

      The quantifications were not confined to one level of the cortex but were performed on brain sections ranging from Bregma +1.94 to -2.80 mm, as shown in Supplementary Figure 2A-B in the original manuscript. We apologize for not having stated and presented this information clearly enough, and for the confusion it may have caused. In the revised manuscript, we have added relevant descriptions in the “Material and Methods” section (line 199-200) and schematics along with representative images of more anterior and more posterior cortical regions (Supplementary Figure 2A-D).

      (3) Hence, the statement that "In summary, our findings significantly revised the canonical model of forebrain OL origins (Figure 4A) and provided a new and more comprehensive view (Figure 4B )." (lines 111, 112) is not really accurate as the findings are neither new nor comprehensive. Published manuscripts have already shown that (a) cortical OLs are mostly generated from the cortex [Tripathi et al 2011 (https://doi.org/10.1523/JNEUROSCI.6474-10.2011), Winker et al 2018 (https://doi.org/10.1523/JNEUROSCI.3392-17.2018) and Li et al (https://doi.org/10.1101/2023.12.01.569674)] and (b) MGE-derived OLs persist in the cortex [Orduz et al 2019 (https://doi.org/10.1038/s41467-019-11904-4) and Li et al 2024 (https://doi.org/10.1101/2023.12.01.569674)]. Extending the current study to different rostro-caudal regions of the cortex would greatly improve the manuscript.

      As explained in the response to comment (2), our original quantifications included different rostro-caudal regions of the cortex. In the revised manuscript, we have added more schematics and representative images in the Supplementary Figure 2 for better illustration to resolve the concern of comprehensiveness.

      We thank the reviewer for listing and summarizing highly relevant published research along with the parallel study by Li et al. submitted to eLife. We apologize for the omission of the first two references in our original manuscript and have cited them in appropriate places (ref [10] and ref [11] in the revised manuscript). However, we believe these works do not compromise the novelty and significance of our work for the following reasons:

      (1) Tripathi et al. 2011 (ref [10] in the revised manuscript) analyzed OL lineage cells in the corpus callosum and the spinal cord, but not in the cortex and anterior commissure. Their analysis was performed in juvenile mice (P12/13), not in adulthood. Most importantly, their analysis of ventrally derived OL lineage cells relied on lineage tracing using Gsh2Cre, which in fact also labels OLs derived from Gsh2+ dorsal progenitors. In contrast, we analyzed mature OLs in the cortex, corpus callosum and anterior commissure in 2-month-old adult mice. We used an intersectional and subtractive strategy to label OLs derived from dorsal, LGE/CGE and MGE/POA origins. Our strategy distinguished the two ventral lineages (LGE/CGE vs. MGE/POA) and avoided mixed labeling of OLs from ventral and dorsal Gsh2+ progenitors.

      (2) Winkler et al. 2018 (ref [11] in the revised manuscript) analyzed OLs derived from dorsal progenitors but only quantified those in the gray matter and the white matter of the somatosensory cortex. Their quantification relied on co-staining with Olig2/Sox10, and thereby included both oligodendrocyte precursors (OPCs) and OLs. In contrast, we analyzed mature OLs from three origins and quantified not only neocortical regions (Mo and SS) but also an archicortical region (Pir). Our analysis revealed that although dorsally derived OLs dominate the neocortex, ventrally derived OLs, especially the LGE/CGE-derived ones, dominate the piriform cortex.

      (3) Orduz et al. 2019 (ref [7] in the original and revised manuscripts) mainly focused on POA-derived OLs in the somatosensory cortex. Although they performed limited analysis of MGE/POA-derived OPCs at postnatal days 10 and 19, no quantification of MGE/POA-derived OLs was performed in terms of their density, contribution to the total OL population, or spatial distribution in the cortex. In contrast, we performed systematic quantification of these aspects to demonstrate that MGE/POA-derived OLs make a small but sustained contribution to the cortex, with a distribution pattern distinct from that of dorsally derived OLs.

      (4) Li et al. 2024 (ref [17] in the original manuscript and [19] in the revised manuscript) is a parallel study submitted to eLife. Their and our independent discoveries nicely complemented each other. Using different sets of techniques and experiments but some shared genetic mouse models, we both found that the LGE/CGE made a minimal contribution to neocortical OLs. Their analysis of the prenatal and early postnatal stages, together with our analysis of the adult brain, painted a more comprehensive picture of cortical oligodendrogenesis. The uniqueness of our work is that we performed systematic quantification of all three origins and uncovered their differential contributions to the neocortex, piriform cortex, corpus callosum and anterior commissure.

      In summary, our work developed novel strategies to faithfully trace OLs from the three different origins and performed systematic analysis in the adult brain. Our data uncovered their differential contributions to the neocortex, piriform cortex and the two commissural white matter tracts, which differ significantly not only from the canonical view but also from other previous studies in the aspects discussed above. We believe our discoveries did significantly revise the canonical model of forebrain OL origins and provided a new and more comprehensive view.

      Reviewer #3 (Public Review):

      [...] Intriguingly, by using an indirect subtraction approach, they hypothesize that both Emx1-negative and Nkx2.1-negative cells represent the progenitors from lateral/caudal ganglionic eminences (LC), and conclude that neocortical OLs are not derived from the LC region. The authors claim that Gsh2 is not exclusive to progenitor cells in the LC region (PMID: 32234482). However, Gsh2 exhibits high enrichment in the LC during early embryonic development. The presence of a small population of Gsh2-positive cells in the late embryonic cortex could originate/migrate from Gsh2-positive cells in the LC at earlier stages (PMID: 32234482). Consequently, the possibility that cortical OLs derive from Gsh2+ progenitors in the LC could not be conclusively ruled out. Notably, a population of OLs migrating from the ventral to the dorsal cortical region was detected after eliminating dorsal progenitor-derived OLs (PMID: 16436615).

      The indirect subtraction data for LC progenitors drawn from the OpalinFlp-tdTOM reporter in Emx1-negative and Nkx2.1-negative cells in the OpalinFlp::Emx1Cre::Nkx2.1Cre::RC::FLTG mouse line present some caveats that could influence their conclusion. The extent of activity from the two Cre lines in the OpalinFlp::Emx1Cre::Nkx2.1Cre::RC::FLTG mice remains uncertain. The OpalinFlp-tdTOM expression could occur in the presence of either Emx1Cre or Nkx2.1Cre, raising questions about the contribution of the individual Cre lines. To clarify, the authors should compare the tdTOM expression from each individual Cre line, OpalinFlp::Emx1Cre::RC::FLTG or OpalinFlp::Nkx2.1Cre::RC::FLTG, with the combined OpalinFlp::Emx1Cre::Nkx2.1Cre::RC::FLTG mouse line. This comparison is crucial as the results from the combined Cre lines could appear similar to only one Cre line active.

      Overall, the authors provided intriguing findings regarding the origin and fate of oligodendrocytes from different progenitor cells in embryonic brain regions. However, further analysis is necessary to substantiate their conclusion about the fate of LC-derived OLs convincingly.

      We thank the reviewer for these thoughtful comments. We agree with the reviewer that the presence of Gsh2-positive cells in the late embryonic cortex by itself cannot rule out the possibility that they originate/migrate from Gsh2-positive cells in the LC at earlier stages. Staining dorsal-lineage intermediate progenitors with Gsh2, or performing intersectional lineage tracing using Gsh2Cre along with a dorsal-specific Flp driver, would provide more direct evidence on this issue. Nonetheless, as our lineage tracing of LGE/CGE-derived OLs did not employ Gsh2Cre, doubts about the identity of Gsh2+ cortical progenitors should not affect the interpretation of our data.

      Regarding the subtractive LCOL labeling strategy used in our study, we wonder if there was any misunderstanding by the reviewer. As stated in our manuscript (line 59-61) and reiterated by the reviewer, OpalinFlp::Emx1Cre::Nkx2.1Cre::RC::FLTG labels OLs derived from progenitors that express neither Emx1Cre nor Nkx2.1Cre. As these two progenitor pools do not overlap with each other, there is a purely additive effect of their actions. If there is any concern about efficiency and specificity, it would be inadequate Cre-mediated recombination leading to mislabeling of dOLs or MPOLs as LCOLs (i.e., OLs derived from Emx1- or Nkx2.1-expressing progenitors that were not successfully “subtracted” and thereby “wrongly” retained RFP expression). Therefore, the bona fide LGE/CGE-derived OLs could only be fewer, not more, than the RFP+ LCOLs labeled by our subtractive strategy, even if either of the Cre lines did not work efficiently enough. In any case, this would not affect our conclusion that LGE/CGE-derived OLs make a minimal contribution to the neocortex, as the “ground truth” contribution by the LGE/CGE could only be less, not more, than what we observed using the current strategy.

      In support of our conclusion, a parallel study by Li et al. 2024 (ref [17] in the original manuscript; ref [19] in the revised manuscript) also provided independent experimental evidence that “any contribution of oligodendrocyte precursors to the developing cortex from the lateral ganglionic eminence is minimal in scope” (quoted from its eLife assessment). In addition, in their revision, they performed Gsh2 immunostaining in the P0 Emx1Cre::HG-loxP mouse and found that nearly all Gsh2+ cells in the cortical SVZ were derived from the Emx1+ lineage. We are glad that this additional piece of evidence further clarified the case, but we still want to emphasize that the subtractive strategy we took was designed purposefully to avoid the potential uncertainty of Gsh2Cre and to more faithfully label LGE/CGE-derived OLs. Therefore, the validity of our conclusion about the fate of LC-derived OLs is independent of the question of the identity of Gsh2+ cortical progenitors and stands well by itself.

      We hope that these explanations have adequately addressed the reviewer’s concerns. 

      Recommendations for the authors:

      Reviewer #2 (Recommendations For The Authors):

      In Figures 2C, 2D, 2E and 3D, the authors should provide counts of labelled cells as a % of ASPA+ cells. This will give an accurate picture of the contribution of the different progenitor regions to OLs.

      The graphs in Figure 2F are unnecessary since they are simply repeats of C-E but re-arranged.

      We thank the reviewer for the valuable suggestions. These two recommendations are related, and we therefore made the following changes. We replaced the density quantification in Figure 2F and 3D with % of ASPA (Figure 2J and 2N in the revised manuscript) to give an accurate picture of the contribution of the different progenitor regions to OLs, as suggested by the reviewer. We still retained the density counts in Figure 2C-E (Figure 2G-I in the revised manuscript). Together with the quantifications of rostral-caudal and laminar distributions presented in Supplementary Figure 2, these data demonstrate that OLs from different origins display distinct spatial distribution patterns.

      At what ages were the quantifications performed in all the figures?

      We apologize for the omission of this information in the original manuscript. All quantifications were performed in 2-month-old adult mice. We have added this information in the “Material and Methods” section of the revised manuscript.

      In 2D, and 3B the GFP should have been activated but the authors do not show it or quantify it presumably because GFP would flood the sections in the presence of Emx1Cre. Nevertheless, since eGFP is shown in the diagram in 2B, the authors should mention why they chose not to show it.

      We thank the reviewer for the helpful comment and the suggestion. We have modified the schematic in Figure 2B and added explanation in the figure legend (line 308-313). We also added a schematic in Supplementary Figure 1A along with images of GFP channel in Supplementary Figure 1D (line 338-350).

      All the main figures and supplementary figures are too small to see properly.

      We are sorry that there was severe compression of images in the combined manuscript file at the conversion step during the initial submission. We apologize for the compromised image quality and re-uploaded full-size figures as individual files on BioRxiv soon after receiving the reviews. For the revised manuscript, we have also taken care to upload full-size figures at high resolution as individual files to ensure their quality of presentation.

      Supplementary Figure 2E is unnecessary and perhaps misleading the reader that cortical-derived OLs have a preference for the lower layers whereas the distribution may simply reflect the distribution of OLs in the cortex.

      We thank the reviewer for the helpful comment and the suggestion. We have removed this panel and replaced it with quantifications of the relative laminar distributions of the total (ASPA+) OLs along with those from the three different origins (Supplementary Figure 2G in the revised manuscript). Indeed, the preference of dorsally-derived OLs for the lower layers mirrors the distribution of total OLs in the cortex, while the MGE/POA-derived OLs deviate significantly from the others and exhibit a higher preference for layer 4.

      Quantification of labelled cells as a % of ASPA should also be performed in Supplementary Figure 3.

      We thank the reviewer for this suggestion. In the revised manuscript, we have included quantifications of labelled cells as % of ASPA for both OpalinFlp::Emx1Cre::Ai65 and OpalinFlp::Nkx2.1Cre::Ai65 (Figure 2J and N). The sum of these two data sets is equivalent to that of OpalinFlp::Emx1Cre::Nkx2.1Cre::Ai65 shown in Supplementary Figure 3, and we therefore did not perform additional quantifications, to avoid redundancy.

      Imaging and quantification should be extended to more posterior regions of the cortex to find out whether the contribution is different from the areas already examined.

      We thank the reviewer for the suggestion on imaging and apologize for the confusion about the range of quantification. As explained in the response to comment (2) under Weaknesses, the quantifications were not confined to one level of the cortex but were performed on brain sections ranging from Bregma +1.94 to -2.80 mm, as shown in Supplementary Figure 2A-B in the original manuscript. In the revised manuscript, we have added relevant descriptions in the “Material and Methods” section (line 199-200) and schematics along with representative images of more anterior and more posterior cortical regions (Supplementary Figure 2A-D).

      Reviewer #3 (Recommendations For The Authors):

      (1) The authors should provide Opalin reporter expression data across various brain regions at different developmental stages to clarify the expression pattern of the reporter.

      We appreciate the reviewer’s comment. We chose to perform all quantifications in adult mice because Opalin is a well-established marker for differentiated OLs and recombinase-dependent reporter expression is cumulative and irreversible. If there were any non-specific labeling at an earlier developmental stage, it would be retained and manifest at the timepoint we examined as well. In other words, the fact that we detected no non-specific labeling in the current dataset, but only labeling confined to mature OLs, ensures that no non-OL labeling was present at earlier timepoints. As shown in Figure 1D-F, reporter expression activated by the Opalin driver shows high OL specificity in all analyzed brain regions. This is further corroborated by results from combinatorially labeled samples (Figure 2 and Supplementary Figure 2), in which only OLs, and no other cell types, were labeled in all analyzed brain regions. Following the reviewers’ suggestions, we have added representative images of more rostral and more caudal cortical regions (Supplementary Figure 2B-D), which also show highly specific OL labeling.

      (2) In Figure 1D, please specify the developmental stage of the mice used for staining.

      We apologize for the omission of this information in the original manuscript. All quantifications were performed in 2-month-old adult mice. We have added this information in the “Material and Methods” section (line 199-200) of the revised manuscript.

      (3) The authors should clarify if the Opalin reporter expressed in OPCs and astrocytes at developmental stages of mice, such as P0, P7, and P30.

      We appreciate the reviewer’s comment, but as explained in the response to comment (1), Opalin is a well-established marker for differentiated OLs and is not expressed in OPCs or astrocytes. As shown in Figure 1D-E, reporter expression is confined to CC1+ differentiated OLs with no colocalization with Sox9 (an astrocyte marker). In support of this observation, only ASPA+ differentiated OLs, but no OPCs or astrocytes, were labeled in any of the combinatorial lineage tracing samples generated using this line combined with progenitor-Cre lines. In addition to marker staining, we also did not observe any RFP+ cells with OPC or astrocyte morphology. As recombinase-dependent reporter expression is cumulative and irreversible, the fact that no non-specific labeling was observed in the adult brain retrospectively demonstrates the specificity of Opalin-Flp at earlier developmental stages.

      (4) In Figure 1E, authors should address why the efficiency of the tdTomato line is notably lower compared to that of H2B-GFP and whether the stability of reporters could impact the conclusions drawn.

      The difference in reporting efficiency is mainly caused by differences inherent to the two reporting systems. The TRE-RFP reporter is derived from Ai62, composed of a Tet response element and tdTomato inserted into the T1 TIGRE locus. The tdTomato expression is driven by tTA-TRE transcriptional activation. The HG-loxP reporter is derived from HG-Dual, composed of a CAG promoter, a frt-flanked STOP cassette, and H2B-GFP inserted into the Rosa26 locus. The H2B-GFP expression is driven by CAG promoter after Flp-mediated removal of the STOP cassette. A Flp-dependent tdTomato reporter designed in the same way as the HG-FRT reporter would have similar efficiency. In fact, the RC::FLTG reporter can be viewed as such a reporter in the absence of Cre, which did show similarly high efficiency as HG-FRT and supported efficient subtractive labeling of LGE/CGE-derived OLs. We apologize for a typo in the title of the Y-axis of the right panel in the original Figure 1F which may have caused potential misunderstanding. The “RFP+CC1+/CC1” should be “XFP+CC1/CC1”. We have corrected this mistake and revised the figure legend for clearer description of the data (Line 293-302 in the revised manuscript).

      (5) In Figure 2, please clarify the developmental stage of the mice used for staining. Authors should present the eGFP image in addition to tdTOM.

      We apologize for the omission of the age information in the original manuscript. All quantifications were performed in 2-month-old adult mice. We have added this information in the “Material and Methods” section (line 199-200) of the revised manuscript. We thank the reviewer for the suggestion on the eGFP image and have presented it in Supplementary Figure 1 in the revised manuscript.

      (6) in Figure 2D, authors should display the eGFP image alongside the tdTomato image. It is difficult to assess the efficiency of Emx-Cre and Nkx2.1-Cre.

      We thank the reviewer for the suggestion on the eGFP image and have presented it in Supplementary Figure 1D in the revised manuscript. There are two reasons why we chose to present it in the supplementary figure instead of the main figure. First, we added ASPA staining in the green channel along with quantifications of RFP cells as % of ASPA in Figure 2 in the revised manuscript, following reviewer #2’s suggestion. Second, as pointed out by reviewer #2, GFP would flood the sections in the presence of Emx1Cre and could be quite distracting if shown together with RFP.

      We were not entirely sure what exactly the reviewer meant by “assess the efficiency of Emx-Cre and Nkx2.1-Cre”, but we believe that the quantifications of RFP cells as % of ASPA clarify the contribution of each origin to the total OLs (Figure 2J and 2N in the revised manuscript).

      (7) Figure 3 depicts the entire brain, replicating the image presented in Figure 2. It would be beneficial to consolidate Figures 2 and 3, as they showcase identical brain scans of different regions.

      We thank the reviewer for the constructive suggestion and have consolidated Figures 2 and 3 in the original manuscript into Figure 2 in the revised manuscript.

    2. Reviewer #2 (Public Review):

      In this manuscript, Cai et al use a combination of mouse transgenic lines to re-examine the question of the embryonic origin of telencephalic oligodendrocytes (OLs). Their tools include a novel Flp mouse for labelling mature oligodendrocytes and a number of pre-existing lines (some previously generated by the last author in Josh Huang's lab) that allowed combinatorial or subtractive labelling of oligodendrocytes with different origins. The conclusion is that cortically-derived OLs are the predominant OL population in the motor and somatosensory cortex and underlying corpus callosum, while the LGE/CGE generates OLs for the piriform cortex and anterior commissure rather than the cerebral cortex. Small numbers of MGE-derived OLs persist long-term in the motor, somatosensory and piriform cortex.

      Strengths:

      The strength and novelty of the manuscript lie in the elegant tools generated and used. These have enabled the resolution of the issue regarding the contribution of different telencephalic progenitor zones to the cortical oligodendrocyte population.

      Comments on latest version:

      The revised manuscript by Cai et al has addressed all the issues raised. I have some minor comments:

      Figure 2: The y axis in Figure 2L should be the same as the y axis in 2M to make the contribution to Mo and SS clearer.

      Figure 3: Although this is clear in the figure, A and B should be labelled as classical model and new model to help the reader understand immediately what the two panels show.

      Suppl Fig 2: It is not clear what 1-7 represent. It should be made clear in the legend which areas have been pooled into the different bins. The X axis should be labelled.

    3. eLife assessment

      In this study the authors revisited the question of the embryonic origin of telencephalic oligodendrocytes using some new and powerful genetic tools. There is convincing evidence to support previous suggestions of a predominantly cortical origin of oligodendrocytes in the cerebral cortex; however, the new studies suggest that LGE/CGE-derived oligodendrocytes make a modest contribution in some areas, while MGE/POA-derived oligodendrocytes make a small but enduring contribution. The findings are valuable and should be of interest to developmental and myelin biologists.

    1. Author response:

      Public Reviews:

      Reviewer #1 (Public Review): 

      [...] Strengths: 

      The method the authors propose is a straightforward and inexpensive modification of an established split-pool single-cell RNA-seq protocol that greatly increases its utility, and should be of interest to a wide community working in the field of bacterial single-cell RNA-seq. 

      Weaknesses: 

      The manuscript is written in a very compressed style and many technical details of the evaluations conducted are unclear and processed data has not been made available for evaluation, limiting the ability of the reader to independently judge the merits of the method. 

      Thank you for your thoughtful and constructive review of our manuscript. We appreciate your recognition of the strengths of our work and the potential impact of our modified PETRI-seq protocol on the field of bacterial single-cell RNA-seq. We are grateful for the opportunity to address your concerns and improve the clarity and accessibility of our manuscript.

      We acknowledge your feedback regarding the compressed writing style and lack of technical details, which are constrained by the requirements of the Short Report format in eLife. We will address these issues in our revised manuscript as follows:

      (1) Expanded methodology section: We will provide a more comprehensive description of our experimental procedures, including detailed protocols for the ribosomal depletion step and data analysis pipeline. This will enable readers to better understand and potentially replicate our methods.

      (2) Clarification of technical evaluations: We will elaborate on the specifics of our evaluations, including the criteria used for assessing the efficiency of ribosomal depletion and the methods employed for identifying and characterizing subpopulations within the E. coli biofilm model.

      (3) Data availability: We apologize for the oversight in not making our processed data readily available. We have deposited all relevant datasets, including raw and source data, in appropriate public repositories (GEO accession number: GSE260458) and will provide clear instructions for accessing these data in the revised manuscript.

      (4) Supplementary information: To maintain the concise nature of the main text while providing necessary details, we will include additional supplementary information. This will cover extended methodology, detailed statistical analyses, and comprehensive data tables to support our findings.

      (5) Discussion of limitations: We will include a more thorough discussion of the potential limitations of our modified protocol and areas for future improvement.

      We believe these changes will significantly improve the clarity and reproducibility of our work, allowing readers to better evaluate the merits of our method.

      Reviewer #2 (Public Review): 

      [...] Strengths: 

      The introduced rRNA depletion method is highly efficient, with depletion for E. coli resulting in over 90% of reads containing mRNA. The method is ready to use with existing PETRI-seq libraries, which is a large advantage, given that no other rRNA depletion methods have been published for split-pool bacterial scRNA-seq methods. Therefore, the value of the method for the field is high. There is also evidence that a small number of cells at the bottom of a static biofilm express PdeI, which causes the elevated c-di-GMP levels that are associated with persister formation. Given that PdeI is a phosphodiesterase, which is supposed to promote hydrolysis of c-di-GMP, this finding is unexpected.

      Weaknesses: 

      With the descriptions and writing of the manuscript, it is hard to place the findings about PdeI into the existing context (i.e., it is well known that c-di-GMP is involved in biofilm development and is heterogeneously distributed in several species' biofilms; it is also known that E. coli phosphodiesterases regulate this second messenger, i.e. https://journals.asm.org/doi/full/10.1128/jb.00604-15). There is also no explanation for the apparently contradictory upregulation of c-di-GMP in cells expressing higher PdeI levels. Perhaps the examination of the rest of the genes in cluster 2 of the biofilm sample could be useful to explain the observed association.

      Thank you for your thoughtful and constructive review of our manuscript. We are pleased that the reviewer recognizes the value and efficiency of our rRNA depletion method for PETRI-seq, as well as its potential impact on the field. We would like to address the points raised by the reviewer and provide additional context and clarification regarding the function of PdeI in c-di-GMP regulation.

      We acknowledge that c-di-GMP’s role in biofilm development and its heterogeneous distribution in bacterial biofilms are well studied. We appreciate the reviewer's observation regarding the seemingly contradictory relationship between increased PdeI expression and elevated c-di-GMP levels. This is indeed an intriguing finding that warrants further explanation.

      PdeI was predicted to be a phosphodiesterase responsible for c-di-GMP degradation. This prediction is based on sequence analysis: PdeI contains an intact EAL domain, which is known for degrading c-di-GMP. However, it is noteworthy that PdeI also contains a divergent GGDEF domain, which is typically associated with c-di-GMP synthesis. This dual-domain architecture suggests a potential for complex regulatory roles. As reported, knockout of the major phosphodiesterase PdeH in E. coli leads to the accumulation of c-di-GMP. Furthermore, a point mutation in PdeI's divergent GGDEF domain (G412S) in this PdeH knockout strain resulted in decreased c-di-GMP levels, implying that the wild-type GGDEF domain of PdeI has a role in maintaining or increasing c-di-GMP levels in the cell.

      Additionally, PdeI contains a CHASE (cyclases/histidine kinase-associated sensory) domain. Combined with our experimental results demonstrating that PdeI is a membrane-associated protein, we predict that PdeI functions as a sensor that integrates environmental signals with c-di-GMP production under complex regulatory mechanisms. The experimental evidence, along with the domain analysis, suggests that PdeI could contribute to c-di-GMP synthesis, rebutting the notion that it solely functions as a phosphodiesterase. Furthermore, our single-cell experiments showed a positive correlation between PdeI expression levels and c-di-GMP levels (Fig. 2J). HPLC LC-MS/MS analysis further confirmed that PdeI overexpression (induced by arabinose) upregulated c-di-GMP levels (Fig. 2K). Importantly, in our HPLC LC-MS/MS analysis, we compared the PdeI overexpression strain with the wild-type MG1655 strain, thereby excluding the influence of other genes in cluster 2.

      In summary, while PdeI is predicted to be a phosphodiesterase based on its sequence and the presence of an EAL domain, the additional presence of a divergent GGDEF domain, together with the experimental evidence, suggests that PdeI has a function in upregulating c-di-GMP levels. These findings support the hypothesis that PdeI may have both synthetic and regulatory roles in c-di-GMP metabolism.

      Welcome back and in this demo lesson you're going to get some experience interacting with CloudWatch. You're going to create an EC2 instance, cause that instance to consume some CPU capacity, and then monitor exactly how that looks within CloudWatch. Now to do this in your own environment, you'll just need to make sure that you're logged into the general AWS account as the IAM admin user, and as always, make sure that you have the Northern Virginia region selected, which is US-East-1. Once you've got those set correctly, click in the search box at the top and type EC2, find the EC2 service, and then just go ahead and open that in a brand new tab.

      Now we're going to skip through the instance creation process because you've done that in a previous demo lesson. So just go ahead and click on Instances and then Launch Instance. Under Name, I just want you to put CloudWatch Test as the instance name. Then scroll down, and under the Amazon Machine Image to use, go ahead and select Amazon Linux. We're going to pick the Amazon Linux 2023 version, so that's the most recent version of this AMI. It should be listed as Free Tier Eligible, so just make sure that's the case. We'll leave the architecture set to 64-bit x86 and scroll down. It should already be set to an instance type which is free tier eligible, in my case t2.micro. We'll be connecting to this instance using EC2 Instance Connect, so we won't be using an SSH key pair. In the key pair dropdown, click and select Proceed without a key pair; we won't need one because we won't be connecting with a local SSH client. Scroll down further still, and under Network Settings click on Edit and just make sure that the default VPC is selected. There should only be one in this list, but just make sure that it's set as default. Under Subnet we can leave this as No Preference because we don't need to set one. We will need to make sure that Auto Assign Public IP is set to Enable.

      Under Create Security Group, for the name and for the description just go ahead and type CloudWatch SG, so CloudWatch SG for both the security group name and the description. The default security group rule should be fine, because it allows SSH to connect from any source location, and that's what we want. Scroll down further still, and we'll be leaving storage as default; remember, this is set from the AMI that we pick. Now because this is a CloudWatch lesson, we're going to set something a little bit different. So expand Advanced Details, then scroll down and look for Detailed CloudWatch Monitoring. Now this does come at an additional cost, so you've got a couple of options: you can just watch me do this, or you can do this demo without Detailed Monitoring enabled. If you don't enable this, it will be entirely free, but you might need to wait a little bit longer for things to happen in the demo lesson, so keep that in mind.
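
      As a side note, if you prefer the command line, detailed monitoring can also be toggled on a running instance with the AWS CLI; here's a minimal sketch, using a placeholder instance ID.

        # Enable 1-minute detailed monitoring on a running instance (additional cost applies).
        # i-0123456789abcdef0 is a placeholder instance ID.
        aws ec2 monitor-instances --instance-ids i-0123456789abcdef0

        # Switch back to free 5-minute basic monitoring.
        aws ec2 unmonitor-instances --instance-ids i-0123456789abcdef0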

      What I'm going to do is enable detailed CloudWatch monitoring, and if we click on Info here we can see some details about exactly what that does; we can also open this in a new tab and explore what additional charges apply if we want to enable it. In this case I'm going to enable it. You don't have to; it's not a huge charge, but I think for me demoing this to you it's good that I enable it. You might just have to wait a little bit longer for things to happen in the demo. Now once all of that is set, just scroll all the way down to the bottom and go ahead and click Launch Instance. This might take a few minutes to create. We're first waiting for the success dialog, and once that shows, we can go ahead and click on View All Instances. Go ahead and click refresh until you see the instance. It will start off in a pending state with nothing listed under status check. After a few moments the status will change; we'll see that it's in a running state, and then we need to wait for this to change to two of two status checks before we continue. So go ahead and pause the video, wait for your status check to update, and once it does we're good to continue.

      Okay so now this has changed to two out of two checks passed, and that's good, that's what we want. So it should display Running under instance state and then two out of two checks passed under status check. Once this is the case, go ahead and click in the search box at the top and just type CloudWatch, locate the CloudWatch service, and then open that in a brand new tab. This is the CloudWatch console, and it's here where we're going to create a CloudWatch alarm. Now if you see anything about a new UI or new features, you can just go ahead and close down that dialog. Once we're here, go ahead and click on Alarms on the left and then click on All Alarms. This will show a list of all the alarms that you've configured within CloudWatch, and currently there aren't any. What we're going to do is create an alarm. So click on Create Alarm, and then click on Select Metric. Once we're on this screen, scroll down, and we're going to be looking for an EC2 metric, because we need to find the CPU utilization metric, which is inside the EC2 namespace; in other words, it comes from the EC2 service. So go ahead and click on EC2, and then we're looking for per-instance metrics. So click on Per-Instance Metrics, and this will show all of the EC2 instance metrics that we currently have. Now if I scroll through this list, what you'll see is that I have two different instance IDs, because I'm using this account to create all of these demo lessons, so in my case I see previous instances. If you're doing this in your account, go back to the EC2 Management Console, where you can see your instance ID. Just remember the last four digits of this instance ID, and then go back to the CloudWatch console. If you have more than one instance listed in CloudWatch, look for the instance ID that ends with the four digits that you just noted down, and then from that list you need to identify CPU utilization. And so I'm going to check the box next to this metric. Now this is the metric that monitors, as the name suggests, CPU utilization on this specific instance ID, which is our CloudWatch Test instance. If I scroll up, I'm able to see any data that's already been gathered for this specific instance. As you can see, it's not a great deal at the moment because we've only just launched this instance. So I'm going to go ahead and click on Select Metric, and then, because we're creating an alarm, it's going to ask us for the metric and conditions we want to evaluate.

      So I'm going to scroll down, and under Conditions, I'm going to pick Static, because I want this alarm to go into an alarm state when something happens to the CPU utilization. So I'm going to ask CloudWatch that whenever the CPU utilization is greater than or equal to a specific value, it should go into an alarm state. That value is going to be 15%. So whenever the CPU utilization on this EC2 instance is greater than or equal to 15%, this alarm will go into the alarm state. So I'm going to go ahead and click on Next. Now you can set this up so that if this alarm goes into an alarm state, it can notify you using SNS. That's useful if this is in production usage, but in this case we're not using it in production, so I'm going to go ahead and click on Remove. Scroll down to the bottom; there are also other actions that you could pick, such as an auto scaling action, an EC2 action, or a Systems Manager action, but we're going to be talking about these in much more detail as we move through the course. For now we're going to keep this simple; it's just going to be a basic alarm which goes into an alarm state or not. So click on Next, and then under Alarm Name I'm going to put CloudWatch Test High CPU, and you should do the same. So type that, click on Next, scroll down to the bottom, and create that alarm.
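
      For reference, the same alarm could be created from the AWS CLI instead of the console. This is a minimal sketch, not part of the demo steps; the alarm name and instance ID are placeholders standing in for the values you set in the console.

        # Static-threshold alarm: ALARM when average CPUUtilization >= 15%.
        # The 60-second period assumes detailed monitoring; use 300 with basic monitoring.
        # The alarm name and instance ID are placeholder values.
        aws cloudwatch put-metric-alarm \
            --alarm-name "CloudWatchTest-HighCPU" \
            --namespace AWS/EC2 \
            --metric-name CPUUtilization \
            --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
            --statistic Average \
            --period 60 \
            --evaluation-periods 1 \
            --threshold 15 \
            --comparison-operator GreaterThanOrEqualToThreshold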

      Now initially this alarm state will be Insufficient Data, because CloudWatch hasn't yet gathered enough data on the CPU utilization to generate the state. That's fine, because we've got another thing that we need to do first. So now move back to the EC2 console, and we're going to connect to this instance using EC2 Instance Connect; remember, that's the web-based way to get access to this instance. So over the top of the CloudWatch Test instance, right click and go to Connect. Make sure that EC2 Instance Connect is selected, so click that tab. You can leave everything as default, click on Connect, and that will connect you to this EC2 instance. Now at this point, we need to install an application called stress on this EC2 instance. Stress is an application which places artificial CPU load onto a system, and that's what we want in order to see how CloudWatch reacts. To install stress, we're going to run a command which uses the yum package manager to install the stress utility. So go ahead and run that command and then clear the screen. Now the stress command can be run by typing stress, and what we're going to do first is a double hyphen help just to get the help for this command. Then we're going to run stress and specify the number of CPUs to use, and we want that number to match the number of virtual CPUs that this instance has. A t2.micro has one virtual CPU, so the command that we need to run is stress, space, hyphen c, space, 1, then a space, and then hyphen t, which is the timeout option and specifies how long we want to run this for. We're going to specify 3600, so hyphen t, space, 3600, which will run the stress for 3600 seconds, and that's plenty for us to see how this affects the metrics which are being monitored by CloudWatch.
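
      Put together, the commands we run inside the instance look like the sketch below. The install line assumes the stress package is available from the instance's configured package repositories; on some Amazon Linux images you may need a different package source or the stress-ng package instead.

        # Install the stress utility (package availability is an assumption).
        sudo yum install -y stress
        clear

        # Show the utility's options.
        stress --help

        # Compose the load command: 1 vCPU worker for 3600 seconds.
        # We type it now, but only press Enter after checking the alarm in the console.
        stress -c 1 -t 3600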

      Now what I want to do before we do that is go back to the CloudWatch console. You might need to refresh if you haven't seen the state update yet. In my case it's already showing as OK, which means that it's now got access to some data. So click on this alarm and you'll be able to see that the CPU started off at very low levels and then spiked up; potentially, in my case, that's because we've just installed some software. But note here this red line, which indicates the alarm level for this alarm. If the CPU utilization, which is in blue, exceeds this red line, then this alarm will move from OK to In alarm, and that's what we want to simulate. So go back to the instance and press Enter to run this stress command. That's going to begin placing high levels of CPU load on this instance, and what we'll see over the next few minutes is that CloudWatch will detect this additional CPU load, and it will cause this alarm to go from OK into an alarm state. So move back to the CloudWatch console and just keep hitting refresh until you see a change in the alarm state. Again, this might take a few minutes. What I suggest you do is pause the video and wait for your alarm to change away from OK, and then you're good to continue.

      Now in my case this only took a few minutes, and as you can see, the CPU load reported by this alarm in CloudWatch went from this value here and spiked all the way up to this value, which is well above the 15% alarm threshold. So the alarm changed from OK to In alarm based on this excessive CPU, and if we keep monitoring this over time, you'll see that this trend continues, because this CPU is under extremely high load, having been artificially simulated using the stress utility. Now if we go back to this EC2 instance and press Ctrl+C, this will exit out of the stress utility; at this point the artificial CPU load has been removed, and the instance will gradually move back down to its normal levels, which are very close to zero. Again, this may take a few minutes to be reflected inside CloudWatch. So keep refreshing once you've cancelled the stress utility and wait for the reported CPU utilization to move back down below the alarm value. That might take a few minutes, so go ahead and pause the video and wait for this blue line to move back under the red line; once it does, you should see that the alarm state changes from In alarm to OK again.

      In my case it took a few minutes for the blue line to move below the alarm threshold, and then a few more minutes for the alarm to change from In alarm to OK. But as you can see, at this point that's exactly what's happened: once the CPU usage goes below the configured threshold value, the alarm changes back to an OK state. And at this point, that's everything that I wanted to cover in this demo lesson on CloudWatch. CloudWatch is a topic that I'm going to be going into in much more detail later on in the course; this has just been a really brief introduction to the product and how it interacts with EC2. Now the only thing left is to clear up the account and put it back into the same state as it was at the start of this lesson. To do that, go ahead and click on All alarms, select the CloudWatch test high CPU alarm that you created, click on the Actions dropdown, select Delete, and then confirm that deletion. Then go back to EC2, go to the instances overview, right-click on the CloudWatch test instance, making sure that it is the correct instance, so CloudWatch test, and then select Terminate instance and confirm that termination. That's going to move through a few states: it will start with shutting down, and you need to wait until the instance is in a terminated state. Go ahead and pause the video and wait for your instance to change into terminated.

      Okay, so once your instance has terminated, on the menu on the left scroll down and go to Security Groups. Select the CloudWatch SG security group, making sure that you do pick the correct one, so CloudWatch SG. Click on Actions, scroll down, select Delete security groups, and click on Delete. At that point, the account is back in the same state as it was at the start of this demo lesson. So thanks for watching this video. I hope you gained some experience of the CloudWatch product, and again, we're going to be talking about it in much more detail later in the course. At this point, though, go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next.

    1. Author response

      Reviewer #1 (Public Review):

      […] Weaknesses:

      This work explores an interesting question on the regulation of MyoD+ progenitors and the defects of this process in skeletal muscle differentiation by SRSF2, but spreads out in many directions rather than focusing on the key defects. A number of approaches are used, but they lack robust mechanistic analysis of the defects that result in impaired muscle differentiation. Specifically, the role of SRSF2 in splicing appears to be a misfit here and does not explain the primary defects in the migration of MyoD+ progenitors. There are concerns about the scRNA-seq data and the many transcripts in muscle biology that are not expressed in muscle cells. Focusing on the main defects and providing additional experimental evidence to clarify fusion vs. precocious differentiation vs. reduced differentiation will strengthen this work.

      (1) The analysis of RNA-seq data (Figure 2) is limited, and it is unclear how it relates to the work presented in this MS. The GO enrichment analysis is combined for both up- and down-regulated DEGs, thus making it difficult to understand the impact in each direction. Stac2 is the predominant neuronal isoform (while Stac3 is the muscle one), and the Symm gene is not found in the HGNC or other databases. Could the authors provide the approved name for this gene? The premise of this work is based on defects in ECM processes resulting in the mis-targeting of the muscle progenitors to nonmuscle regions. Which ECM proteins are differentially expressed?

      The GO enrichment analysis (Figure 2B) indicates that genes involved in skeletal muscle construction and function were significantly dysregulated, with both up-regulated and down-regulated genes observed, consistent with the phenotype analysis presented in Figure 1.

      We agree with the reviewer’s comments that Stac3 is the predominant muscle isoform, with high expression in skeletal muscle tissues, while Stac2 is expressed at low levels in these tissues. Therefore, we decided to delete the Stac2 data from Figure 2C and will modify the text accordingly. We apologize for our errors.

      In response to the reviewer's comment regarding the Symm gene not being found in the HGNC or other databases, we carefully re-examined the genes presented in Figure 2C. We discovered that one of the genes is actually Synm, which encodes synemin, an intermediate filament protein. We will correct this in the manuscript.

      scRNA-seq analysis revealed defects in ECM processes in SRSF2-deficient myoblasts, which we believe likely resulted in the mis-targeting of muscle progenitors to non-muscle regions. However, comparing RNA-seq results from whole muscle tissues with scRNA-seq results is challenging.

      (2) Could the authors quantify the muscle progenitors dispersed in nonmuscle regions before their differentiation? In which nonmuscle tissues are MyoD+ progenitors seen? Most of the tdT staining in the enlarged sections appears to be punctate, without any nuclear staining seen in these cells (Figure 3B, D, E-F). Could the authors provide high-resolution images? Also, in the diaphragm cross-sections in mutants, tdT labeling appears to be missing in some areas within the myofibers defined as cavities by the authors (marked by white arrows, Figure 3H). Could this polarized localization of tdT be contributing to specific defects?

      tdT staining revealed a substantial presence of MyoD-derived cells distributed beyond the muscle regions, as shown in Figure 3B. Quantifying the number of MyoD+ progenitors dispersed in non-muscle regions is not meaningful.

      tdT+ cells also include those that previously expressed MyoD but have since differentiated into myotubes and myofibers, which is why much of the tdT+ staining is not nuclear.

      MyoD+ cells deficient in SRSF2 either undergo apoptosis or premature differentiation. Consequently, tdT staining in SRSF2-KO muscles showed many irregularities in the muscle fibers.

      (3) Is there a difference in the levels of tdT in the MyoD+ muscle progenitors that are mis-targeted vs. the others that are present in the muscle tissues?

      tdT+ cells include those that previously expressed MyoD but have since differentiated into myotubes and myofibers, which are no longer MyoD+ cells. Additionally, tdT+ cells also include those currently expressing MyoD, which are MyoD+ cells.

      The fiber differences between WT and SRSF2-KO mice are easily discernible through tdT staining (Figures 2D and 3D); however, comparing the levels of tdT staining between the two groups is not meaningful.

      (4) scRNA-seq is unsuitable for myotubes and myofibers due to their size exclusion from microfluidics. Could the authors explain the basis for choosing scRNA-seq vs. snRNA-seq in this work? How are SKM defined in the scRNA-seq data in Figure 4? As the myofibers are small in the KO, could the increased level of late differentiation markers be due to the enrichment of these small myotubes/myofibers in the scRNA-seq? A different approach, such as ISH/IF with the myogenic markers at E9.5-10.5, may be able to resolve whether these markers are prematurely induced.

      SRSF2 is highly expressed in proliferative myoblasts, but its levels decline once differentiation begins. In our study, we used Myod1-Cre to delete the SRSF2 gene and performed scRNA-seq analysis to examine the effects of SRSF2 deletion on the proliferation and differentiation of MyoD cells. Our analysis revealed that SRSF2 deletion caused proliferation defects and premature differentiation of MyoD cells (Figure 5G), leading to myofiber abnormalities.

      We determined that snRNA-seq analysis is not suitable for our study.

      Additionally, skeletal muscle cells (SKM) were defined based on the expression of skeletal muscle markers, as shown in Figure 4C.

      (5) TNC is a marker for tenocytes and is absent in skeletal muscle cells. The authors mention a downregulation of TNC in the KO SKM-derived clusters. This suggests a contamination of tenocytes in the control cells. In spite of the downregulation of multiple ECM genes shown by the scRNA-seq data, the ECM staining by laminin in the KO in Figure 3 appears to be similar to controls.

      Tenascin-C (Tnc) is also part of the extracellular matrix (ECM) family. scRNA-seq analysis revealed that multiple ECM genes were downregulated in SRSF2-KO myoblasts; however, this does not indicate that laminin was downregulated in the SRSF2-KO muscles.

      (6) The expression of many fusion genes, such as myomaker and myomerger, is reduced in the KO, suggesting a primary fusion defect vs. a primary differentiation defect. Many mature myofiber proteins exhibit increased expression in disease states, suggesting a compensatory mechanism. The authors need to provide additional experimental evidence supporting precocious differentiation as the primary defect.

      Our analysis revealed that the deletion of SRSF2 caused premature differentiation of MyoD cells (Figure 5G), leading to abnormalities of myofiber formation. SRSF2 is highly expressed in proliferative myoblasts, but its expression declines quickly in myotubes. Therefore, it is unlikely that the low expression of SRSF2 in myotubes caused the primary fusion defect.

      (7) The fusion defects in the KO are also evident in the siRNA knockdowns of SRSF2 and Aurka in C2C12 cells, which mostly exhibit mononucleated myocytes. Also, a fusion index needs to be provided.

      SRSF2 knockdown and Aurka knockdown caused differentiation defects, including fusion defects. We quantified the percentages of both MyoG+ and MHC+ cells in the differentiation assay.

      (8) The last section on the role of SRSF2 in splicing appears to be a misfit in this study. The authors describe the Bin1 isoforms in centronuclear myopathy, but exon 17 is not involved in myopathy. Is exon 17 exclusion seen in other diseases/splicing studies?

      Our study is the first to report that exon 17 inclusion of Bin1 is regulated by SRSF2. Specifically, the knockdown of Bin1 exon 17 caused severe differentiation defects in C2C12 myoblasts. The involvement of Bin1 exon 17 in myopathy requires further validation using clinical samples.

      Reviewer #2 (Public Review):

      […] Weaknesses: Although unbiased sequencing methods were used, the findings that SRSF2 serves as a transcriptional regulator and functions in alternative splicing events are not novel. The introduction and discussion are not clearly written. The authors did not raise clear scientific questions in the introduction. The last paragraph is only a copy-paste of the abstract. The discussion is mainly a repeat of their results without clear interpretation.

      While the role of SRSF2 as a transcriptional regulator involved in alternative splicing events is not novel, the specific SRSF2-regulated alternative splicing events and targeted genes in skeletal muscle have not been reported in other publications. We believe our interpretation of the data and comparison with related published studies are well presented in the Discussion section.

    1. These days, accounting is still based on this double-entry bookkeeping technique—at least in most firms and jurisdictions

      Not modifications. Though there are other ways of accounting without double entry. Look for the name.

    2. Since TEA focuses mainly on the bookkeeping system, the focus of this paper is on financial accounting.

      TEA is bookkeeping? Either way, not only financial accounting would benefit from this type of bookkeeping.


    1. Welcome back. In this lesson, I want to talk about CloudWatch, a core product inside AWS used for operational management and monitoring. CloudWatch performs three main jobs: it collects and manages operational data, monitors metrics, and performs actions based on these metrics.

      CloudWatch collects and manages operational data generated by an environment, including performance details, nominal operations, and logging data. It can be considered three products in one: CloudWatch, CloudWatch Logs, and CloudWatch Events.

      Firstly, CloudWatch allows the collection, monitoring, and actions based on metrics related to AWS products, applications, or on-premises systems. Metrics include data such as CPU utilization, disk space usage, or website traffic. CloudWatch can gather metrics from AWS, on-premises environments, or other cloud platforms using a public internet connection. Some metrics are gathered natively by CloudWatch, while others require the CloudWatch Agent, especially for monitoring non-AWS environments or specific processes on AWS instances.

      CloudWatch provides a user interface, command line interface, or API to access and manage this data. The second part of CloudWatch, CloudWatch Logs, handles the collection, monitoring, and actions based on logging data from various sources like Windows event logs, web server logs, and more. For custom logs or non-AWS systems, the CloudWatch Agent is also needed.

      The third part is CloudWatch Events, which functions as an event hub. It generates events based on AWS service actions (e.g., starting or stopping an EC2 instance) and can also create scheduled events for specific times or days.

      The core concepts of CloudWatch include namespaces, metrics, datapoints, and dimensions. A namespace is a container for monitoring data, helping to organize and separate different areas of data. AWS uses a reserved namespace format (e.g., AWS/EC2 for EC2 metrics), while you can create custom namespaces for your data. Metrics are collections of related data points in a time-ordered structure, such as CPU utilization. Each datapoint includes a timestamp and value. Dimensions, which are name-value pairs, help separate and identify data within a metric, like distinguishing datapoints from different EC2 instances.
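
      To make namespaces, metrics, datapoints, and dimensions concrete, here is a hedged sketch of publishing a single custom datapoint with the AWS CLI; the namespace, metric name, and dimension value are invented for illustration and are not from the lesson.

      ```bash
      # Publish one datapoint (value 42) to a custom metric.
      # "AnimalsForLife/Web", "PageViews", and the instance ID are illustrative.
      aws cloudwatch put-metric-data \
        --namespace "AnimalsForLife/Web" \
        --metric-name PageViews \
        --dimensions InstanceId=i-0123456789abcdef0 \
        --value 42
      ```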

      CloudWatch also uses alarms to take actions based on metrics. Alarms can be in an OK state (indicating no issues), an ALARM state (indicating a problem), or an INSUFFICIENT_DATA state (indicating not enough data to assess). Actions could include notifications or more complex responses. You’ve already seen an example of this with the billing alarm created at the start of the course.

      In the next demo lesson, we’ll provision an EC2 instance, let it run, and then create an alarm to monitor CPU usage, providing practical exposure to how CloudWatch works.

      Thanks for watching. Complete this video and join me in the demo when you’re ready.

    1. some tables and chairs, two second-hand coffee machines, cups and mugs, cleaning items, etc.

      Capital: one of the factors of production

      Productive Assets (e.g. the equipment and furniture)

    1. Welcome back. In this demo lesson, I want to quickly demonstrate how to use CloudFormation to create some simple resources. So before we start, just make sure you're logged in to the general AWS account and that you've got the Northern Virginia region selected. Once you've got that, just move across to the CloudFormation console.

      So this is the CloudFormation console, and as I discussed in the previous lesson, it works around the concepts of stacks and templates. To get started with CloudFormation, we need to create a stack. When you create a stack, you can use a sample template, and there are lots of different sample templates that AWS makes available. You can create a template in the Designer or upload a ready-made template, and that's what I'm going to do. Now, I've provided a template for you to use, linked to this lesson. So go ahead and click on that link to download the sample template file.

      Once you've downloaded it, you'll need to select 'Upload a template file' and then choose 'File'. Locate the template file that you just downloaded; it should be called 'ec2instance.yaml'. Select that and click on 'Open'. Whenever you upload a template to CloudFormation, it's actually uploading the template directly to an S3 bucket that it creates automatically. This is why, when you're using AWS, you may notice lots of buckets with the cf-templates- prefix that get created in a region automatically. You can always go ahead and delete these if you want to keep things tidy, but that's where they come from.

      Now, before we upload this, I want to move across to my code editor and step through exactly what this template does. The template uses three of the main components that I've talked about previously. The first one is parameters. There are two parameters for the template: latest AMI ID and SSH and web location. Let's quickly talk about the latest AMI ID because this is an important one. The type of this parameter is a special type that's actually a really useful feature. What this allows us to do is, rather than having to explicitly provide an AMI ID, say that we want the latest AMI for a given distribution. In this case, I'm asking for the latest AMI ID for Amazon Linux 2023 in whichever region you apply this template in. By using this style of parameter, the latest AMI ID gets set to the AMI of the latest version of this operating system.

      The final parameter that this template uses is SSH and web location, which is where we can just specify an IP address range that we want to be able to access this EC2 instance. So that's parameters—nothing special, and you'll get more exposure to these as we go through the course. Now we've also got outputs, and outputs are things that are set when the template has been applied successfully. When a stack creates, when it finishes that process, it will have some outputs. I've created outputs so that we get the instance ID, the availability zone that the instance uses—remember EC2 is an AZ service. It’ll also provide the public DNS name for the instance, as well as the public IP address. The way that it sets those is by using what's known as a CloudFormation function.

      So this is Ref, and it's going to reference another part of the CloudFormation template. In this case, it's going to reference a logical resource, the EC2 instance resource. Now, get attribute, or GetAtt, is another function that's a more capable version of Ref. With GetAtt, you still refer to another thing inside the template, but you can pick from different data that that thing generates. For an EC2 instance, the default thing that you can reference is the instance ID, but it also provides additional information: which availability zone it's in, its DNS name, and its public IP. I'll make sure to include a link in the lesson that details all of the resources that CloudFormation can create, as well as all of the outputs that they generate.

      The main component of course of this template is the resources component. It creates a number of resources. The bottom two, you don’t have to worry about for now. I’ve included them so I can demonstrate the Session Manager capability of AWS. I'll be talking about that much more later in the course, but what I'm doing is creating an instance role and an instance role profile. You won't know what these are yet, but I’ll be talking about them later in the course. For now, just ignore them. The main two components that we're creating are an EC2 instance and a security group for that instance.

      We’re creating a security group that allows two things into this instance: port 22, which is SSH, and port 80, which is HTTP. So it’s allowing two different types of traffic into whatever the security group is attached to. Then we’re creating the EC2 instance itself. We’ve got the EC2 instance, which is a logical resource, the type being AWS::EC2::Instance, and then the properties for that logical resource, such as the configuration for the instance. We’re setting the type and size of the instance, t2.micro, which will keep it inside the free tier. We’re setting the AMI image ID to use, and it's referencing the parameter, and if you recall, that automatically sets the latest AMI ID. We’re setting the security group, which is referencing the logical resource that we create below, so it creates this security group and then uses it on the instance. Finally, we’re setting the instance profile. Now, that’s related to these two things that I’m not talking about at the bottom. It just sets the instance profile, so it gives us the permission to use Session Manager, which I’ll demonstrate shortly after we implement this.
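
      Putting the sections just described together, here is a condensed, hedged sketch of what a template along these lines might look like. It is not the lesson's exact ec2instance.yaml: the logical names, the SSM parameter path, and the omitted IAM role and instance profile resources are assumptions for illustration.

      ```yaml
      # Illustrative sketch only; names and the SSM path are assumptions.
      Parameters:
        LatestAmiId:
          Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
          Default: '/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64'
        SSHandWebLocation:
          Type: String
          Default: 0.0.0.0/0
          Description: IP address range allowed to reach the instance

      Resources:
        InstanceSecurityGroup:
          Type: AWS::EC2::SecurityGroup
          Properties:
            GroupDescription: Allow SSH (22) and HTTP (80)
            SecurityGroupIngress:
              - IpProtocol: tcp
                FromPort: 22
                ToPort: 22
                CidrIp: !Ref SSHandWebLocation
              - IpProtocol: tcp
                FromPort: 80
                ToPort: 80
                CidrIp: !Ref SSHandWebLocation
        EC2Instance:
          Type: AWS::EC2::Instance
          Properties:
            InstanceType: t2.micro
            ImageId: !Ref LatestAmiId       # resolves to the latest AL2023 AMI
            SecurityGroups:
              - !Ref InstanceSecurityGroup

      Outputs:
        InstanceId:
          Value: !Ref EC2Instance
        AvailabilityZone:
          Value: !GetAtt EC2Instance.AvailabilityZone
        PublicDNS:
          Value: !GetAtt EC2Instance.PublicDnsName
        PublicIP:
          Value: !GetAtt EC2Instance.PublicIp
      ```

      The Ref on LatestAmiId resolves the SSM parameter to the current AMI at stack-create time, which is what makes a template like this portable across regions.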

      There’s nothing too complex about that, and I promise you by the end of the course, and as you get more exposure to CloudFormation, this will make a lot more sense. For now, I just want to use it to illustrate the power of CloudFormation. So I’m going to move back to the console. Before I do this, I’m going to go to services and just open EC2 in a new tab. Once you’ve done that, return to CloudFormation and click on next. We’ll need to name the stack. I’m just going to call it CFN demo one for CloudFormation demo one. Here’s how the parameters are presented to us in the UI. The latest AMI ID is set by default to this value because, if we look at the parameters, it’s got this default value for this parameter. Then SSH and web location also has a default value which is set in the template, and that’s why it’s set in the UI. Leave these two values as default. Once you’ve done that, click on next.

      I'll be talking more about all of these advanced options later on in the course when I talk about CloudFormation. For now, we're not going to use any of these, so click on next. On this screen, we need to scroll down to the bottom and check this capabilities box. CloudFormation views certain resources that you can create as high-risk; in this case, we're creating an identity, an IAM role. Don't worry, I'll be talking a lot more about what an IAM role is in the next section of the course. Because it's an identity, because it's changing something that provides access to AWS, CloudFormation wants us to explicitly acknowledge that we intend to create this resource. So it's prompting us for this capability. Check this box, it's fine, and then click on submit. The stack creation process will begin and the status will show create in progress.

      This process might take a few minutes. You’re able to click on refresh here, so this icon on the top right, and this will refresh the list of events. As CloudFormation is creating each physical resource that matches the logical resources in the template, it’s going to create a new event. For each resource, you’ll see a create in progress event when the creation process starts, and then you’ll see another one create complete when it creates successfully. If there are any errors in the template, you might see red text, which will tell you the nature of that error. But because this is a CloudFormation template that I’ve created, there’ll be no errors. After a number of minutes, the stack itself will move from Create in Progress to Create Complete.

      I refreshed a couple more times and we can see that the Session Manager instance profiles moved into the Create Complete status and straight after that it started to create the EC2 instance. We’ve got this additional event line saying Create in Progress, and the resource creation has been initiated. We’re almost at the end of the process now; the EC2 instance is going to be the last thing that the stack will create. At this point, just go ahead and pause the video and wait until both the EC2 instance and the stack itself move into Create Complete. Once both of those move into Create Complete, then you can resume the video and we’re good to continue.

      Another refresh, and we can see that the EC2 instance has now moved into a Create Complete status. Another refresh and the entire stack, CFN demo 1, is now in the create complete state, which means that the creation process has been completed and for every logical resource in the template, it’s created a physical resource. I can click on the outputs tab and see a list of all the outputs that are generated from the stack. You’ll note how they perfectly match the outputs that are listed inside the template. We’ve got instance ID, AZ, public DNS, and public IP. These are exactly the same as the outputs listed inside the CloudFormation template. You’ll see that these have corresponding values: the instance ID, the public DNS of the instance, and the public IP version 4 address of the instance.

      If I click on the resources tab, we’ll be able to see a list of the logical resources defined in the template, along with their corresponding physical resource IDs. For the EC2 instance logical resource, it’s created an instance with this ID. If you click on this physical ID, it will take you to the actual resource inside AWS, in this case, the EC2 instance. Now, before we look at this instance, I’m going to click back on CloudFormation and just click on the stacks clickable link at the top there. Note how I’ve got one stack, which is CFN demo one. I could actually go ahead and click on create stack and create stack with new resources and apply the same template again, and it would create another EC2 instance. That’s one of the powerful features of CloudFormation. You can use the same template and apply it multiple times to create the same set of consistent infrastructure.

      I could also take this template because it's portable, and because it automatically selects the AMI to use, I could apply it in a different region and it would have the same effect. But I’m not going to do that. I’m going to keep things simple for now and move back to the EC2 tab. Now, the one thing I want to demonstrate before I finish up with this lesson is Session Manager. This is an alternative to having to use the key pair and SSH to connect to the instance. What I’m able to do is right-click and hit Connect, and instead of using a standalone SSH client, I can select to use Session Manager. I’ll select that and hit Connect, and that will open a new tab and connect me to this instance without having to use that key pair.

      Now, it connects me using a different shell than I'm used to, so if I type bash, which is the shell that you normally have when you log into an EC2 instance, that should look familiar. I’m able to run normal Linux commands like df -k to list all of the different volumes on the server, or dmesg to get a list of informational outputs for the server. This particular one does need admin permission, so I’ll need to rerun this with sudo and then dmesg. These are all commands that I could run in just the same way if I was connected to the instance using an SSH client and the key pair. Session Manager is just a better way to do it, but it requires certain permissions to be given to the instance. That’s done with an instance role that I’ll be talking all about later on in the course. That is the reason why my CloudFormation template has these two logical resources, because these give the instance the permission to be able to be connected to using Session Manager. It makes it a lot easier to manage EC2 instances.

      So that’s been a demo of how easy it is to create an EC2 instance using CloudFormation. Throughout the course, we'll be using more and more complex examples of CloudFormation. I’ll be using that to show you how powerful the tool is. For now, it’s a really simple example, but it should show how much quicker it is to create this instance using CloudFormation than it was to do it manually. To finish up this lesson, I’m going to move back to the CloudFormation console. I’m going to select this CloudFormation stack and click on Delete. I need to confirm that I want to do this because it’s telling me that deleting this stack will delete all of the stack resources.

      What happens when I do this is that the stack deletes all of the logical resources that it has, and then it deletes all of the corresponding physical resources. This is another benefit of CloudFormation in that it cleans up after itself. If you create a stack and that creates resources, when you delete that stack, it cleans up by deleting those resources. So if I click on Delete Stack Now, which I will do, it starts a delete process, and that’s going to go ahead and remove the EC2 instance that it created. If I select this stack now, I can watch it do that. I can click on Events, and it will tell me exactly what it’s doing. It’s starting off by deleting the EC2 instance. If I move back to the EC2 console and just hit Refresh, we can see how the instance state has moved from running to shutting down.

      Eventually, once the shutdown is completed, it will terminate that instance. It’ll delete the storage, it will stop using the CPU and memory resources. At that point, the account won’t have any more charges. It wouldn’t have done anyway because this demo has been completely within the free tier allocation because I was using a t2.micro instance. But there we go. We can see the instance state has now moved to terminated. Go back to CloudFormation and just refresh this. We’ll see that it’s completed the deletion of all the other resources and then finished off by deleting the stack itself. So that’s the demonstration of CloudFormation. To reaffirm the benefits, it allows us to do automated, consistent provisioning. We can apply the same template and always get the same results. It’s completely automated, repeatable, and portable. Well-designed templates can be used in any AWS region. It’s just a tool that really does allow us to manage infrastructure effectively inside AWS.
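
      Everything done through the console here can also be scripted, which is part of what makes CloudFormation so repeatable. A hedged sketch using the AWS CLI, assuming the template file from this lesson is in the current directory; the stack name mirrors the one used above.

      ```bash
      # Create the stack; --capabilities acknowledges the IAM resources,
      # just like checking the capabilities box in the console.
      aws cloudformation create-stack \
        --stack-name cfn-demo-1 \
        --template-body file://ec2instance.yaml \
        --capabilities CAPABILITY_IAM

      # Check status and, once CREATE_COMPLETE, view the stack's outputs.
      aws cloudformation describe-stacks --stack-name cfn-demo-1

      # Clean up: delete the stack and every physical resource it created.
      aws cloudformation delete-stack --stack-name cfn-demo-1
      ```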

    1. Welcome back, and in this lesson, I want to talk about AWS CloudFormation. I'm going to be brief because learning CloudFormation is something which will happen throughout the course, as we'll be using it to automate certain things. Before we dive in, I'll introduce the concepts you'll need and give you a chance to experience a simple practical example.

      CloudFormation is a tool which lets you create, update, and delete infrastructure in AWS in a consistent and repeatable way using templates. Rather than creating and updating resources manually, you create a template, and CloudFormation will do the rest on your behalf.

      At its base, CloudFormation uses templates. You can use a template to create AWS infrastructure using CloudFormation. You can also update a template and reapply it, which causes CloudFormation to update the infrastructure, and eventually, you can use CloudFormation to delete that same infrastructure. A CloudFormation template is written either in YAML or JSON. Depending on your experience, you might be familiar with one or both of these. If you haven't touched YAML or JSON before, don't worry. They achieve the same thing, and it's easy to convert between them. You might get to pick which one to use when writing templates, or your business might have a preference. It's mostly a matter of personal preference. Most people in the AWS space like one and dislike the other, though very few people like both. I am one of those who likes both. I started my AWS career using JSON but have come to appreciate the extra functionality that YAML offers. However, YAML can be easier to make mistakes with because it uses white spaces to indicate which parts belong to which other parts. Since spaces are not always visible, it can be a problem for less experienced engineers or architects. If I have to pick one, I'll use YAML. So for the rest of this lesson, I'll focus on YAML.
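
      To show how directly the two formats map onto each other, here is a minimal logical resource expressed both ways; the resource name and AMI ID are placeholders, not from any real template.

      ```yaml
      # YAML form
      Resources:
        Instance:
          Type: AWS::EC2::Instance
          Properties:
            InstanceType: t2.micro
            ImageId: ami-12345678   # placeholder
      ```

      ```json
      {
        "Resources": {
          "Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
              "InstanceType": "t2.micro",
              "ImageId": "ami-12345678"
            }
          }
        }
      }
      ```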

      I want to quickly step through what makes a template, the components of a template, and then discuss the architecture of CloudFormation before moving on to a demo. All templates have a list of resources, at least one. The resources section of a CloudFormation template tells CloudFormation what to do. If resources are added, CloudFormation creates them. If resources are updated, CloudFormation updates them. If resources are removed from a template and that template is reapplied, then physical resources are removed. The resources section of a template is the only mandatory part of a CloudFormation template, which makes sense because without resources, the template wouldn't do anything. The simple template that we'll use in the demo lesson immediately following this one has resources defined in it, and we'll step through those and evaluate exactly what they do.

      Next is the description section. This is a free-text field that lets the author of the template add a description, as the name suggests. Generally, you would use this to provide details about what the template does, what resources get changed, and the cost of the template. Anything that you want users to know can be included in the description. The only restriction to be aware of is that if a template has both a description and an AWSTemplateFormatVersion, the description needs to immediately follow the template format version. The format version itself isn't mandatory; it allows AWS to extend the template standard over time, and if it's omitted, a default value is assumed. The ordering restriction has been used as a trick question in many AWS exams, so it pays to be aware of it.

      The metadata in the template is the next part I want to discuss. It has many functions, including some advanced ones. For example, metadata can control how different elements in the CloudFormation template are presented through the console UI. You can specify groupings, control the order, and add descriptions and labels, which helps in managing how the UI presents the template. Generally, the bigger your template and the wider the audience, the more likely it is to have a metadata section. Metadata serves other purposes, which I'll cover later in the course.

      The parameters section of a template allows you to add fields that prompt the user for more information. When applying the template from the console UI, you'll see boxes to type in or select from dropdowns. This can be used to specify things like the size of the instance to create, the name of something, or the number of availability zones to use. Parameters can have settings for valid entries and default values. You'll gain more experience with this as we progress through the course and use CloudFormation templates.

      The next section is mappings, which is another optional section of the CloudFormation template and something we won't use as much, especially when starting with CloudFormation. It allows you to create lookup tables. For example, you can create a mappings table called RegionAndInstanceTypeToAMI, which selects a specific Amazon Machine Image based on the region and environment type (e.g., test or prod). This is something you'll get experience with as the course continues, but I wanted to introduce it at this point.
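
      To make that concrete, here is a hedged sketch of what such a mappings table and lookup might look like; the map name follows the lesson's example, while the AMI IDs and the EnvironmentType parameter are placeholders.

      ```yaml
      Mappings:
        RegionAndInstanceTypeToAMI:
          us-east-1:
            test: ami-11111111   # placeholder AMI IDs
            prod: ami-22222222
          eu-west-1:
            test: ami-33333333
            prod: ami-44444444

      # Elsewhere in the template, the lookup uses Fn::FindInMap:
      #   ImageId: !FindInMap [RegionAndInstanceTypeToAMI, !Ref 'AWS::Region', !Ref EnvironmentType]
      ```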

      Next, let's talk about conditions. Conditions allow decision-making in the template, enabling certain things to occur only if a condition is met. Using conditions involves a two-step process. Step one is to create the condition. For instance, if a parameter is equal to "prod" (i.e., if the template is being used to create prod resources), then you create a condition called CreateProdResources. If the parameter "environment type" is set to "prod," the condition CreateProdResources will be true. Step two is using this condition within resources in the CloudFormation template. For example, a resource called Prodcatgifserver will only be created if the condition CreateProdResources is true. This will only be true if the "environment type" parameter is set to "prod" rather than "test." If it's set to "test," that resource won't be created.
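
      Here is a hedged sketch of that two-step pattern in template form, reusing the condition and resource names from the explanation above; the parameter block and the instance properties are illustrative.

      ```yaml
      Parameters:
        EnvironmentType:
          Type: String
          AllowedValues: [test, prod]
          Default: test

      # Step one: define the condition.
      Conditions:
        CreateProdResources: !Equals [!Ref EnvironmentType, prod]

      # Step two: attach it to a resource; this is only created when true.
      Resources:
        Prodcatgifserver:
          Type: AWS::EC2::Instance
          Condition: CreateProdResources
          Properties:
            InstanceType: t2.micro
            ImageId: ami-12345678   # placeholder
      ```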

      Finally, outputs are a way for the template to present outputs based on what's being created, updated, or deleted once the template is finished. For example, outputs might return the instance ID of an EC2 instance that's been created, or if the template creates a WordPress blog, it could return the admin or setup address for that blog.

      So, how exactly does CloudFormation use templates? CloudFormation starts with a template. A template contains resources and other elements you'll become familiar with as we use CloudFormation more. Let's take a simple example—a template that creates an EC2 instance. Resources inside a CloudFormation template are called logical resources. In this case, the logical resource is called "instance," with a type of AWS::EC2::Instance. The type tells CloudFormation what to create. Logical resources generally also have properties that CloudFormation uses to configure the resources in a specific way.

      When you provide a template to CloudFormation, it creates a stack, which contains all the logical resources defined in the template. A stack is a living and active representation of a template. One template could create one stack, or several stacks, or anywhere in between. A stack is created when you tell CloudFormation to do something with that template.

      For any logical resources in the stack, CloudFormation makes a corresponding physical resource in your AWS account. For example, if the stack contains a logical resource called "instance," which defines an EC2 instance, the physical resource is the actual EC2 instance created by CloudFormation. It's CloudFormation's job to keep the logical and physical resources in sync. When you use a template to create a stack, CloudFormation scans the template, creates a stack with logical resources, and then creates matching physical resources.

      You can also update a template and use it to update the stack. When you do this, the stack's logical resources will change—new ones may be added, existing ones updated or deleted. CloudFormation performs the same actions on the physical resources, adding, updating, or removing them as necessary. If you delete a stack, its logical resources are deleted, leading CloudFormation to delete the matching physical resources.

      CloudFormation is a powerful tool that allows you to automate infrastructure. For instance, if you host WordPress blogs, you can use one template to create multiple deployments rather than setting up each site individually. CloudFormation can also be part of change management, allowing you to store templates in source code repositories, make changes, get approval, and apply them as needed. It can also be used for one-off deployments.

      Throughout this course, I'll be using CloudFormation to help you implement various things in demo lessons. If a demo lesson requires certain products to function, I might provide a CloudFormation template to set up the base infrastructure. Alternatively, you can use the template to implement the entire demo end-to-end. CloudFormation is super powerful, and you'll get plenty of exposure to it throughout the course.

      Now, that's all the theory I wanted to cover. The next lesson will be a demo where you'll use CloudFormation to create an EC2 instance. Remember in the EC2 demo lesson, where you created an EC2 instance? In the next demo lesson, you'll create a similar EC2 instance using CloudFormation, demonstrating how much quicker and easier it is to automate infrastructure tasks with CloudFormation. So go ahead, complete this video, and when you're ready, join me in the next lesson where we'll demo CloudFormation.

    1. “Reformat”

      functions as a checker

    2. Java is in the family of programming languages that use curly braces ({}) to group together statements.

      function

    3. Punctuation is important

      Such as the ; at the end of each statement

    4. names are what connect different parts of your program

      They must be precise and correct

    5. specific details of Java’s syntax.

      need to learn specific expressions even though I've already learned other languages.

    6. In Java you’ll find the structures are similar but you’ll have to get used to expressing them in text.

      using the Java language instead of block coding

    7. AP CSP (Computer Science Principles)

      another Computer Science class

    1. Welcome back and in this demo lesson, I just want you to get some experience working with S3.

      In this demo lesson you're going to create an S3 bucket which is going to be used for a campaign within the Animals for Life organization.

      You're going to get the chance to create the bucket, interact with the bucket, upload some objects to that bucket and then finally interact with those objects.

      Now to get started you'll need to make sure that you're logged in to the IAM admin user within the general AWS account. By this point in the course you should have a general account and a production account, and you need to make sure that you're logged in to the general AWS account. As always, make sure that you're also using the Northern Virginia region, which is US-East-1.

      Now assuming that you do have that configuration, next you need to move to the S3 console, and there are a couple of ways to do that. You can type S3 into the Find Services box; if you've previously used the service, it will be listed under the recently visited services; and finally, at the top, you can click on the Services drop-down and either type S3 into the All Services box or locate it in the list of services and click to move to the S3 console. I'm going to go ahead and type S3 and then click to move to the console.

      Now when you first arrive at the S3 console you'll be presented with a list of buckets within this AWS account. I want to draw specific attention to the fact that with S3 you do not have to choose a region with the region drop-down. When you create buckets within S3 you have to pick the region that the bucket is created in, but because S3 uses a global namespace, you don't have to select a region when using the console. So on this list you will see any buckets in any regions within this one single AWS account. You don't have to pick the region in advance.

      So let's go ahead and create an S3 bucket, and to do that, logically enough, we click on Create bucket. Now to create a bucket you need to specify a name, and we're creating this bucket for a koala campaign for the Animals for Life organization. So we're going to start with Koala Campaign. Now because bucket names do need to be unique, we can't just leave it at Koala Campaign; we need to add some random numbers at the end. This is just to make sure that the name that you pick is different from the name that I pick and different from the name that every other student uses. So just put some numbers after this name.

      I'm going to pick 1-3-3-3-3-3-7. Now there are some rules around bucket naming: names need to be between 3 and 63 characters; they can only consist of lowercase letters, numbers, dots, and hyphens; they need to begin and end with a letter or number; they can't be formatted like an IP address; they can't begin with the prefix xn--; and of course they need to be entirely unique. There are also some specific rules for naming buckets if you want to use certain S3 features. Later in the course I'll be talking about static website hosting within S3, and I'll be showing you how you can use a custom domain name with an S3 bucket, so you can get a domain name to host, for example, a blog or a static website and use S3 to host that website. If you want to do that, then you need to name the bucket the same as the DNS name that you'll be using to access it. But at this point this is just an introductory demo, so we can leave this as just a standard name. So use Koala Campaign with some random numbers at the end and that should be good.

      Now when you're creating a bucket you need to specify a region, and this is the region that the bucket will be placed in. I'm going to use US-East-1 as a default throughout this course, so I recommend that you pick that to create the bucket in. If you have any existing buckets within your account and you want to copy the settings from those buckets, which will of course save you some time when setting up the bucket, then you can click on Choose bucket and copy the settings from another bucket in your account. Because we're starting fresh and don't have any existing buckets, we can't use this option. So we need to scroll down and just review what options we have. For now we're going to skip past Object Ownership, because this is a feature that I'll be discussing in much more detail later in the course; I can't really explain it until you have some experience of how the permissions model works with S3, so I'll be talking about it in the S3 section of the course.

      Now the first thing that you need to pick when you're creating buckets is the Block Public Access settings. All S3 buckets are private by default: nobody has permissions to the bucket apart from the account that creates it. In this particular case we're creating it inside the general AWS account, so only the general AWS account and that account's root user have permissions. Because we've granted the IAM admin user full admin permissions, it too has access to this bucket, but by default nothing else does. Now you can make a bucket completely public, granting access to all users, including unauthenticated or anonymous users. That's a security risk, because potentially you might have sensitive data within that bucket, and Block Public Access is a fail-safe: even if you grant completely public access to a bucket, this setting will block that access. I'm going to be talking about this in much more detail later in the course, but you need to know that it exists. If we untick this option, for example, even though we are no longer blocking all public access, you still need to grant access to the bucket. All this option does when enabled is prevent you from granting public access; disabling it does not mean that the bucket is public, just that you can grant public access to it. For this demonstration, we're going to go ahead and untick this option. If you do untick it, you'll need to scroll down and check the box to acknowledge that you understand exactly what you're doing. This is a safety feature of S3: if you're going to remove this fail-safe, then you need to accept responsibility, because if you do mistakenly grant public access to the bucket, then potentially information can be exposed.

      Now I'm not going to explain any of the other options, because I cover all of them in the S3 section of the course. So I'm going to skip past bucket versioning, tags, and default encryption, and I'm not going to be covering any of these advanced settings. Instead, let's just go ahead and click on Create bucket.
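
      As an aside, the same bucket could be created from the AWS CLI. A hedged sketch, using the bucket name from this demo; yours will differ, because names are globally unique.

      ```bash
      # Create the bucket in us-east-1; substitute your own random suffix,
      # since bucket names must be globally unique.
      aws s3 mb s3://koalacampaign1333337 --region us-east-1
      ```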

      At this point you might get an error that a bucket with the same name already exists, and that's fine. Remember, S3 bucket names need to be globally unique, and there are obviously a lot of koala campaigns happening in the wild. If you do get the error, then just feel free to add extra digits of random to the bucket name, then scroll all the way down to the bottom and create the bucket.

      Once the bucket's created, you'll see it in the list of buckets. There's a column for the name and a column for the region, so you'll be able to see which region this bucket is in. It will also give you an overview of the access that this bucket has: because we unchecked Block Public Access, it informs us that objects can be public. Again, just to stress, this does not mean they are public, because S3 buckets are private by default; it's just telling us that they can be public. Lastly, we also have the creation date, which tells us when the bucket was created. So now let's just go ahead and click on the bucket to move inside so we can see additional information.

      Now one thing that I do want to draw your attention to is the Amazon Resource Name, or ARN, for this bucket. All resources in AWS have a unique identifier, the ARN, and this is the ARN for the bucket that we've just created. ARNs have a consistent format: they start with arn, for Amazon Resource Name; then they have the partition, which for most AWS resources in most regions will always say aws; then you have the service name, in this case s3; then you have some other values, which I'll be talking about later in the course, and which you can omit with certain services by just putting double colons. These, for example, might be the region or the account number. For services where resources are not globally unique, you obviously need to specify the region and the account number in order for the name to be globally unique. But because S3 buckets have to be globally unique by default, we don't have to specify either the region or the account number in the ARN. As long as we have the S3 service and the bucket name, we know that this uniquely references a resource, and that's the key thing about ARNs: ARNs uniquely reference one resource within AWS. You always know, if you have one ARN, that it references one particular resource within AWS. You can use wildcards to reference multiple resources, but as a basis it has to reference at least one.
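
      For reference, the general ARN layout and the form it takes for this demo's bucket look like this; the bucket name is the one used above, and yours will differ.

      ```
      arn:partition:service:region:account-id:resource

      # An S3 bucket ARN omits region and account-id, leaving empty fields:
      arn:aws:s3:::koalacampaign1333337
      ```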

      Now let's just click on the Objects tab here, and this will give us an overview of all of the objects which are in this bucket. You have a number of tabs here that you can step through. We've got a Properties tab where you can enable bucket versioning, tags, encryption, logging, CloudTrail data events, event notifications, transfer acceleration, object lock, Requester Pays, and static website hosting, and we'll be talking about all of those features in detail within the S3 section of the course. We'll also be covering permissions in that section, because you can be very granular with the permissions of S3 buckets. You can see some metrics about the bucket; this uses CloudWatch, which we'll be talking about in detail elsewhere in the course. You're also able to access management functionality; again, we'll be talking about all of this later in the course. And then finally you're able to create access points. Access points are some advanced functionality, so we'll be covering them later in the course.

      For now I just want you to get some experience of uploading some objects and interacting with them. There's a link attached to this lesson which you'll need to go ahead and click, and that will download a zip file. Go ahead and extract that zip file and it will create a folder. Once you've extracted it into a folder, we're good to continue.

      Now the easiest way at this point to upload some objects is to make sure that you've got the Objects tab selected and then click on Upload. You're able to upload both files and folders to this S3 bucket, so let's start off by uploading some files. Click on Add files. At this point, locate and go inside the folder that you extracted a few moments ago and you'll see that there are three image files: koala_nom1.jpg, koala_nom2.jpg, and koala_zzz.jpg. Go ahead and select all three of these JPEG files and click on Open. You'll see that you have three files in total queued for upload, and you'll be provided with an estimate of the amount of space that these files will consume.

      Now scrolling down, you're told the destination where you'll be uploading these objects to. This is the S3 bucket that we've created, and this will be different for you; it will be your bucket name. Now we haven't enabled versioning on this bucket. This is a feature which I'll be covering in the S3 section of the course, but because we don't have versioning enabled, it means that if we do upload files with the same name, then potentially we're going to overwrite other objects in that bucket. So we have to accept the risk: we could overwrite objects if we re-upload ones with the same name. In this case that's fine, because we're not uploading anything important, and regardless, this bucket is empty, so we can't overwrite anything. You have the option of enabling versioning, or you can just acknowledge the risk. Then we can scroll down further still. We need to pick the storage class for the objects. This defaults to Standard, and I haven't covered storage classes in the course yet; I'll be doing that within the S3 section, so we're going to accept the default. Then we're going to skip past all of these options, which I'll be covering later in the course, and just go ahead and click on Upload. This will upload all three objects to the S3 bucket. You'll be told whether the upload has been successful or whether it's failed; in our case, it's succeeded, so we can go ahead and click on Close to close down this dialog.
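
      The same upload could be done from the CLI. A hedged sketch, assuming your shell is inside the extracted folder and using this demo's bucket name; substitute your own.

      ```bash
      # Upload the three images to the bucket.
      aws s3 cp koala_nom1.jpg s3://koalacampaign1333337/
      aws s3 cp koala_nom2.jpg s3://koalacampaign1333337/
      aws s3 cp koala_zzz.jpg  s3://koalacampaign1333337/

      # Or copy just the JPEGs from the folder in one go.
      aws s3 cp . s3://koalacampaign1333337/ --recursive --exclude "*" --include "*.jpg"
      ```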

      Now when we scroll down, we'll see an overview of the objects within this bucket.

      In our case we only have the three, Koala Nom1, Koala Nom2 and KoalaZZZ.jpg

      We can also create folders within S3. Now of course, because S3 has a flat structure,

      this isn't actually creating a folder. It's just creating an object which emulates a folder.

      So if we create a folder, let's call this folder archive, and then click on create folder,

      it's not actually creating a folder called archive. What it's doing is creating an object with this name:

      archive forward slash (archive/).

      Now if we click on this archive folder and go inside it, we can upload objects into this folder.

      So let's go ahead and do that: click on upload, go to add files, and then just pick one of these files. Let's go with koala_zzz.jpg, so select that one, click on open, and then click on upload.

      Now what we've done is upload an object into what we see as a folder in this S3 bucket. If we click on close, what this has actually done is create an object which is called archive/koala_zzz.jpg.

      S3 doesn't really have folders. Folders are emulated using prefixes, and that's important to know as you move through the course.

      Now click the bucket name at the top to go back to the main bucket.
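
      To make the "folders are just keys" point concrete, here's a hedged boto3 sketch (bucket name again a placeholder) of what the console just did: "create folder" writes a zero-byte object whose key ends in a forward slash, and "uploading into the folder" just writes a key with that prefix.

      ```python
      import boto3

      s3 = boto3.client("s3")
      BUCKET = "your-bucket-name"  # placeholder

      # What the console's "Create folder" button actually does:
      # write a zero-byte object whose key ends with "/".
      s3.put_object(Bucket=BUCKET, Key="archive/", Body=b"")

      # "Uploading into the folder" is just writing a key with that prefix.
      s3.upload_file(Filename="koala_zzz.jpg", Bucket=BUCKET, Key="archive/koala_zzz.jpg")

      # A flat listing shows both keys side by side; there is no real hierarchy.
      for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
          print(obj["Key"])  # e.g. "archive/" and "archive/koala_zzz.jpg"
      ```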

      We're going to go ahead and open one of these objects, so let's use koala_nom1.jpg. This opens an overview screen for this particular object.

      Where you see this object URL, just go ahead and right-click and open that in a new tab. When you open it in a new tab, you'll be presented with an access denied error.

      The reason for that is that you're trying to access this object with no authentication; you're accessing the object as an unauthenticated user. As I mentioned earlier, all S3 objects and all S3 buckets are private by default, and that's why you get this access denied error. You won't be able to access this object without authenticating to AWS and using that identity to access the object. That's of course unless you grant public access to this object, which we haven't done and won't be doing in this lesson.

      So close down that tab and instead click on open. You might have to bypass a pop-up blocker, but this time it will open the object, and that's because authentication is included in the URL at the top. When you click on the open button, it's opening the object as you, not as an unauthenticated identity. That's important: because you have access to this bucket, you can open the objects using this open button.

      The same is true for the other objects. So go back to the bucket and pick koala_nom2.jpg, then click on the open button, and again we'll see a koala having some food.

      Go back to the bucket and then let's try koala_zzz.jpg. Click on the object, click on open again, and now we can see a koala having a well-deserved rest after his lunch.
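
      Under the hood, the console's open button generates an authenticated, time-limited URL on your behalf. You can produce the same kind of link yourself with a presigned URL; the sketch below is illustrative (placeholder bucket name, and the expiry value is arbitrary).

      ```python
      import boto3

      s3 = boto3.client("s3")

      # The bare object URL fails with AccessDenied because the request is anonymous.
      # A presigned URL embeds a signature from your identity, so it works for
      # whoever holds the link until it expires.
      url = s3.generate_presigned_url(
          "get_object",
          Params={"Bucket": "your-bucket-name", "Key": "koala_nom1.jpg"},
          ExpiresIn=300,  # lifetime in seconds; illustrative value
      )
      print(url)
      ```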

      Now that's everything I wanted to cover in this demo lesson.

      It's just been a really high-level introduction to how to interact with S3 using the console UI. I'll be covering S3 in detail later in the course; I just wanted this demo lesson to be a very brief introduction.

      Now what we need to do before we finish this demo lesson is go back to the main S3 console and tidy up by deleting this bucket.

      Deleting buckets within S3 is a two-step process. First, we need to empty the bucket: go ahead and select the bucket and click on empty. You'll need to either type or copy and paste "permanently delete" into the box and then click on empty, and that will remove any objects within the bucket.

      Assuming that's successful, go ahead and click on exit. Then, with the bucket still selected, click on delete. You'll need to copy and paste or type the name of the bucket, and finally click on delete bucket to confirm the deletion. That will delete the bucket, and your account will be back in the same state as it was at the start of this demo lesson.
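
      As an aside, the same two-step tidy-up (empty the bucket, then delete it) can also be scripted. A rough boto3 sketch, with a placeholder bucket name; the paginator covers buckets holding more objects than a single listing returns:

      ```python
      import boto3

      s3 = boto3.client("s3")
      BUCKET = "your-bucket-name"  # placeholder

      # Step 1: empty the bucket. A bucket must be empty before it can be deleted.
      paginator = s3.get_paginator("list_objects_v2")
      for page in paginator.paginate(Bucket=BUCKET):
          for obj in page.get("Contents", []):
              s3.delete_object(Bucket=BUCKET, Key=obj["Key"])

      # Step 2: delete the now-empty bucket.
      s3.delete_bucket(Bucket=BUCKET)
      ```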

      Now at this point, I hope this has been useful. It's just been a really basic introduction to S3, and don't worry, you'll be getting plenty more theory and practical exposure to the product in the S3 section of the course. For now, just go ahead and complete this video, and when you're ready, I look forward to you joining me in the next.

    1. eLife assessment

      This study presents valuable data on sensory integration in a model pre-motor neuron, the Mauthner cell. The authors use both stimulation of the optic tectum (a proxy for vision) and auditory stimulation to study the integration of these modalities in the Mauthner cell using convincing, technically demanding, and well-executed experiments. There are, however, concerns about the degree to which the two modalities interact; multisensory integration of subthreshold unisensory stimuli appears uncommon, and not significantly above events observed from single modalities. This work will be of interest to both synaptic physiologists and neurophysiologists working on sensory-motor integration.

    2. Reviewer #1 (Public Review):

      Summary:

      Otero-Coronel et al. address an important question for neuroscience - how does a premotor neuron capable of directly controlling behavior integrate multiple sources of sensory inputs to inform action selection? For this, they focused on the teleost Mauthner cell, long known to be at the core of a fast escape circuit. What is particularly interesting in this work is the naturalistic approach they took. Classically, the M-cell was characterized, both behaviorally and physiologically, using an unimodal sensory space. Here the authors make the effort (substantial!) to study the physiology of the M-cell taking into account both the visual and auditory inputs. They used well-informed electrophysiological approaches to decipher how the M-cell integrates the information of two sensory modalities depending on the strength and temporal relation between them.

      Strengths:

      The empirical results are convincing and well-supported. The manuscript is well-written and organized. The experimental approaches and the selection of stimulus parameters are clear and informed by the literature. The major finding is that multisensory integration increases the certainty of environmental information in an inherently noisy environment.

      Weaknesses:

      Even though the manuscript and figures are well organised, I found myself struggling to understand key points of the figures.

      For example, in Figure 1 it is not clear what the Tonic and Phasic components actually are. The figure would benefit from more details on this matter. Then, in Figure 4, labels for the traces in panel A are needed, since I was not able to tell that they were coming from different sensory pathways.

      In line 338 it should be optic tectum and not "optical tectum".

    3. Reviewer #2 (Public Review):

      Summary:

      In this manuscript, Otero-Coronel and colleagues use a combination of acoustic stimuli and electrical stimulation of the tectum to study MSI in the M-cells of adult goldfish. They first perform a necessary piece of groundwork in calibrating tectal stimulation for maximal M-cell MSI, and then characterize this MSI with slightly varying tectal and acoustic inputs. Next, they quantify the magnitude and timing of FFI that each type of input has on the M-cell, finding that both the tectum and the auditory system drive FFI, but that FFI decays more slowly for auditory signals. These are novel results that would be of interest to a broader sensory neuroscience community. By then providing pairs of stimuli separated by 50ms, they assess the ability of the first stimulus to suppress responses to the second, finding that acoustic stimuli strongly suppress subsequent acoustic responses in the M-cell, that they weakly suppress subsequent tectal stimulation, and that tectal stimulation does not appreciably inhibit subsequent stimuli of either type. Finally, they show that M-cell physiology mirrors previously reported behavioural data in which stronger stimuli underwent less integration.

      The manuscript is generally well-written and clear. The discussion of results is appropriately broad and open-ended. It's a good document. Our major concerns regarding the study's validity are captured in the individual comments below. In terms of impact, the most compelling new observation is the quantification of the FFI from the two sources and the logical extension of these FFI dynamics to M-cell physiology during MSI. It is also nice, but unsurprising, to see that the relationship between stimulus strength and MSI is similar for M-cell physiology to what has previously been shown for behavior. While we find the results interesting, we think that they will be of greatest interest to those specifically interested in M-cell physiology and function.

      Strengths:

      The methods applied are challenging and appropriate and appear to be well executed. Open questions about the physiological underpinnings of M-cell function are addressed using sound experimental design and methodology, and convincing results are provided that advance our understanding of how two streams of sensory information can interact to control behavior.

      Weaknesses:

      Our concerns about the manuscript are captured in the following specific comments, which we hope will provide a useful perspective for readers and actionable suggestions for the authors.

      Comment 1 (Minor):

      Line 124. Direct stimulation of the tectum to drive M-cell-projecting tectal neurons not only bypasses the retina, it also bypasses intra-tectal processing and inputs to the tectum from other sources (notably the thalamus). This is not an issue with the interpretation of the results, but this description gives the (false) impression that bypassing the retina is sufficient to prevent adaptation. Adding a sentence or two to accurately reflect the complexity of the upstream circuitry (beyond the retina) would be welcome.

      Comment 2 (Major):

      The premise is that stimulation of the tectum is a proxy for a visual stimulus, but the tectum also carries auditory, lateral line, and vestibular information. This seems like a confound in the interpretation of this preparation as a simple audio-visual paradigm. Minimally, this confound should be noted and addressed. The first heading of the Results should not refer to "visual tectal stimuli".

      Comment 3 (Major):

      Figure 1 and associated text.

      It is unclear, and not mentioned in the Methods section, how phasic and tonic responses were calculated. It is clear from the example traces that there is a change in tonic responses and the accumulation of subthreshold responses. Depending on how tonic responses were calculated, perhaps the authors could overlay a low-pass-filtered trace and/or show calculations based on the filtered trace at each tectal train duration.

      Comment 4 (Minor):

      Figure 3 and associated text.

      This is a lovely experiment. Although it is not stated in the text, it provides the logic for choosing a 50 ms interval in the next experiment. It would be great if the authors calculated the first timepoint at which the percentage of shunting inhibition is not significantly different from zero. This would provide a convincing basis for picking 50 ms for the next experiment. That said, I suspect that this time point would be earlier than 50 ms. This may explain and add further complexity to why the authors found mostly linear or sublinear integration, and perhaps provide the basis for future experiments testing different stimulus time intervals. Please move calculations to the Methods.

      Comment 5 (Major):

      Figure 4C and lines 398-410.

      These are beautiful examples of M-cell firing, but the text suggests that they occurred rarely and nowhere near significantly above events observed from single modalities. We do not see this as a valid result to report, because there is insufficient evidence that the phenomenon shown is consistent or representative of your data.

    4. Author response:

      Answers to Reviewer #1 (Public Review):

      (1) Tonic and phasic components in Figure 1 are not clear.

      We will reformulate Figure 1A to show how the tonic and phasic components were measured. As this point was also raised by Reviewer #2 (Comment 3), we will explicitly clarify this in the Methods section. We will modify the color scheme to improve clarity.

      (2) Labeling of traces in Figure 4.

      We will add labels to traces informing which sensory pathways were stimulated to produce each response.

      (3) Optic tectum instead of optical tectum.

      We apologize for the error. We will replace “optical tectum” with “optic tectum” as also suggested by Reviewer #2.

      Answers to Reviewer #2 (Public Review):

      (1) Complexity of tectum upstream circuitry (Comments 1 and 2).

      Processing of visual information is certainly a major role of the tectum, but it is true that it also receives inputs from other sensory pathways and structures. We will acknowledge this complexity in our revised manuscript, along with the suggested changes to the heading titles.

      (2) Figure 1 and associated text. 

      As mentioned in point 1 of our provisional answers to Reviewer #1, we will reformulate Figure 1A and clarify how tonic and phasic responses were calculated.

      (3) Figure 3 and associated text.

      We will perform the analysis suggested by the reviewer and move calculations to the Methods section as requested.

      (4) Figure 4C and lines 398-410.

      We will consider omitting Figure 4C or clearly stating its value in the context of the rest of the data and our previous behavioral experiments.

    1. at least 20 hours of lab time for you to practice Java programming.

      requirement for AP CSA

    2. 10 units shown in the table below

      The College Board pays attention especially to the 10 units below.

    3. 2D Array

      type of free-response question

    4. Array/ArrayList

      type of free-response question

    5. Classes

      type of free-response question

    6. Methods and Control Structures

      type of free-response question

    1. Lucy Calkins Retreats on Phonics in Fight Over Reading Curriculum by Dana Goldstein

      Not much talk of potentially splitting out methods for neurodivergent learners here. Teaching reading strategies may net out dramatically differently between neurotypical children and those with issues like dyslexia. Perceptual and processing issues may make some methods dramatically harder for some learners over others, and we still don't seem to have any respect for that.

      This is an interesting example of the generational die-out of old ideas and the adoption of new ones, as described in Kuhn's account of scientific revolutions.

    1. The Portal: a podcast hosted by Eric Weinstein. The Portal is a journey of discovery, featuring wide-ranging, deep-diving discussions with distinguished guests from the realms of science, culture and business. Join us as we seek portals that will carry us through the impossible, and beyond.
    1. George MacDonald (10 December 1824 – 18 September 1905) was a Scottish author, poet and Christian Congregational minister. He became a pioneering figure in the field of modern fantasy literature and the mentor of fellow-writer Lewis Carroll. In addition to his fairy tales, MacDonald wrote several works of Christian theology, including several collections of sermons.
    1. https://en.wikipedia.org/wiki/Matthew_effect

      The Matthew effect of accumulated advantage, sometimes called the Matthew principle, is the tendency of individuals to accrue social or economic success in proportion to their initial level of popularity, friends, and wealth. It is sometimes summarized by the adage or platitude "the rich get richer and the poor get poorer". The term was coined by sociologists Robert K. Merton and Harriet Zuckerman in 1968 and takes its name from the Parable of the Talents in the biblical Gospel of Matthew.

      related somehow to the [[Lindy effect]]?

    1. A critique of the Mass Media... The problem is that critics want the Mass Media system to operate on the code of "True/False" rather than "Known/Unknown"... But if it did, it would not be Mass Media anymore, but rather the Science System.

      For Mass Media to be Mass Media it needs to be concerned with selection and filtering, to condense and make known, not to present "all the facts". Sure, they need to be concerned with truth to a certain degree, but it's not the primary priority.


      This is a reflection based on my knowledge of Luhmann's theory of society as functionally differentiated systems; as explained by Hans-Georg Moeller (Carefree Wandering) on YouTube.

    2. Today, while listening to the song, I was reminded through reflection that it takes quite some self-awareness and intellectual humility to avoid rigorously defending uneducated opinions, especially in online intellectual communities.

      "Real knowledge is to know the extent of one's ignorance." -- Confucius

      Something that intellectuals must be aware of. We must be flexible in opinion and not defend that which we actually have no knowledge of.

      We can debate for Socratic purposes: to deepen our understanding, but not to persuade... The pitfall is that one might come to believe beyond doubt that which one debates for.

      The key is to become more aware of our debate behavior and to stop ourselves when we realize we can't actually prove what we think.

      This is especially critical for someone in the position of teacher or trusted advisor, someone who is looked up to. People more readily take such a person's opinion for granted based on "authority". As ethical intellectuals we must not abuse this, either on purpose or by accident. With great power comes great responsibility.

    1. Heiress to one of the world’s most powerful families. Her grandfather cut her out of the $15.4 BILLION family fortune after her scandal. But she fooled the world with her “dumb blond” persona and built a $300 MILLION business portfolio. This is the crazy story of Paris Hilton:

      Interesting thread about Paris Hilton.

      Main takeaway: Don't be quick to judge. Only form an opinion based on education: thorough, evidence-based research. If you don't want to invest the effort, then don't form an opinion. Simple as that.

      Similar to "Patience" by Nas & Damian Marley.

      Also Charlie Munger: "I never allow myself to have [express] an opinion about anything that I don't know the opponent side's argument better than they do."

    1. , like smoking, having sex, and taking drugs, that are discussed in health education classes, high school assemblies, and public service announcements on television

      They anticipate suicide, or suicidal feelings, as a common experience during adolescence that teens will come across at some point in their lives. It can be compared to inevitable urges like sexual desire or peer pressure.

    2. In fact, in anonymous forms of care, personal connections are supposed to be suppressed.

      As I mentioned before, this type of practice is contradictory to the personal connections that are crucial to sustainable well-being. It is ironic that in these hotline services "callers" are expected to share deep, personal thoughts, yet volunteers still place boundaries as a strategy to address these problems, which doesn't represent a fully safe space.

    3. We teach cleanliness but expect filth. We teach life as the ultimate value but expect death.

      It is important to understand one to fulfill the other. We need to experience a valuable life to learn that death is not something to fear, as it is inevitable.

    4. By turning people who are suffering into “clients” who become objects of suicide risk management tools, the counselor no longer has to cope with the existential anxiety that is raised by suicide and the specificity of the suffering one is witnessing

      I believe this could be counterproductive to addressing feelings of loneliness and the need for meaningful relationships. It is different to be heard by a friend or family member than by a professional.

    5. “Suicidal individuals themselves are positioned within this discourse of pathology as mentally unwell, and thus not fully responsible for their actions; instead, clinicians are taken to be the responsible, accountable, and possibly culpable agents in relation to their ‘suicidal patients.’

      I agree with this statement because children are not fully developed to understand their emotions. Parents are supposed to guide them and observe their behaviors to rectify them.

    6. “People who talk about suicide do it. Four out of five people who kill themselves have given out definite signals or talked to someone about it”

      Even just talking about suicide is concerning enough to show how much mental suffering the person is in, and that they should receive immediate treatment before their triggers are exacerbated.

    1. Welcome back. In this lesson, I want to introduce another core AWS service, the simple storage service known as S3. If you use AWS in production, you need to understand S3. This lesson will give you the very basics because I'll be deep diving into a specific S3 section later in the course, and the product will feature constantly as we go. Pretty much every other AWS service has some kind of interaction with S3. So let's jump in and get started.

      S3 is a global storage platform. It's global because it runs from all of the AWS regions and can be accessed from anywhere with an internet connection; it's a public service. It's also region-based, because your data is stored in a specific AWS region at rest. So when it's not being used, it's stored in a specific region, and it never leaves that region unless you explicitly configure it to. S3 is regionally resilient, meaning the data is replicated across availability zones in that region. S3 can tolerate the failure of an AZ, and it also has some ability to replicate data between regions, but more on that in the S3 section of the course.

      Now S3 might initially appear confusing. If you utilize it from the UI, you appear not to have to select a region. Instead, you select the region when you create things inside S3, which I'll talk about soon. S3 is a public service, so it can be accessed from anywhere as long as you have an internet connection. The service itself runs from the AWS public zone. It can cope with unlimited data amounts and it's designed for multi-user usage of that data. So millions of users could be accessing cute cat pictures added by the Animals for Life Rescue Officers. S3 is perfect for hosting large amounts of data. So think movies or audio distribution, large scale photo storage like stock images, large textual data or big data sets. It could be just as easily used for millions or billions of IOT devices or to store images for a blog. It scales from nothing to near unlimited levels.

      Now S3 is economical; it's a great-value service for storing and allowing access to data. And it can be accessed using a variety of methods: the GUI, the command line, the AWS APIs, or even standard methods such as HTTP. I want you to think of S3 as the default storage service in AWS. It should be your default starting point unless your requirement isn't delivered by S3, and I'll talk more about the limitations and use cases later in this lesson.

      S3 delivers two main things: objects and buckets. Objects are the data that S3 stores: your cat picture, the latest episode of Game of Thrones (which you have stored legally, of course), or large-scale datasets showing the migration of the koala population in Australia after a major bushfire. Buckets are containers for objects. It's buckets and objects that I want to cover in this lesson as an introduction to the service.

      So first, let's talk about objects. An object in S3 is made up of two main components and some associated metadata. First, there is the object key. For now, you can think of the object key as similar to a file name. The key identifies the object in a bucket, so if you know the object key and the bucket, then you can uniquely access the object, assuming that you have permissions. Remember, by default, even for public services, there is no access in AWS initially, except for the account root user.

      Now, the other main component of an object is its value. The value is the data or the contents of the object, in this case a sequence of binary data which represents a koala in his house. In this course, I want to avoid suggesting that you remember pointless values. Sometimes, though, there are things that you do need to commit to memory, and this is one of those times. The value of an object, in essence the object's size, can range from zero bytes up to five terabytes. So you can have an empty object, or you can have one that is a huge 5 TB. This range of object sizes is one of the reasons why S3 is so scalable and so useful in a wide range of situations.
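
      As a rough illustration of the key/value model (bucket and key names hypothetical): the key names the object, the Body is its value, and a zero-byte value is perfectly legal. Note that a single PUT only covers smaller objects; reaching the 5 TB maximum requires multipart upload.

      ```python
      import boto3

      s3 = boto3.client("s3")
      BUCKET = "koala-data"  # hypothetical bucket name

      # The key identifies the object; the Body is its value (the data itself).
      with open("koala1.jpg", "rb") as f:
          s3.put_object(Bucket=BUCKET, Key="koala1.jpg", Body=f)

      # A zero-byte object is allowed: values range from 0 bytes up to 5 TB.
      s3.put_object(Bucket=BUCKET, Key="empty-object", Body=b"")

      # Bucket + key uniquely identify an object, so reading it back needs both.
      data = s3.get_object(Bucket=BUCKET, Key="koala1.jpg")["Body"].read()
      ```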

      Now, the other components of an object aren't that important to know at this stage, but just so you hear the terms that I'll use later: objects also have a version ID, metadata, some access control, as well as sub-resources. Don't worry about what they do for now; I'll be covering them all later. For this lesson, just try to commit to memory what an object is, what components it has, and the size range for an object.

      So now that we've talked about objects, let's move on and look at buckets. Buckets are created in a specific AWS region. And let's use Sydney or ap-southeast-2 as an example. This has two main impacts. Firstly, your data that's inside a bucket has a primary home region. And it never leaves that region, unless you as an architect or one of your system admins configures that data to leave that region. That means that S3 has stable and controlled data sovereignty. By creating a bucket in a region, you can control what laws and rules apply to that data. What it also means is that the blast radius of a failure is that region.

      Now this might be a new term. What I mean by blast radius is that if a major failure occurs, say a natural disaster or a large-scale data corruption, the effect of that will be contained within the region. Now, a bucket is identified by its name, in this case koala-data. A bucket name needs to be globally unique, so that's across all regions and all accounts of AWS. If I pick a bucket name, in this case koala-data, nobody else can use it in any AWS account. I'm making a point of stressing this, as it often comes up in the exam. Most AWS things are unique in a region or unique in your account. For example, I can have an IAM user called Fred and you can also have an IAM user called Fred. Buckets, though, are different: with buckets, the name has to be totally unique, and that's across all regions and all AWS accounts. I've seen it come up in the exam a few times, so this is definitely a point to remember.
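
      To sketch both ideas in code (the bucket's home region and the global uniqueness of names), creating that bucket in ap-southeast-2 with boto3 might look like this. The name is hypothetical, and creation will fail if any AWS account anywhere already owns it.

      ```python
      import boto3
      from botocore.exceptions import ClientError

      s3 = boto3.client("s3", region_name="ap-southeast-2")

      try:
          # The bucket and its data at rest live in ap-southeast-2.
          s3.create_bucket(
              Bucket="koala-data",  # hypothetical; must be unique across ALL accounts
              CreateBucketConfiguration={"LocationConstraint": "ap-southeast-2"},
          )
      except ClientError as err:
          # BucketAlreadyExists (someone else owns the name) is the classic
          # symptom of the global-uniqueness rule.
          print(err.response["Error"]["Code"])
      ```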

      Now buckets can hold an unlimited number of objects. And because objects can range from zero to five TB in size, that essentially means that a bucket can hold from zero to unlimited bytes of data; it's an infinitely scalable storage system. Now, one of the most important things that I want to say in this lesson is that as an object storage system, an S3 bucket has no complex structure. It's flat: all objects are stored within the bucket at the same level. So this isn't like a file system where you can truly have files within folders, within folders. Everything is stored in the bucket at the root level.

      But if you do a listing on an S3 bucket, you will see what you think are folders; even the UI presents them as folders. It is important for you to know at this stage that that's not how it actually is. Imagine a bucket where you see three image files: koala1.JPEG, koala2.JPEG and koala3.JPEG. The first thing to note is that inside S3, there's no concept of file type based on the name; these are just three objects whose object keys are koala1.JPEG, koala2.JPEG and koala3.JPEG. Now, folders in S3 are represented when we have object names that are structured like this: keys of /old/koala1.JPEG, /old/koala2.JPEG and /old/koala3.JPEG. When we create object names like this, S3 presents them in the UI as a folder called old. Because the object names begin with /old/, S3 presents this as a folder called old, and inside that folder we've got koala1.JPEG, koala2.JPEG and koala3.JPEG.

      Now nine out of 10 times, this detail doesn't matter, but I want to make sure that you understand how it actually works. Folders are often referred to as prefixes in S3 because they're part of the object names. They prefix the object names. As you move through the various stages of your AWS learnings, this is gonna make more and more sense. And I'm gonna demonstrate this in the next lesson, which is a demo lesson.
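
      One way to see the prefix mechanism directly is to list the same bucket with and without a delimiter. Without one, you get the flat truth; with Delimiter="/", keys sharing a prefix are grouped into CommonPrefixes, which is exactly what the console draws as folders. A sketch, with hypothetical bucket and keys:

      ```python
      import boto3

      s3 = boto3.client("s3")
      BUCKET = "koala-data"  # hypothetical

      # Without a delimiter: one flat namespace of keys.
      flat = s3.list_objects_v2(Bucket=BUCKET)
      print([o["Key"] for o in flat.get("Contents", [])])
      # e.g. ['old/koala1.JPEG', 'old/koala2.JPEG', 'old/koala3.JPEG']

      # With a delimiter: keys sharing a prefix are grouped, and each
      # CommonPrefix is what the console renders as a "folder".
      grouped = s3.list_objects_v2(Bucket=BUCKET, Delimiter="/")
      print([p["Prefix"] for p in grouped.get("CommonPrefixes", [])])
      # e.g. ['old/']
      ```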

      Now to summarize: buckets are just containers; they're stored in a region, and for S3, they're generally where a lot of permissions and options are set. So remember that buckets are generally the default place where you should go to configure the way S3 works.

      Now, I want to cover a few summary items and then step through some patterns and anti-patterns for S3, before we move to the demo. But first, an exam powerup. These are things that you should try to remember, and they will really help in the exam. First, bucket names are globally unique. Remember that one, because it will really help in the exam. I've seen a lot of times where AWS have included trick questions which test your knowledge of this one. If you get an error and can't create a bucket, a lot of the time it's because somebody else already has that bucket name.

      Now bucket names do have some restrictions. They need to be between 3 and 63 characters, all lower case, with no underscores. They have to start with a lowercase letter or a number, and they can't be formatted like IP addresses, so you can't have 1.1.1.1 as your bucket name. There are also some limits in terms of buckets, and while limits are often things you don't need to remember for the exam, this is one that you do. There is a limit of 100 buckets per AWS account; this is not per region, it's for the entire account. That's a soft limit; you can increase it using support requests, up to a hard limit of 1,000.
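
      As a rough, unofficial sanity check of those naming rules (it deliberately skips some edge cases AWS also enforces, such as reserved prefixes), a small validator might look like this:

      ```python
      import ipaddress
      import re

      def looks_like_valid_bucket_name(name: str) -> bool:
          # 3-63 characters, lowercase letters, digits, dots and hyphens,
          # starting and ending with a letter or digit...
          if not re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name):
              return False
          # ...and not formatted like an IP address.
          try:
              ipaddress.ip_address(name)
              return False
          except ValueError:
              return True

      assert looks_like_valid_bucket_name("koala-data")
      assert not looks_like_valid_bucket_name("1.1.1.1")      # IP-formatted
      assert not looks_like_valid_bucket_name("Koala_Data")   # uppercase/underscore
      ```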

      Now this matters for architectural reasons; it's not just an arbitrary number. If you're designing a system which uses S3 and users of that system store data inside S3, you can't implement a solution that has one bucket per user if you have anywhere near this number of users. So if you have anywhere from a hundred to a thousand users or more of a system, then you're not gonna be able to have one bucket per user, because you'll hit this hard limit. You tend to find this in the exam quite often: it'll ask you how to structure a system which has potentially thousands of users. What you can do is take a single bucket and divide it up using prefixes, those folders that aren't really folders, and in that way you can have multiple users sharing one bucket, as sketched below. Remember the 100/1,000: it's a 100 soft limit and a 1,000 hard limit.
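
      A sketch of that pattern, dividing one shared bucket by per-user prefixes instead of creating a bucket per user (all names hypothetical):

      ```python
      import boto3

      s3 = boto3.client("s3")
      BUCKET = "app-user-data"  # one shared bucket; hypothetical name

      def upload_for_user(username: str, filename: str) -> None:
          # Each user gets a key prefix ("folder") rather than their own bucket,
          # so the 100 soft / 1,000 hard bucket limits never come into play.
          s3.upload_file(Filename=filename, Bucket=BUCKET, Key=f"home/{username}/{filename}")

      upload_for_user("fred", "report.pdf")
      ```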

      You aren't limited in terms of objects in a bucket, you can have zero to an infinite number of objects in a bucket. And each object can range in size from zero bytes to five TB in size. And then finally, in terms of the object structure, an object consists of a key, which is its name and then the value, which is the data. And there are other elements to an object which I'll discuss later in the course, but for now, just remember the two main components, the key and the value. Now, all of these points are worth noting down, maybe make them into a set of flashcards and you can use them later on during your studies.

      S3 is pretty straightforward in that there tend to be a number of things that it's really good at and a fairly small set of things that it's not suitable for. So let's take a look. S3 is an object storage system; it's not a file storage system, and it's not a block storage system, which are the other main types. What this means is that if you have a requirement where you're accessing the whole of an entity at once, the whole of an object such as an image or an audio file, then it's a candidate for object storage. If you have a Windows server which needs to access a network file system, then it's not S3 you need; that calls for file-based storage. S3 has no file system; it's flat, so you can't browse to an S3 bucket like you would a file share in Windows. Likewise, it's not block storage, which means you can't mount it as a mount point or a volume on Linux or Windows. When you're dealing with virtual machines or instances, you mount block storage to them; block storage is basically virtual hard disks. In EC2, you have EBS, which is block storage. Block storage is generally limited to one thing accessing it at a time, one instance in the case of EBS. S3 doesn't have that single-user limitation, but since it's not block storage, you can't mount it as a drive.

      S3 is great for large scale data storage or distribution. Many examples I'll show you throughout the course will fit into that category. And it's also good for offloading things. If you have a blog with lots of posts and lots of images or audio or movies, instead of storing that data on an expensive compute instance, you can move it to an S3 bucket and configure your blog software to point your users at S3 directly. You can often shrink your instance by offloading data onto S3. And don't worry, I'll be demoing this later in the course. Finally, S3 should be your default thought for any input to AWS services or output from AWS services. Most services which consume data and or output data can have S3 as an option to take data from or put data to when it's finished. So if you're faced with any exam questions and there's a number of options on where to store data, S3 should be your default. There are plenty of AWS services which can output large quantities of data or ingest large quantities of data. And most of the time, it's S3, which is an ideal storage platform for that service.

      Okay time for a quick demo. And in this demo, we're just gonna run through the process of creating a simple S3 bucket, uploading some objects to that bucket, and demonstrating exactly how the folder functionality works inside S3. And I'm also gonna demonstrate a number of elements of how access and permissions work with S3. So go ahead and complete this video, and when you're ready join me in the next, which is gonna be a demo of S3.

    1. Brainwave activity changes dramatically across the different stages of sleep.

      I find this interesting, and it is a little tricky to understand exactly how they graph brain activity and whether the sleep spindles are always completely accurate.

    1. FastDownload.io

      An online tool for downloading streaming videos from various platforms, supporting YouTube and TikTok.

    2. WebUI

      I need this for work.

    3. Content farms

      I don't like this kind of content.

    1. We spend approximately one-third of our lives sleeping. Given the average life expectancy for U.S. citizens falls between 73 and 79 years old

      I find it very surprising that we spend so much of our time on this earth sleeping; it's a fact I never thought about before.

    1. In one study on suicide in the U.S., the rising rates were closely linked with reductions in social welfare spending between 1960 and 1995.

      Social welfare is linked to one's overall well-being. The system should focus more on funding these programs to avoid detrimental effects. Those who die by suicide may "die by their own hand," but to put it into perspective: the U.S. holds the gun while these individuals pull the trigger.

    2. According to social strain theory, when there’s a large gap between the rich and poor, those at or near the bottom struggle more, making them more susceptible to addiction, criminality and mental illness than those at the top.

      This makes sense as lack of resources and difficulty of living can lead to mental health issues and unhealthy ways of coping.

    1. Poverty protects against suicide because it is a restraint in itself. No matter how one acts, desires have to depend upon resources to some extent; actual possessions are partly the criterion of those aspired to. So the less one has the less he is tempted to extend the range of his needs indefinitely

      When you have power, you are greedy to prove you can get more than what you already have. Poverty builds humility and resilience, as the poor tolerate more suffering and manage to survive in other ways without such resources.

    2. In reality they are an effect rather than a cause; they merely symbolize in abstract language and systematic form the physiological distress of the body social.

      The individual in this type of suicide perceives themselves as an abnormal part that disrupts the flow or functioning of society.

    3. Where collective sentiments are strong, it is because the force with which they affect each individual conscience is echoed in all the others, and reciprocally

      In other words, does this mean that family functioning affects the intensity with which individuals are affected? Does the energy that one or a few individuals possess affect others like a domino effect?

    4. Due to this extreme sensitivity of his nervous system, his ideas and feelings are always in unstable equilibrium.

      The intrusive symptoms contradict one's mental state and make it difficult to live a sustainable life. It seems difficult to predict events in order to prepare appropriate responses.

    5. for the excessive penetrability of a weakened nervous system makes it a prey to stimuli which would not excite a normal organism

      Can constant discomfort from physiological symptoms trigger mental urges toward suicide?

    1. The brain’s clock mechanism is located in an area of the hypothalamus known as the suprachiasmatic nucleus (SCN). The axons of light-sensitive neurons in the retina provide information to the SCN based on the amount of light present, allowing this internal clock to be synchronized with the outside world (Klein, Moore, & Reppert, 1991; Welsh, Takahashi, & Kay, 2010) (Figure 4.3).

      This fact is surprising; it is strange to think that our body naturally gets tired and ready for bed based on how much light it takes in.

    1. He was Spotify's most streamed artist in both 2020 and 2021. Now his new album, "Un Verano Sin Ti," has set its own round of streaming records. This past Friday, the day it came out, Bad Bunny received the most streams any artist has ever registered in a single day, with more than 183 million.

      Logos is used here to present the facts of his success, showing the audience the exact scale of that success and informing them.

    2. "El Ultimo Tour Del Mundo," was the first entirely Spanish-language record ever to hit No. 1 on the U.S. Billboard albums chart.

      Logos and ethos are clearly used here: as soon as you read this, you recognize the credibility, because it is based on facts. Whether you like him or not, he has achieved quite the accomplishment.

    1. Chapter Outline

      This chapter seems to deal with genetics, the brain, the nervous system, and the endocrine system. It also seems to cover how all of these affect people and how that can be studied through psychology. The most important sections were:

      - Charles Darwin: he is known as the father of evolution, which is important when studying how humans act and make decisions
      - Genetics and Behavior: behavior is the main focus of psychology, so this seems central
      - Neurotransmitters and drugs: drugs heavily impair the decisions a person makes
      - Parts of the nervous system: this system is important in how we feel and perceive our environment, which is important for psychology
      - Brain imaging: studying the medical aspect of psychology