- Oct 2024
-
-
Transmission of pulpal infections to apical tissues through the apical foramen
-
There is a possibility of providing treatment without the need for surgery, or the treatment can be provided by alveolar surgery.
-
-
www.americanyawp.com
-
Oneida Declaration of Neutrality, 1775
The purpose of this document is to outline the Oneida's stance on the American Revolution and express their middle position between the colonists and Great Britain. They address the governor and leaders of the New England colonies to declare their neutrality, and urge both sides to do the same by co-existing with each other.
-
Signed by the Chief Warriors of the Oneida: William Sunoghsis, Viklasha Watshaleagh, William Kanaghquassea, Peter Thayehcase, Germine Tegayavher, Nickhes Ahsechose, Thomas Yoghtanawca, Adam Ohonwano, Quedellis Agwerondongwas, Handerchiko Tegahpreahdyen, John Skeanender, Thomas Teorddeatha.
Is this from all the Oneida chiefs, to show a unanimous decision within the tribe? Or are these only the chiefs who agreed to neutrality?
-
and taken umbrage that we Indians refuse joining in the contest; we are for peace.
Question: Why did the Native tribe change their mind about supporting the colonists? How long did it take them to change their mind? Upon further research, the Oneida tribe was the first Native American tribe to pledge allegiance to the colonists, so what made them change their mind from this letter?
-
brother Governour, and all the other chiefs in New-England.
Question: Why do the natives refer to the leaders of New England as "chiefs"? Do they do so because they have a similar government set-up in their tribes and see the New England leaders as equals/allies within North America?
-
dispute between two brothers
Who: The letter often uses the metaphor of the "two brothers" for the American Colonies and Great Britain. It is used to press the Native call for peace, in the sense that the two nations should co-exist with one another as they are closely related. The Natives also go on to say that "you are two brothers of one blood," referring back to the colonists' roots in England and conveying that the American Colonies are more closely connected to Great Britain than they wish to think.
-
June 19, 1775
When: This was around two months after the start of the American Revolution, when the first shots were fired at the battles of Lexington and Concord. This marks the point where the conflict between the American Colonies and the British escalated into open war, and it makes sense that the tribe felt they needed to state their stance as war neared early America.
-
The Oneida nation, one of the Six Nations of the Haudenosaunee (Iroquois)
Who: A Native American tribe of the Iroquois who resided in central New York and later Wisconsin. They often interacted with the early northeastern colonies. Upon further research, the tribe eventually took the side of the American colonies and even fought alongside them during the American Revolution. For this, they were known as "The First Ally of America" during the war.
-
an alien, a foreign Nation
Definition: "Alien" and "foreign Nation" are used in this context to describe an outside country or force. The Natives are not using this to refer to Great Britain, however, as they see the colonies and Great Britain as needing to be one.
-
Should the great King of England apply to us for our aid, we shall deny him. If the Colonies apply, we will refuse.
Expressing how the Native tribes will refuse favor from both sides in an effort to remain neutral.
-
a formal declaration of neutrality
Definition: Announcing that they will not be choosing sides, instead remaining in the middle of the conflict between the colonies and Great Britain.
-
-
jitc.bmj.com
-
Society for Immunotherapy of Cancer (SITC) clinical practice guideline on immunotherapy for the treatment of lung cancer and mesothelioma
SITC continuously evaluates the field for emerging data and new FDA approvals. Updates to the recommendations, tables, treatment algorithms, and/or guideline text in this publication are made with the approval of the SITC Lung Cancer and Mesothelioma Immunotherapy Guideline Expert Panel. More information on the SITC Guidelines can be found at sitcancer.org/guidelines.
v2.1(A) Update Summary
An addendum to the SITC Lung Cancer and Mesothelioma CPG is currently in preparation to address new FDA approvals and practice-changing data for the treatment of metastatic or locally advanced NSCLC, resectable NSCLC, LS-SCLC, and malignant pleural mesothelioma using ICI-based treatment regimens. The update will also address the FDA approval of the bispecific antibody tarlatamab for the treatment of ES-SCLC.
-
-
www.americanyawp.com
-
; I have now nothing more to speak but my desire that you may still retain (what I know you do) that love with which I daily was blest and that readiness in pardoning whatsoever you find amiss, and to believe that my affections are not changed with the Climate unless like it too, grown warmer,
This shows why we need to consider what Thomas's account offers. His account shapes readers' views of the settlers and their lifestyles, and knowing his desire to keep the peace gives a better understanding of the settlers and their relationships. Understanding the perspectives of both parties offers stronger insight to those reviewing and learning history as it was and is.
-
Thomas Newe’s account of his experience in Carolina offers an interesting counter to Robert Horne’s prediction of what would await settlers. Newe describes deadly disease, war with Native Americans, and unprepared colonists
This line answers the question of who offered a counter to predictions about the settlers' lifestyle. Knowing how Newe contradicts Horne allows a deeper understanding of what the settlers were about to face: it tells the reader not only what occurred, but also shapes a more thorough view of the settlers' situation and of the gap between promise and reality.
-
-
viewer.athenadocs.nl
-
The first phase of the process focuses on establishing boundaries, strengthening coping strategies and maintaining trust. Techniques that prove helpful include keeping diaries, viewing family photos, recording sessions that are replayed in later sessions, and teaching relaxation and mindfulness techniques. The end of the initial therapeutic phase can be recognized when the mother-offender is able to acknowledge, if only superficially, the abuse she committed.

At the second stage, the development of empathy for the child can begin, because only then can the mother understand the extent of the suffering. At this point, one's own guilt is also considered more than the influence of someone else's behavior. Traumatic memories from one's own childhood tend to come up, and this often leads to family separation as well.

The content of the final phase is not yet concrete, mainly because few offenders complete the process in its entirety. The new relationship an offender is "allowed" to establish with the child-victim becomes central. Empathy, full recognition of the abuse, and the development of a new and more appropriate attachment bond become the focus of treatment. There is no data on the average length of treatment, but it is known that many offenders are in treatment for more than 5 years.

Therapy with other family members
According to Parnell and Day's (1998) model, treatment of family members should occur at the same time as that of the offender. This allows treatment providers to set up a plan that is adequate for everyone in the family. The father of the child-victim should participate in individual therapy to process his new awareness of his family, especially the abuse of his children. Since the child-victims often cannot yet talk, play therapy seems so far the most appropriate form of treatment; the child can thus learn to adapt to the new family dynamics. Little can be found in the literature on the treatment of siblings.
Available information is limited to a review of their medical history and mortality, or to a cursory mention of their involvement in legal proceedings. Many of the most important facets of this syndrome remain unknown, especially its prevalence, course and mortality rates. Only clear education of the professionals tasked with helping these families can provide more accurate diagnoses and protect victims.

Legal Issues
Child protection agencies supervise the family even years after the maltreatment was identified. In addition, many mothers face criminal charges.
Here is a summary of the most important parts of this text:
-
First phase of treatment: The focus is on setting boundaries, strengthening coping strategies and maintaining trust. Techniques such as keeping diaries, viewing family photos, replaying recordings of sessions, and applying relaxation and mindfulness techniques are considered useful. This phase ends when the mother (offender) can acknowledge, if only superficially, that she abused the child.
-
Second phase: In this phase the mother begins to develop empathy for the child and to acknowledge her own guilt, without blaming others. Traumatic memories from her own childhood may surface, which often leads to family separation.
-
Final phase: This phase focuses on building a new and healthy bond between mother and child. Empathy, full acknowledgement of the abuse, and the development of an appropriate attachment are central. However, many offending mothers do not complete the full treatment, and treatment can last more than five years.
-
Therapy for other family members: According to Parnell and Day's (1998) model, family members should be treated at the same time as the offender. The father receives individual therapy to process his emotions about the abuse, and the child often receives play therapy to adapt to the new family dynamics. Little is known about the treatment of siblings.
-
Legal aspects: After the abuse is identified, child protective services supervise the family, sometimes for years. Many mothers also face criminal prosecution.
Much remains unknown about this syndrome, such as its prevalence, course and mortality rates. Only by properly training professionals can victims be better protected and more accurate diagnoses be made.
-
-
1. The doctor suggests viewing the diagnosis as an alternative which should be ruled out during a period of intensive surveillance. 2. Investigators place cameras in the child's hospital environment or arrange a search warrant. 3. A doctor reviews the victim's medical records. These records can provide valuable information about patterns of abuse.
After a PCF diagnosis, there are three possible follow-up steps:
- The doctor proposes treating the diagnosis as an alternative possibility to be ruled out through intensive observation.
- Investigators place cameras in the child's hospital environment or obtain a search warrant.
- A doctor reviews the victim's medical records. These can provide valuable information about patterns of abuse.
-
Management of MBP cases
From experience, it is important for an MBP case to have a multidisciplinary approach. Of great importance once the question of MBP is raised is the careful collection of all information necessary to rule out or confirm the diagnosis of PCF. The information needed includes not only current but also previous medical records of the children and the mother-offender, additional interviews with those familiar with the daily activities of the family (i.e., schools, daycare centers, etc.), and any history that may be provided by individual family members. The investigation of MBP abuse cases often takes place in hospitals through video recordings. If abuse is found during observation, the child-victim and siblings must be immediately protected from further contact with the mother-offender. This type of evidence is also very likely to lead to arrest.

Diagnostic procedures in MBP studies
The diagnosis of MBP depends primarily on a pediatrician's determination that the child's symptoms are not the result of a true medical condition. Therefore, the first step of child protective services is usually to refer the child-victim for evaluation by a pediatrician. In some cases, this examination is enough to diagnose PCF.
When handling a Munchausen by Proxy (MBP) case, a multidisciplinary approach is crucial. As soon as MBP is suspected, all relevant information must be carefully collected to confirm or rule out the diagnosis of Pediatric Condition Falsification (PCF). This includes the current and previous medical records of the child and the mother, interviews with people who know the family (such as schools and daycare centers), and information from family members. Investigations often take place in hospitals, sometimes with video surveillance. If abuse is established, the child and siblings must be protected immediately from contact with the mother. Diagnosis often begins with a pediatrician determining whether the symptoms have a medical cause; this can be sufficient to diagnose PCF.
-
Although self-inflicted dermatosis was already recognized by the World Health Organization in 1948.
This means that certain skin problems a person causes to themselves, such as by scratching, burning or other forms of self-harm, have long been recognized as a medical issue. It shows that self-harm of the skin is taken seriously in the medical world and has long received attention.
-
The problem with the criminological model is that it assumes general characteristics of malingerers rather than distinctive ones. For example, many malingerers have an antisocial background, but so do many people with disorders.
-
-
www.biorxiv.org
-
Author response:
The following is the authors’ response to the original reviews.
eLife Assessment
This important work presents a new methodology for the statistical analysis of fiber photometry data, improving statistical power while avoiding the bias inherent in the choices that are necessarily made when summarizing photometry data. The reanalysis of two recent photometry data sets, the simulations, and the mathematical detail provide convincing evidence for the utility of the method and the main conclusions, however, the discussion of the re-analyzed data is incomplete and would be improved by a deeper consideration of the limitations of the original data. In addition, consideration of other data sets and photometry methodologies including non-linear analysis tools, as well as a discussion of the importance of the data normalization are needed.
Thank you for reviewing our manuscript and giving us the opportunity to respond and improve our paper. In our revision, we have strived to address the points raised in the comments, and implement suggested changes where feasible. We have also improved our package and created an analysis guide (available on our Github - https://github.com/gloewing/fastFMM and https://github.com/gloewing/photometry_fGLMM), showing users how to apply our methods and interpret their results. Below, we provide a detailed point-by-point response to the reviewers.
Reviewer #1:
Summary:
Fiber photometry has become a very popular tool in recording neuronal activity in freely behaving animals. Despite the number of papers published with the method, as the authors rightly note, there are currently no standardized ways to analyze the data produced. Moreover, most of the data analyses confine to simple measurements of averaged activity and by doing so, erase valuable information encoded in the data. The authors offer an approach based on functional linear mixed modeling, where beyond changes in overall activity various functions of the data can also be analyzed. More in-depth analysis, more variables taken into account, and better statistical power all lead to higher quality science.
Strengths:
The framework the authors present is solid and well-explained. By reanalyzing formerly published data, the authors also further increase the significance of the proposed tool opening new avenues for reinterpreting already collected data.
Thank you for your favorable and detailed description of our work!
Weaknesses:
However, this also leads to several questions. The normalization method employed for raw fiber photometry data is different from lab to lab. This imposes a significant challenge to applying a single tool of analysis.
Thank you for these important suggestions. We agree that many data pre-processing steps will influence the statistical inference from our method. Note, though, that this would also be the case with standard analysis approaches (e.g., t-tests, correlations) applied to summary measures like AUCs. For that reason, we do not believe that variability in pre-processing is an impediment to widespread adoption of a standard analysis procedure. Rather, we would argue that the sensitivity of analysis results to pre-processing choices should motivate the development of statistical techniques that reduce the need for pre-processing, and properly account for structure in the data arising from experimental designs. For example, even without many standard pre-processing steps, FLMM provides smooth estimation results across trial timepoints (i.e., the “functional domain”), has the ability to adjust for between-trial and -animal heterogeneity, and provides a valid statistical inference framework that quantifies the resulting uncertainty. We appreciate the reviewer’s suggestion to emphasize and further elaborate on our method from this perspective. We have now included the following in the Discussion section:
“FLMM can help model signal components unrelated to the scientific question of interest, and provides a systematic framework to quantify the additional uncertainty from those modeling choices. For example, analysts sometimes normalize data with trial-specific baselines because longitudinal experiments can induce correlation patterns across trials that standard techniques (e.g., repeated measures ANOVA) may not adequately account for. Even without many standard data pre-processing steps, FLMM provides smooth estimation results across trial time-points (the “functional domain”), has the ability to adjust for between-trial and -animal heterogeneity, and provides a valid statistical inference approach that quantifies the resulting uncertainty. For instance, session-to-session variability in signal magnitudes or dynamics (e.g., a decreasing baseline within-session from bleaching or satiation) could be accounted for, at least in part, through the inclusion of trial-level fixed or random effects. Similarly, signal heterogeneity due to subject characteristics (e.g., sex, CS+ cue identity) could be incorporated into a model through inclusion of animal-specific random effects. Inclusion of these effects would then influence the width of the confidence intervals. By expressing one’s “beliefs” in an FLMM model specification, one can compare models (e.g., with AIC). Even the level of smoothing in FLMM is largely selected as a function of the data, and is accounted for directly in the equations used to construct confidence intervals. This stands in contrast to “trying to clean up the data” with a pre-processing step that may have an unknown impact on the final statistical inferences.”
Does the method that the authors propose work similarly efficiently whether the data are normalized in a running average dF/F as it is described in the cited papers? For example, trace smoothing using running averages (Jeong et al. 2022) in itself may lead to pattern dilution.
By modeling trial signals as “functions”, the method accounts for and exploits correlation across trial timepoints and, as such, any pre-smoothing of the signals should not negatively affect the validity of the 95% CI coverage. It will, however, change inferential results and the interpretation of the data, but this is not unique to FLMM, or many other statistical procedures.
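The intuition that modeling signals as smooth functions "borrows strength" across neighboring time-points can be illustrated with a toy Python simulation. This is entirely hypothetical data, and a crude moving average stands in for the spline-based smoothing that functional methods actually use; it only demonstrates why exploiting smoothness reduces estimation error.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setting: a smooth true effect curve over trial time,
# estimated noisily at each of 100 time-points (as pointwise models would).
t = np.linspace(0, 3, 100)
true_effect = np.exp(-((t - 1.5) ** 2) / 0.18)   # smooth bump around 1.5 s
pointwise_est = true_effect + rng.normal(0, 0.3, t.size)

def moving_average(x, w=9):
    """Borrow strength across neighboring time-points (crude stand-in
    for the smoothing performed by functional approaches)."""
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

smoothed_est = moving_average(pointwise_est)

# Smoothing trades a little bias at the peak for a large variance reduction.
mse_pointwise = np.mean((pointwise_est - true_effect) ** 2)
mse_smoothed = np.mean((smoothed_est - true_effect) ** 2)
print(f"MSE, pointwise estimates: {mse_pointwise:.4f}")
print(f"MSE, smoothed estimates:  {mse_smoothed:.4f}")
```

Because the true effect varies smoothly, averaging neighboring estimates removes mostly noise, which is the bias-variance tradeoff that functional smoothing exploits.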
The same question applies if the z-score is calculated based on various responses or even baselines. How reliable is the method if the data are non-stationary and the baselines undergo major changes between separate trials?
Adjustment for trial-to-trial variability in signal magnitudes or dynamics could be accounted for, at least in part, through the inclusion of trial-level random effects. This heterogeneity would then influence the width of the confidence intervals, directly conveying the effect of the variability on the conclusions being drawn from the data. This stands in contrast to “trying to clean up the data” with a pre-processing step that may have an unknown impact on the final statistical inferences. Indeed, non-stationarity (e.g., a decreasing baseline within-session) due to, for example, measurement artifacts (e.g., bleaching) or behavioral causes (e.g., satiation, learning) should, if possible, be accounted for in the model. As mentioned above, one can often achieve the same goals that motivate pre-processing steps by instead applying specific FLMM models (e.g., that include trial-specific intercepts to reflect changes in baseline) to the unprocessed data. One can then compare model criteria in an objective fashion (e.g., with AIC) and quantify the uncertainty associated with those modeling choices. Even the level of smoothing in FLMM is largely selected as a function of the data, and is accounted for directly in the equations used to construct confidence intervals. In sum, our method provides both a tool to account for challenges in the data, and a systematic framework to quantify the additional uncertainty that accompanies accounting for those data characteristics.
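As a toy illustration of comparing models that do or do not account for animal-level heterogeneity via AIC, here is a Python sketch on simulated data. It uses ordinary least squares with fixed animal effects as a simplified stand-in for the mixed models discussed above; all variable names, effect sizes, and the data itself are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated signal summaries: 8 animals x 20 trials, with animal-specific
# baselines and a shared within-session drift (e.g., bleaching/satiation).
n_animals, n_trials = 8, 20
animal_fx = rng.normal(0, 0.5, n_animals)
trial = np.tile(np.arange(n_trials), n_animals)
animal = np.repeat(np.arange(n_animals), n_trials)
y = 1.0 + animal_fx[animal] - 0.03 * trial + rng.normal(0, 0.3, n_animals * n_trials)

def gaussian_aic(y, X):
    """AIC of an ordinary least-squares fit with Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = y.size
    sigma2 = resid @ resid / n                 # MLE of the error variance
    llf = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1                         # coefficients + variance
    return -2 * llf + 2 * k

# Model A: within-session drift only.
# Model B: drift plus animal-specific intercepts (dummy coding, animal 0 as reference).
X_a = np.column_stack([np.ones_like(trial, dtype=float), trial])
X_b = np.column_stack([X_a, (animal[:, None] == np.arange(1, n_animals)).astype(float)])

print(f"AIC, drift only:             {gaussian_aic(y, X_a):.1f}")
print(f"AIC, drift + animal effects: {gaussian_aic(y, X_b):.1f}")
```

The model that accounts for the (true, simulated) animal heterogeneity attains the lower AIC despite its extra parameters, mirroring the objective model comparison described above.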
Finally, what is the rationale for not using non-linear analysis methods? Following the paper’s logic, non-linear analysis can capture more information that is diluted by linear methods.
This is a good question that we imagine many readers will be curious about as well. We have added in notes to the Discussion and Methods Section 4.3 to address this (copied below). We thank the reviewer for raising this point, as your feedback also motivated us to discuss this point in Part 5 of our Analysis Guide.
Methods
“FLMM models each trial’s signal as a function that varies smoothly across trial time-points (i.e., along the “functional domain”). It is thus a type of non-linear modeling technique over the functional domain, since we do not assume a linear model (straight line). FLMM and other functional data analysis methods model data as functions, when there is a natural ordering (e.g., time-series data are ordered by time, imaging data are ordered by x-y coordinates), and are assumed to vary smoothly along the functional domain (e.g., one assumes values of a photometry signal at close time-points in a trial have similar values). Functional data analysis approaches exploit this smoothness and natural ordering to capture more information during estimation and inference.”
Discussion
“In this paper, we specified FLMM models with linear covariate–signal relationships at a fixed trial time-point across trials/sessions, to compare the FLMM analogue of the analyses conducted in (Jeong et al., 2022). However, our package allows modeling of covariate–signal relationships with non-linear functions of covariates, using splines or other basis functions. One must consider, however, the tradeoff between flexibility and interpretability when specifying potentially complex models, especially since FLMM is designed for statistical inference.”
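The idea of modeling a covariate-signal relationship with a basis expansion can be sketched in a few lines of Python. This is a hypothetical example, not the fastFMM implementation: it builds a truncated-power spline basis by hand for a made-up saturating covariate effect at a single time-point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical covariate: reward number within a session (1..30) for
# 6 animals, with a saturating (non-linear) effect on the signal.
x = np.tile(np.arange(1, 31), 6).astype(float)
y = 1.5 * (1 - np.exp(-x / 8)) + rng.normal(0, 0.2, x.size)

def tps_basis(x, knots):
    """Truncated-power spline basis: [1, x, (x - k)_+ for each knot]."""
    cols = [np.ones_like(x), x]
    cols += [np.clip(x - k, 0, None) for k in knots]
    return np.column_stack(cols)

X = tps_basis(x, knots=[8, 16, 24])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta

# Compare to a straight-line fit: the spline tracks the saturation.
Xlin = np.column_stack([np.ones_like(x), x])
blin, *_ = np.linalg.lstsq(Xlin, y, rcond=None)
rss_spline = np.sum((y - fitted) ** 2)
rss_linear = np.sum((y - Xlin @ blin) ** 2)
print(f"RSS linear: {rss_linear:.2f}")
print(f"RSS spline: {rss_spline:.2f}")
```

Since the spline basis nests the linear one, its residual sum of squares can only be lower; the flexibility-versus-interpretability tradeoff noted above is the price of the extra basis columns.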
Reviewer #2:
Summary:
This work describes a statistical framework that combines functional linear mixed modeling with joint 95% confidence intervals, which improves statistical power and provides less conservative statistical inferences than in previous studies. As recently reviewed by Simpson et al. (2023), linear regression analysis has been used extensively to analyze time series signals from a wide range of neuroscience recording techniques, with recent studies applying them to photometry data. The novelty of this study lies in 1) the introduction of joint 95% confidence intervals for statistical testing of functional mixed models with nested random-effects, and 2) providing an open-source R package implementing this framework. This study also highlights how summary statistics as opposed to trial-by-trial analysis can obscure or even change the direction of statistical results by reanalyzing two other studies.
Strengths:
The open-source package in R, using a similar syntax as the lme4 package for the implementation of this framework on photometry data, enhances accessibility and usage by other researchers. Moreover, the decreased fitting time of the model in comparison with a similar package on simulated data makes the method more likely to be adopted.
The reanalysis of two studies using summary statistics on photometry data (Jeong et al., 2022; Coddington et al., 2023) highlights how trial-by-trial analysis at each time-point on the trial can reveal information obscured by averaging across trials. Furthermore, this work also exemplifies how session and subject variability can lead to opposite conclusions when not considered.
We appreciate the in-depth description of our work and, in particular, the R package. This is an area where we put a lot of effort, since our group is very concerned with the practical experience of users.
Weaknesses:
Although this work has reanalyzed previous work that used summary statistics, it does not compare with other studies that use trial-by-trial photometry data across time-points in a trial. As described by the authors, fitting pointwise linear mixed models and performing t-tests with Benjamini–Hochberg correction, as performed in Lee et al. (2019), has some caveats. Using joint confidence intervals has the potential to improve statistical robustness; however, this is not directly shown with temporal data in this work. Furthermore, it is unclear how FLMM differs from the pointwise linear mixed modeling used in this work.
Thank you for making this important point. We agree that this offers an opportunity to showcase the advantages of FLMM over non-functional data analysis methods, such as the approach applied in Lee et al. (2019). As mentioned in the text, fitting entirely separate models at each trial timepoint (without smoothing regression coefficient point and variance estimates across timepoints), and applying multiple comparisons corrections as a function of the number of time points has substantial conceptual drawbacks. To see why, consider that applying this strategy with two different sub-sampling rates requires adjustment for different numbers of comparisons, and could thus lead to very different proportions of timepoints achieving statistical significance. In light of your comments, we decided that it would be useful to provide a demonstration of this. To that effect, we have added Appendix Section 2 comparing FLMM with the method in Lee et al. (2019) on a real dataset, and show that FLMM yields far less conservative and more stable inference across different sub-sampling rates. We conducted this comparison on the delay-length experiment (shown in Figure 6) data, sub-sampled at evenly spaced intervals at a range of sampling rates. We fit either a collection of separate linear mixed models (LMM) followed by a Benjamini–Hochberg (BH) correction, or FLMM with statistical significance determined with both Pointwise and Joint 95% CIs. As shown in Appendix Tables 1-2, the proportion of timepoints at which effects are statistically significant with FLMM Joint CIs is fairly stable across sampling rates. In contrast, the percentage is highly inconsistent with the BH approach and is often highly conservative. This illustrates a core advantage of functional data analysis methods: borrowing strength across trial timepoints (i.e., the functional domain), can improve estimation efficiency and lower sensitivity to how the data is sub-sampled. 
A multiple comparisons correction may, however, yield stable results if one first smooths both regression coefficient point and variance estimates. Because this includes smoothing the coefficient point and variance estimates, this approach would essentially constitute a functional mixed model estimation strategy that uses multiple comparisons correction instead of a joint CI. We have now added in a description of this experiment in Section 2.4 (copied below).
“We further analyze this dataset in Appendix Section 2, to compare FLMM with the approach applied in Lee et al. (2019) of fitting pointwise LMMs (without any smoothing) and applying a Benjamini–Hochberg (BH) correction. Our hypothesis was that the Lee et al. (2019) approach would yield substantially different analysis results, depending on the sampling rate of the signal data (since the number of tests being corrected for is determined by the sampling rate). The proportion of timepoints at which effects are deemed statistically significant by FLMM joint 95% CIs is fairly stable across sampling rates. In contrast, that proportion is both inconsistent and often low (i.e., highly conservative) across sampling rates with the Lee et al. (2019) approach. These results illustrate the advantages of modeling a trial signal as a function, and conducting estimation and inference in a manner that uses information across the entire trial.”
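The sampling-rate sensitivity described above can be illustrated with a small, self-contained Python simulation (hypothetical z-statistics, not the delay-length data): a single smooth effect is tested pointwise at three sub-sampling rates, and the number of time-points surviving a Benjamini–Hochberg correction is reported for each.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def bh_n_rejected(pvals, alpha=0.05):
    """Benjamini-Hochberg: the number of rejections is the largest k with
    p_(k) <= (k/m) * alpha, where p_(k) are the sorted p-values."""
    p = np.sort(pvals)
    m = p.size
    below = p <= alpha * np.arange(1, m + 1) / m
    return int(np.nonzero(below)[0].max()) + 1 if below.any() else 0

def two_sided_p(z):
    # p = 2 * (1 - Phi(|z|)), written via the complementary error function
    return np.array([math.erfc(abs(v) / math.sqrt(2)) for v in z])

# One smooth underlying effect (a response in the middle third of a 3 s
# trial, effect size 2.5 in z units), tested pointwise at three rates.
results = {}
for m in (25, 100, 400):
    t = np.linspace(0, 3, m)
    z = np.where((t > 1) & (t < 2), 2.5, 0.0) + rng.normal(0, 1, m)
    n_sig = bh_n_rejected(two_sided_p(z))
    results[m] = n_sig
    print(f"{m:4d} time-points: {n_sig:3d} significant ({100 * n_sig / m:.0f}%)")
```

Because the number of corrected tests changes with the sampling rate while the underlying effect does not, the fraction of significant time-points can shift across rates, which is the instability the appendix comparison documents.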
In this work, FLMM usages included only one or two covariates. However, in complex behavioral experiments, where variables are correlated, more than two may be needed (see Simpson et al. (2023), Engelhard et al. (2019); Blanco-Pozo et al. (2024)). It is not clear from this work, how feasible computationally would be to fit such complex models, which would also include more complex random effects.
Thank you for bringing this up, as we endeavored to create code that is able to scale to complex models and large datasets. We agree that highlighting this capability in the paper will strengthen the work. We now state in the Discussion section that “[T]he package is fast and maintains a low memory footprint even for complex models (see Section 4.6 for an example) and relatively large datasets.” Methods Section 4.6 now includes the following:
Our fastFMM package scales to the dataset sizes and model specifications common in photometry. The majority of the analyses presented in the Results Section (Section 2) included fairly simple functional fixed and random effect model specifications because we were implementing the FLMM versions of the summary measure analyses presented in Jeong et al. (2022). However, we fit the following FLMM to demonstrate the scalability of our method with more complex model specifications:
We use the same notation as the Reward Number model in Section 4.5.2, with the additional variable TL_{i,j,l} denoting the Total Licks on trial j of session l for animal i. In a dataset with over 3,200 total trials (pooled across animals), this model took ∼1.2 min to fit on a MacBook Pro with an Apple M1 Max chip with 64GB of RAM. Model fitting had a low memory footprint. This can be fit with the code:
model_fit = fui(photometry ~ session + trial + iri + lick_time + licks +
                  (session + trial + iri + lick_time + licks | id),
                parallel = TRUE, data = photometry_data)
This provides a simple illustration of the scalability of our method. The code (including timing) for this demonstration is now included on our Github repository.
Reviewer #3:
Summary:
Loewinger et al. extend a previously described framework (Cui et al., 2021) to provide new methods for statistical analysis of fiber photometry data. The methodology combines functional regression with linear mixed models, allowing inference on complex study designs that are common in photometry studies. To demonstrate its utility, they reanalyze datasets from two recent fiber photometry studies into mesolimbic dopamine. Then, through simulation, they demonstrate the superiority of their approach compared to other common methods.
Strengths:
The statistical framework described provides a powerful way to analyze photometry data and potentially other similar signals. The provided package makes this methodology easy to implement and the extensively worked examples of reanalysis provide a useful guide to others on how to correctly specify models.
Modeling the entire trial (functional regression) removes the need to choose appropriate summary statistics, removing the opportunity to introduce bias, for example in searching for optimal windows in which to calculate the AUC. This is demonstrated in the re-analysis of Jeong et al., 2022, in which the AUC measures presented masked important details about how the photometry signal was changing.
Meanwhile, using linear mixed models allows for the estimation of random effects, which are an important consideration given the repeated-measures design of most photometry studies.
We thank the reviewer for their deep reading and understanding of our paper and method, and for the thoughtful feedback provided. We agree with this summary and respond in detail to all the concerns raised below.
Weaknesses:
While the availability of the software package (fastFMM), the provided code, and worked examples used in the paper are undoubtedly helpful to those wanting to use these methods, some concepts could be explained more thoroughly for a general neuroscience audience.
Thank you for this point. While we went to great effort to explain things clearly, our efforts to be concise likely resulted in some lack of clarity. To address this, we have created a series of analysis guides for a more general neuroscience audience, reflecting our experience working with researchers at the NIH and the broader community. These guides walk users through the code, its deployment in typical scenarios, and the interpretation of results.
While the methodology is sound and the discussion of its benefits is good, the interpretation and discussion of the re-analyzed results are poor:
In section 2.3, the authors use FLMM to identify an instance of Simpson’s Paradox in the analysis of Jeong et al. (2022). While this phenomenon is evident in the original authors’ metrics (replotted in Figure 5A), FLMM provides a convenient method to identify these effects while illustrating the deficiencies of the original authors’ approach of concatenating a different number of sessions for each animal and ignoring potential within-session effects.
Our goal was to demonstrate that FLMM provides insight into why the opposing within- and between-session effects occur: the between-session and within-session changes appear to occur at different trial timepoints. Thus, while the AUC metrics applied in Jeong et al. (2022) are enough to show the presence of Simpson’s paradox, it is difficult to hypothesize why the opposing within-/between-session effects occur. An AUC analysis cannot determine at what trial timepoints (relative to licking) those opposing trends occur.
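To make the opposing within-/between-session trends concrete, here is a toy numerical example of Simpson's paradox (schematic Python with made-up numbers, not the Jeong et al. (2022) data): the signal decreases over trials within each session, yet pooling all trials shows a positive overall trend.

```python
import numpy as np

# Five sessions of ten trials each: within-session decrease,
# between-session increase in baseline.
trials = np.arange(10)
sessions = []
for s in range(5):
    baseline = 1.0 + 0.5 * s                  # between-session increase
    sessions.append(baseline - 0.03 * trials)  # within-session decrease

y = np.concatenate(sessions)
# Cumulative trial index across sessions (pooling ignores session structure).
x = np.tile(trials, 5) + np.repeat(10 * np.arange(5), 10)

within_slope = np.polyfit(trials, sessions[0], 1)[0]  # negative within a session
pooled_slope = np.polyfit(x, y, 1)[0]                 # positive when pooled
assert within_slope < 0 < pooled_slope
```

An AUC summary computed on pooled trials would only ever see the positive trend; a model that distinguishes within- from between-session effects recovers both.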
The discussion of this result is muddled. Having identified the paradox, there is some appropriate speculation as to what is causing these opposing effects, particularly the decrease within sessions. In the discussion and appendices, the authors identify (1) changes in satiation/habituation/motivation, (2) the predictability of the rewards (presumably by the click of a solenoid valve), and (3) photobleaching as potential explanations of the decrease within days. Having identified these effects, but without strong evidence to rule all three out, the discussion of whether RPE or ANCCR matches these results is probably moot. In particular, the hypotheses developed by Jeong et al. were for a random (unpredictable) rewards experiment, whereas the evidence points to the rewards being sometimes predictable. The learning of that predictability (e.g. over sessions) and variation in predictability (e.g. by attention level to sounds of each mouse) significantly complicate the analysis. The FLMM analysis reveals the complexity of analyzing what is apparently a straightforward task design.
While we are disappointed to hear the reviewer felt our initial interpretations and discussion were poor, the reviewer brings up an excellent point re: potential reward predictability that we had not considered. They have convinced us that acknowledging this alternative perspective will strengthen the paper, and we have added it into the Discussion. We agree that the ANCCR/RPE model predictions were made for unpredictable rewards and, as the reviewer rightly points out, there is evidence that the animals may sense the reward delivery. After discussing extensively with the authors of Jeong et al. (2022), it is clear that they went to enormous trouble to prevent the inadvertent generation of a CS+, and it is likely changes in pressure from the solenoid (rather than a sound) that may have served as a cue. Regardless of the learning theory one adopts (RPE, ANCCR or others), we agree that this potential learned predictability could, at least partially, account for the increase in signal magnitude across sessions. As this paper is focused on analysis methods, we feel that we can contribute most thoughtfully to the dopamine–learning theory conversation by presenting this explanation in detail, for consideration in future experiments. We have substantially edited this discussion and, as per the reviewer’s suggestion, have qualified our interpretations to reflect the uncertainty in explaining the observed trends.
If this paper is not trying to arbitrate between RPE and ANCCR, as stated in the text, the post hoc reasoning of the authors of Jeong et al 2022 provided in the discussion is not germane. Arbitrating between the models likely requires new experimental designs (removing the sound of the solenoid, satiety controls) or more complex models (e.g. with session effects, measures of predictability) that address the identified issues.
Thank you for this point. We agree with you that, given the scope of the paper, we should avoid any extensive comparison between the models. To address your comment, we have now removed portions of the Discussion that compared RPE and ANCCR. Overall, we agree with the reviewer, and think that future experiments will be needed for conclusively testing the accuracy of the models’ predictions for random (unpredicted) rewards. While we understand that our description of several conversations with the Jeong et al., 2022 authors could have gone deeper, we hope the reviewer can appreciate that inclusion of these conversations was done with the best of intentions. We wish to emphasize that we also consulted with several other researchers in the field when crafting our discussion. We do commend the authors of Jeong et al., 2022 for their willingness to discuss all these details. They could easily have avoided acknowledging any potential incompleteness of their theory by claiming that our results do not invalidate their predictions for a random reward, because the reward could potentially have been predicted (due to an inadvertent CS+ generated from the solenoid pressure). Instead, they emphasized that they thought their experiment did test a random reward, to the extent they could determine, and that our results suggest components of their theory that should be updated. We think that engagement with re-analyses of one’s data, even when findings are at odds with an initial theoretical framing, is a good demonstration of open science practice. For that reason as well, we feel providing readers with a perspective on the entire discussion will contribute to the scientific discourse in this area.
Finally, we would like to reiterate that this conversation is happening at least in part because of our method: by analyzing the signal at every trial timepoint, it provides a formal way to test for the presence of a neural signal indicative of reward delivery perception. Ultimately, this was what we set out to do: help researchers ask questions of their data that may have been harder to ask before. We believe that having a demonstration that we can indeed do this for a “live” scientific issue is the most appropriate way of demonstrating the usefulness of the method.
Of the three potential causes of within-session decreases, the photobleaching arguments advanced in the discussion and expanded greatly in the appendices are not convincing. The data being modeled is a processed signal (∆F/F) with smoothing and baseline correction and this does not seem to have been considered in the argument. Furthermore, the photometry readout is also a convolution of the actual concentration changes over time, influenced by the on-off kinetics of the sensor, which makes the interpretation of timing effects of photobleaching less obvious than presented here and more complex than the dyes considered in the cited reference used as a foundation for this line of reasoning.
We appreciate the nuance of this point, and we have made considerable efforts in the Results and Discussion sections to caution that alternative hypotheses (e.g., photobleaching) cannot be definitively ruled out. In response to your criticism, we have consulted with more experts in the field regarding the potential for bleaching in this data, and it is not clear to us why photobleaching would be visible in one time-window of a trial, but not at another (less than a second away), despite high ∆F/F magnitudes in both time-windows. We do wish to point out that the Jeong et al. (2022) authors were also concerned about photobleaching as a possible explanation. At their request, we analyzed data from additional experiments, collected from the same animals. In most cases, we did not observe signal patterns that seemed to indicate photobleaching. Given the additional scrutiny, we do not think that photobleaching is more likely to invalidate results in this particular set of experiments than it would be in any other photometry experiment. While the role of photobleaching may be more complicated with this sensor than others in the references, that citation was included primarily as a way of acknowledging that it is possible that non-linearities in photobleaching could occur. Regardless, your point is well taken and we have qualified our description of these analyses to express that photobleaching cannot be ruled out.
Within this discussion of photobleaching, the characterization of the background reward experiments used in part to consider photobleaching (appendix 7.3.2) is incorrect. In this experiment (Jeong et al., 2022), background rewards were only delivered in the inter-trial-interval (i.e. not between the CS+ and predicted reward as stated in the text). Both in the authors’ description and in the data, there is a 6s before cue onset where rewards are not delivered and while not described in the text, the data suggests there is a period after a predicted reward when background rewards are not delivered. This complicates the comparison of this data to the random reward experiment.
Thank you for pointing this out! We removed the parenthetical on page 18 of the appendix that incorrectly stated that rewards can occur between the CS+ and the predicted reward.
The discussion of the lack of evidence for backpropagation, taken as evidence for ANCCR over RPE, is also weak.
Our point was initially included to acknowledge that, although our method yields results that conflict with the conclusions described by Jeong et al., 2022 on data from some experiments, on other experiments our method supports their results. Again, we believe that a critical part of re-analyzing shared datasets is acknowledging both areas where new analyses support the original results, as well as those where they conflict with them. We agree with the reviewer that qualifying our results so as not to emphasize support for/against RPE/ANCCR will strengthen our paper, and we have made those changes. We have qualified the conclusions of our analysis to emphasize they are a demonstration of how FLMM can be used to answer a certain style of question with hypothesis testing (how signal dynamics change across sessions), as opposed to providing evidence for/against the backpropagation hypothesis.
A more useful exercise than comparing FLMM to the methods and data of Jeong et al., 2022, would be to compare against the approach of Amo et al., 2022, which identifies backpropagation (data publicly available: DOI: 10.5061/dryad.hhmgqnkjw). The replication of a positive result would be more convincing of the sensitivity of the methodology than the replication of a negative result, which could be a result of many factors in the experimental design. Given that the Amo et al. analysis relies on identifying systematic changes in the timing of a signal over time, this would be particularly useful in understanding if the smoothing steps in FLMM obscure such changes.
Thank you for this suggestion. Your thoughtful review has convinced us that focusing on our statistical contribution will strengthen the paper, and we made changes to further emphasize that we are not seeking to adjudicate between RPE/ANCCR. Given the length of the manuscript as it stands, we could only include a subset of the analyses conducted on Jeong et al., 2022, and had to relegate the results from the Coddington et al., data to an appendix. Realistically, it would be hard for us to justify including analyses from a third dataset, only to have to relegate them to an appendix. We did include numerous examples in our manuscript where we already replicated positive results, in a way that we believe demonstrates the sensitivity of the methodology. We have also been working with many groups at NIH and elsewhere using our approach, in experiments targeting different scientific questions. In fact, one paper that extensively applies our method, and compares the results with those yielded by standard analysis of AUCs, is already published (Beas et al., 2024). Finally, in our analysis guide we describe additional analyses, not included in the manuscript, that replicate positive results. Hence there are numerous demonstrations of FLMM’s performance in less controversial settings. We take your point that our description of the data supporting one theory or the other should be qualified, and we have corrected that. Specifically for your suggestion of Amo et al. 2022, we have not had the opportunity to personally reanalyze their data, but we are already in contact with other groups who have conducted preliminary analyses of their data with FLMM. We are delighted to see this, in light of your comments and our decision to restrict the scope of our paper. We will help them and other groups working on this question to the extent we can.
Recommendations for the Authors:
Reviewer #2:
First, I would like to commend the authors for the clarity of the paper, and for creating an open-source package that will help researchers more easily adopt this type of analysis.
Thank you for the positive feedback!
I would suggest the authors consider adding to the manuscript, either some evidence or some intuition on how feasible would be to use FLMM for very complex model specifications, in terms of computational cost and model convergence.
Thank you for this suggestion. As we described above in response to Reviewer #2’s Public Reviews, we have added in a demonstration of the scalability of the method. Since our initial manuscript submission, we have further increased the package’s speed (e.g., through further parallelization). We are releasing the updated version of our package on CRAN.
From my understanding, this package might potentially be useful not just for photometry data but also for two-photon recordings for example. If so, I would also suggest the authors add to the discussion this potential use.
This is a great point. Our updated manuscript Discussion includes the following:
“The FLMM framework may also be applicable to techniques like electrophysiology and calcium imaging. For example, our package can fit functional generalized LMMs with a count distribution (e.g., Poisson). Additionally, our method can be extended to model time-varying covariates. This would enable one to estimate how the level of association between signals, simultaneously recorded from different brain regions, fluctuates across trial time-points. This would also enable modeling of trials that differ in length due to, for example, variable behavioral response times (e.g., latency-to-press).”
Reviewer #3:
The authors should define ’function’ in context, as well as provide greater detail of the alternate tests that FLMM is compared to in Figure 7.
We include a description of the alternate tests in Appendix Section 5.2. We have updated the Methods Section (Section 4) to introduce the reader to how ‘functions’ are conceptualized and modeled in the functional data analysis literature. Specifically, we added the following text:
“FLMM models each trial’s signal as a function that varies smoothly across trial time-points (i.e., along the “functional domain”). It is thus a type of non-linear modeling technique over the functional domain, since we do not assume a linear model (straight line). FLMM and other functional data analysis methods model data as functions when there is a natural ordering (e.g., time-series data are ordered by time, imaging data are ordered by x–y coordinates) and the data are assumed to vary smoothly along the functional domain (e.g., one assumes a photometry signal takes similar values at nearby time-points in a trial). Functional data analysis approaches exploit this smoothness and natural ordering to capture more information during estimation and inference.”
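As a minimal illustration of the smoothness idea in the quoted text (schematic Python, not fastFMM's estimation machinery), projecting a noisy trial signal onto a small basis borrows strength across neighboring timepoints instead of treating each timepoint independently:

```python
import numpy as np

# A noisy "trial signal" sampled along the functional domain (trial time).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * t)
signal = truth + rng.normal(scale=0.5, size=t.size)

# Represent the signal as a smooth function: least-squares fit of a
# small polynomial basis (6 coefficients instead of 200 free values).
coefs = np.polyfit(t, signal, deg=5)
smooth = np.polyval(coefs, t)

# The smooth fit pools information across nearby timepoints, so it
# tracks the underlying function far better than the raw samples.
err_raw = np.mean((signal - truth) ** 2)
err_smooth = np.mean((smooth - truth) ** 2)
assert err_smooth < err_raw
```

Functional methods use richer bases (e.g., penalized splines) than this toy polynomial, but the principle is the same: smoothness assumptions reduce the effective number of parameters per trial.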
Given the novelty of estimating joint CIs, the authors should be clearer about how this should be reported and how this differs from pointwise CIs (and how this has been done in the past).
We appreciate your pointing this out, as the distinction is nuanced. Our manuscript includes a description of how joint CIs enable one to interpret effects as statistically significant over time-intervals, as opposed to at individual timepoints. Unlike joint CIs, assessing significance with pointwise CIs suffers from multiple-comparisons problems. As a result of your suggestion, we have added a short discussion of this to our analysis guide (Part 1), entitled “Pointwise or Joint 95% Confidence Intervals.” The Methods section of our manuscript also includes the following:
“The construction of joint CIs in the context of functional data analysis is an important research question; see Cui et al. (2021) and references therein. Each point at which the pointwise 95% CI does not contain 0 indicates that the coefficient is statistically significantly different from 0 at that point. Compared with pointwise CIs, joint CIs take into account the autocorrelation of signal values across trial time-points (the functional domain). Therefore, instead of interpreting results at a specific timepoint, joint CIs enable joint interpretations at multiple locations along the functional domain. This aligns with interpreting covariate effects on the photometry signals across time-intervals (e.g., a cue period), as opposed to at a single trial time-point. Previous methodological work has provided functional mixed model implementations of either joint 95% CIs for simple random-effects models (Cui et al., 2021) or pointwise 95% CIs for nested models (Scheipl et al., 2016), but, to our knowledge, does not provide explicit formulas or software for computing joint 95% CIs in the presence of general random-effects specifications.”
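The pointwise/joint distinction in the quoted passage can be sketched with a simulation-based sup-t construction, one common way to obtain joint bands (schematic Python with a made-up covariance; see Cui et al. (2021) for the approach actually used in fastFMM):

```python
import numpy as np

# Estimated coefficient function beta(t) and its covariance over timepoints.
rng = np.random.default_rng(2)
T = 50
t = np.linspace(0, 1, T)
beta_hat = 0.5 * np.sin(2 * np.pi * t)                        # hypothetical estimate
cov = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.1)   # AR-like covariance
se = np.sqrt(np.diag(cov))

# Pointwise multiplier: the usual 1.96 per timepoint.
pointwise = 1.96

# Joint (sup-t) multiplier: 95th percentile of max_t |Z(t)|, where Z is a
# mean-zero Gaussian process with the coefficient's correlation structure.
corr = cov / np.outer(se, se)
L = np.linalg.cholesky(corr + 1e-10 * np.eye(T))
draws = L @ rng.normal(size=(T, 10000))
joint = np.quantile(np.abs(draws).max(axis=0), 0.95)

# Joint bands are wider, giving simultaneous coverage over all timepoints.
assert joint > pointwise
lower_joint = beta_hat - joint * se
upper_joint = beta_hat + joint * se
```

A region where the joint band excludes zero can be reported as a significant time-interval, whereas pointwise exclusions only license claims about individual timepoints.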
The authors identify that many photometry studies are complex nested longitudinal designs, using the cohort of 8 animals used in five task designs of Jeong et al. 2022 as an example. The authors miss the opportunity to illustrate how FLMM might be useful in identifying the effects of subject characteristics (e.g. sex, CS+ cue identity).
This is a fantastic point and we have added the following into the Discussion:
“...[S]ignal heterogeneity due to subject characteristics (e.g., sex, CS+ cue identity) could be incorporated into a model through inclusion of animal-specific random effects.”
In discussing the delay-length change experiment, it would be more accurate to say that proposed versions of RPE and ANCCR do not predict the specific change.
Good point. We have made this change.
Minor corrections:
Panels are mislabeled in Figure 5.
Thank you. We have corrected this.
The Crowder (2009) reference is incorrect, being a review of the book with the book presumably being the correct citation.
Good catch, thank you! Corrected.
In Section 5 (first appendix), the authors could include the alternate spelling ’fibre photometry’ to capture any citations that use British English spelling.
This is a great suggestion, but we did not have time to recreate these figures before re-submission.
Section 7.4 is almost all quotation, though unevenly using the block quotation formatting. It is unclear why such a large quotation is included.
Thank you for pointing this out. We have removed this Appendix section (formerly Section 7.4) as the relevant text was already included in the Methods section.
References
Sofia Beas, Isbah Khan, Claire Gao, Gabriel Loewinger, Emma Macdonald, Alison Bashford, Shakira Rodriguez-Gonzalez, Francisco Pereira, and Mario A Penzo. Dissociable encoding of motivated behavior by parallel thalamo-striatal projections. Current Biology, 34(7):1549–1560, 2024.
Erjia Cui, Andrew Leroux, Ekaterina Smirnova, and Ciprian Crainiceanu. Fast univariate inference for longitudinal functional models. Journal of Computational and Graphical Statistics, 31:1–27, 2021. doi: 10.1080/10618600.2021.1950006.
Huijeong Jeong, Annie Taylor, Joseph R Floeder, Martin Lohmann, Stefan Mihalas, Brenda Wu, Mingkang Zhou, Dennis A Burke, and Vijay Mohan K Namboodiri. Mesolimbic dopamine release conveys causal associations. Science, 378(6626):eabq6740, 2022. doi: 10.1126/science.abq6740. URL https://www.science.org/doi/abs/10.1126/science.abq6740.
Rachel S Lee, Marcelo G Mattar, Nathan F Parker, Ilana B Witten, and Nathaniel D Daw. Reward prediction error does not explain movement selectivity in DMS-projecting dopamine neurons. eLife, 8:e42992, 2019. ISSN 2050-084X. doi: 10.7554/eLife.42992. URL https://doi.org/10.7554/eLife.42992.
Fabian Scheipl, Jan Gertheiss, and Sonja Greven. Generalized functional additive mixed models. Electronic Journal of Statistics, 10(1):1455–1492, 2016. doi: 10.1214/16-EJS1145. URL https://doi.org/10.1214/16-EJS1145.
-
eLife Assessment
This important study presents a statistical framework for the analysis of photometry signals and provides an open-source implementation. The evidence supporting the benefits of the presented functional mixed-effects modeling analysis, as opposed to 1) summary statistics and 2) other pointwise regression models, is convincing, with a thorough comparison with other methods and datasets. This work will be of great interest to researchers using not only fiber photometry but also other time-series data, such as calcium imaging or electrophysiology data, who want to implement trial-by-trial temporal analysis while also taking into account variability within the dataset.
-
Reviewer #1 (Public review):
Summary:
Fiber photometry has become a very popular tool for recording neuronal activity in freely behaving animals. Despite the number of papers published with the method, as the authors rightly note, there are currently no standardized ways to analyze the data produced. Moreover, most data analyses are confined to simple measurements of averaged activity and, by doing so, erase valuable information encoded in the data. The authors offer an approach based on functional linear mixed modeling, in which, beyond changes in overall activity, various functions of the data can also be analyzed. More in-depth analysis, more variables taken into account, and better statistical power all lead to higher-quality science.
Strengths:
The framework the authors present is solid and well explained. By reanalyzing formerly published data, the authors also further increase the significance of the proposed tool opening new avenues for reinterpreting already collected data. They also made a convincing case showing that the proposed algorithm works on data with different preprocessing backgrounds.
-
Reviewer #2 (Public review):
Summary:
This work describes a statistical framework that combines functional linear mixed modeling with joint 95% confidence intervals, which improves statistical power and provides less conservative and more robust statistical inferences than previous approaches. Pointwise linear regression analysis has been used extensively to analyze time-series signals from a wide range of neuroscience recording techniques, with recent studies applying it to photometry data. The novelty of this study lies in 1) the introduction of joint 95% confidence intervals for statistical testing of functional mixed models with nested random effects, and 2) the provision of an open-source R package implementing this framework. This study also highlights, by reanalyzing two other studies, how summary statistics, as opposed to trial-by-trial analysis, can obscure or even change the direction of statistical results.
Strengths:
The open-source R package, which uses a syntax similar to that of the lme4 package, together with its high fitting speed and low memory footprint even for complex models, enhances the accessibility and adoption of this framework by other researchers.
The reanalysis of two studies using summary statistics on photometry data (Jeong et al., 2022; Coddington et al., 2023) highlights how trial-by-trial analysis at each time-point in the trial can reveal information obscured by averaging across trials. Furthermore, this work also exemplifies how session and subject variability can lead to different conclusions when not considered.
This study also showcases the statistical robustness of FLMM by comparing this method to fitting pointwise linear mixed models and performing t-tests with Benjamini–Hochberg correction, as done by Lee et al. (2019).
-
Reviewer #3 (Public review):
Summary:
Loewinger et al. extend a previously described framework (Cui et al., 2021) to provide new methods for statistical analysis of fiber photometry data. The methodology combines functional regression with linear mixed models, allowing inference on complex study designs that are common in photometry studies. To demonstrate its utility, they reanalyze datasets from two recent fiber photometry studies into mesolimbic dopamine. Then, through simulation, they demonstrate the superiority of their approach compared to other common methods.
Strengths:
The statistical framework described provides a powerful way to analyze photometry data and potentially other similar signals. The provided package makes this methodology easy to implement and the extensively worked examples of reanalysis provide a useful guide to others on how to correctly specify models.
Modeling the entire trial (functional regression) removes the need to choose appropriate summary statistics, removing the opportunity to introduce bias, for example in searching for optimal windows in which to calculate the AUC. This is demonstrated in the re-analysis of Jeong et al., 2022, in which the AUC measures presented masked important details about how the photometry signal was changing. There is an appropriate level of discussion of the interpretation of the reanalyzed data that highlights the pitfalls of other methods and the usefulness of their methods.
The authors' use of linear mixed models allows for the estimation of random effects, which are an important consideration given the repeated-measures design of most photometry studies.
The authors provide a useful guide for how to practically use and implement their methods in an easy-to-use package. These methods should have wide applicability to those who use photometry or similar methods. The development of this excellent open-source software is a great service to the wider neuroscience community.
-
-
80000hours.org
-
three global problems that you think most need additional people working on them
- Global cooperation incentives: Find incentives that motivate countries and MNCs to cooperate and act pro-socially.
- Identify and promote efficient structures of cooperation on various levels (international, national, township, family etc.)
- How to alleviate the suffering of the poorest
-
-
52.2.80.92:1336 (Edm8ker4)
-
Share Your Joy
@emilio Not aligned with the Google Doc.
-
Get in touch with our team,
This should be configured as a mailto link that opens the user's default email client when clicked, directing them to send an email to hello@edm8ker.com.
-
Write to us!
This button should be configured as a mailto link that opens the user's default email client when clicked, directing them to send an email to hello@edm8ker.com.
-
Something for Everyone
The section after this point is missing. @emilio ref to https://docs.google.com/document/d/1RNKQyGQX3xPUyXYuKRIoPodcYTnDQHCrh2kn2cPf19c/edit
-
-
www.biorxiv.org
-
eLife Assessment
This article presents valuable findings on the impact of climate change on odonates, integrating phenological and range shifts to broaden our understanding of biodiversity change. The study leverages extensive natural history data, offering a combined analysis of temporal trends in phenology and distribution and their potential drivers. The support for the findings is solid, though additional clarification regarding the methods and alternative sensitivity analyses could make the conclusions stronger.
-
-
52.2.80.92:1336 (Edm8ker3)
-
Why Partner with edm8ker?
The headline is just "Why edm8ker?"
-
Our Collaborative Approach
The points aren't in the order of preference (ref: Figma). Also, the heading is just "People".
-
Here’s how we work with you.
It should be on the next line.
-
-
www.biorxiv.org
-
eLife Assessment
This study poses an important step forward in understanding the brain-network embedding of beta oscillations. The study advances our circuit-level understanding of the pathophysiology associated with dopaminergic alterations in psychiatric or neurological disorders. The study provides compelling evidence that beta oscillations across the neocortex and basal ganglia map onto shared functional and structural networks that show significant positive correlations with dopamine receptors.
-
Reviewer #1 (Public review):
The study by Chikermane and colleagues investigates the functional, structural, and dopaminergic network substrates of cortical beta oscillations (13-30 Hz). The major strength of the work lies in the methodology taken by the authors, namely multimodal lesion network mapping. First, using invasive electrophysiological recordings from healthy cortical territories of epileptic patients, they identify regions with the highest beta power. Next, they leverage open-access MRI data and PET atlases and use the identified high-beta regions as seeds to find (1) the whole-brain functional and structural maps of regions that form the putative underlying network of high-beta regions and (2) the spatial distribution of dopaminergic receptors that show correlation with nodal connectivity of the identified networks. These steps are achieved by generating aggregate functional, structural, and dopaminergic network maps using the lead-DBS toolbox, and by contrasting the results with those obtained from high-alpha regions. The main findings are:
(1) Beta power is strongest across frontal, cingulate, and insular regions in invasive electrophysiological data, and these regions map onto a shared functional and structural network.<br /> (2) The shared functional and structural networks show significant positive correlations with dopamine receptors across cortex and basal ganglia (which is not the case for alpha, where correlations are found with GABA).
-
Reviewer #2 (Public review):
Summary:
This is a very interesting paper that leveraged several publicly available datasets: invasive cortical recordings in epilepsy patients, functional and structural connectomic data, and PET data related to dopaminergic and GABA-ergic synapses. These were combined to create a unified hypothesis of beta band oscillatory activity in the human brain. They show that beta frequency activity is ubiquitous, and does not just occur in sensorimotor areas. Cortical regions where beta oscillations predominated had high connectivity to regions that are high in dopamine reuptake.
Strengths:
The authors leverage and integrate three publicly available human brain datasets in a creative way. These public datasets are powerful tools for human neuroscience, and it is innovative to combine these three types of data into a common brain space to generate novel findings and hypotheses. Findings are nicely controlled by separately examining cortical regions where alpha predominates (which have a different connectivity pattern). GABA uptake from PET studies is used as a control for the specificity of the relationship between beta activity and dopamine uptake. There is much interest in synchronized oscillatory activity as a mechanism of brain function and dysfunction, but the field is short on unifying hypotheses of why particular rhythms predominate in particular regions. This paper contributes nicely to that gap. It is ambitious in generating hypotheses, particularly that modulation of beta activity may be used as a "proxy" for modulating phasic dopamine release.
Weaknesses:
As the authors point out, the use of normative data is excellent for exploring hypotheses but does not address or explore individual variations which could lead to other insights. It is also biased to resting state activity; maps of task related activity (if they were available) might show different findings.
Challenges:
In the Discussion, the authors do a fairly deep dive into the implications of their findings, particularly with respect to the hypothesis that beta band activity "preserves the status quo", and with respect to the use of beta band activity in controlling brain-machine interfaces. Mechanistically and theoretically oriented readers might gain rewarding new insights by a careful read of the discussion, but full appreciation of their deep dive may require real time interaction with the authors.
-
Reviewer #3 (Public review):
Summary:
In this paper, Chikermane et al. leverage a large open dataset of intracranial recordings (sEEG or ECoG) to analyze resting state (eyes closed) oscillatory activity from a variety of human brain areas. The authors identify a dominant proportion of channels in which beta band activity (12-30Hz) is most prominent, and subsequently seek to relate this to anatomical connectivity data by using the sEEG/ECoG electrodes as seeds in a large set of MRI data from the human connectome project. This reveals separate regions and white matter tracts for alpha (primarily occipital) and beta (prefrontal cortex and basal ganglia) oscillations. Finally, using a third available dataset of PET imaging, the authors relate the parcellated signals to dopamine signaling as estimated by spatial uptake patterns of dopamine, and reveal a significant correlation between the functional connectivity maps and the dopamine reuptake maps, suggesting a functional relationship between the two.
Strengths:
Overall, I found the paper well justified, focused on an important topic and interesting. The authors' use of 3 different open datasets was creative and informative, and it significantly adds to our understanding of different oscillatory networks in the human brain, and their more elusive relation with neuromodulator signaling networks by adding to our knowledge of the association between beta oscillations and dopamine signaling. Even my main comments about the lack of a theta network analysis and discussion points are relatively minor, and I believe this paper is valuable and informative.
Weaknesses:
The analyses were adequate, and the authors cleverly leverage these different datasets to build an interesting story. The main aspect I found missing (in addition to some discussion items, see below) was an examination of the theta network. Theta oscillations are involved in a number of cognitive processes including spatial navigation and memory, and have been proposed to have different potential originating brain regions; it would be informative to see what their anatomical networks (e.g., as in Fig. 2) look like under the authors' analyses.
The authors devote a significant portion of the discussion to relating their findings to a popular hypothesis for the function of beta oscillations, the maintenance of the "status quo", mostly in the context of motor control. As the authors acknowledge, given the static nature of the data and lack of behavior, this interpretation remains largely speculative and I found it a bit too far-reaching given the data shown in the paper. In contrast, I missed a more detailed discussion on the growing literature indicating a role for beta in mood (e.g. in Kirkby et al. 2018), especially given the apparent lack of hippocampal and amygdala involvement in the paper, which was surprising.
-
-
askubuntu.com
-
Can I (how) show the pinned items in the Dock as text? (For me, all icons are "mystery meat".)
That would be very interesting. I don't like the icons, and I like how applications look in Dmenu and Rofi.
-
-
52.2.80.92:1336 (Edm8ker5)
-
Expand Your
@emilio (Ref to google doc: https://docs.google.com/document/d/18HtK5PbMVasJpUlBrFwmNHxsX7VhihJ65Sh9xuTGEM8/edit#heading=h.7mou8kj9rv0h) This is the updated section - 4.4.5 Your Classroom's One-Stop Maker Shop Discover a treasure trove of tools, activities, and inspiration to elevate your maker education journey. From engaging blog posts and free STEM challenges to expert guidance and community support, edm8ker has everything you need to foster creativity and innovation in your classroom. CTA Button: Explore Our Resources Now!
-
Ready to inspire the next generation of thinkers and makers? Start Exploring Our Programs Today!
@emilio There's one line that's missing and CTA Button content is different.
-
Discover Repair Skills and Reduce Waste, One Fix at a Time
The Description is - Don’t toss that broken appliance—repair it! @emilio @arif
-
Ready-to-Deliver Maker Programs
@emilio @Arif Name of the programme is "Future-ready Maker Programs".
-
Explore Our Ready-to-Deliver Maker Programs
The other programs have not been added. Ref: Figma
-
-
programs.clearerthinking.org
-
career quiz: In which career can you do the most good?
The link is broken.
Our career quiz is no longer available.
-
-
Local file
-
Lack of concern
On the other hand, this situation creates an unusual diversity within a church that typically aims for consistency, especially in how services are conducted. The church's limited guidelines for producing online Masses—seen in the lack of protocols from dioceses or bishops' conferences—suggest a tendency to overlook this diversity.
-
local priests actually have a very circumscribed amount of freedom regarding liturgical practices
circumscribed = restricted, but in St. Paul parish the priest has authority to change the prayers.
-
as really real community in contradistinction to the virtual (read: less real or even unreal) community online
Christians regard the offline community as the real one, over the virtual (less real or even unreal) one.
-
pandemic has made digital culture and technology a new interest for many people in the church
the pandemic has forced the church to consider digital culture
-
the smoothness I touch connects me to her
first paragraph - in mass
-
-
academic-oup-com.ezproxy.leidenuniv.nl
-
Adjudication is the effort to resolve a dispute by determining, amid the clamour of rival claims, what is just.
definition of adjudication according to what is just.
-
-
52.2.80.92:1336 (Edm8ker3)
-
Have More Questions? Dive Into Our FAQs!
The CTA content and location were edited but aren’t showing up in Figma. Astitva will need to update this in the design.
-
Why Edm8ker is Your Go-To Partner for Maker Education
CTA Button missing at the bottom of the section. Ref: Figma
-
Makerspace Consultancy & Design Services
NOTE FOR THE IMAGE: Please include images of makerspaces designed by us/ or actual makerspace. @Terence, could you let me know where I can find these photos? @Astitva can you take this up pls?
-
-
www.americanyawp.com
-
British North American colonists fashioned increasingly complex societies with unique religious cultures, economic ties, and political traditions.
This comment answers the question of who fashioned the complex societies. This is significant as it is important to know who is doing what. Without knowing that British colonists fashioned complex societies it is harder to understand the lesson. Knowing the British formed tough societies makes it easier to know what their religion and economic ties were like. The complexity of their society is what makes their style of life so unique.
-
-
www.etymonline.com
-
onomatology
find the name for the field of study pertaining to answering the question
What's in a name?
-
-
www.americanyawp.com
-
White’s English perspective comes through: archaeological evidence shows that these houses were usually situated around communal gathering places or moved next to fields under cultivation not ordered in European-style rows.
This answers the question of what other accounts we need to consider. While it is really important to think about the Native perspectives, it is also important to consider what White might have been thinking. In this line we learn where White's perspective comes from and what shapes it. Knowing this can help us be more open to hearing from White. White gains his perspective from archaeological evidence.
-
Native settlements were usually organized around political, economic, or religious activit
Another thing that has me curious is to see what economic, political, and religious activities were most prominent. Some explorers were Christian, but among the Natives I would be interested in knowing more about their practices. I truly believe in knowing both sides of history, and I think this applies here. With John White going into a new area, the reader knowing more about the Natives' practices would help us empathize and see it from a new perspective. Both sides need to be treated accordingly.
-
-
memorysystems.substack.com
-
many of the things we take for granted about modern work are actually just artifacts of folder-based thinking: hierarchical org charts (folders for people); project-based work organization (folders for tasks); departmental divisions (folders for functions)
Folders were developed to model trees and other hierarchical structures, not the other way around. These were all informed by the hierarchical societies that humans have lived in, where a king or chief has a family and thus a tree of succession.
-
-
www.etymonline.com
-
noetic (adj.)
"pertaining to, performed by, or originating in the intellect," 1650s, from Greek noētikos "intelligent," from noēsis "a perception, intelligence, thought" (see noesis). Related: Noetical (1640s).
-
-
learn-eu-central-1-prod-fleet01-xythos.content.blackboardcdn.com
-
– the very best definition of a story: 'Once upon a time, in such and such a place, something happened.'
what the story exactly is
-
-
www.biorxiv.org
-
eLife Assessment
This important study substantially advances our understanding of nocturnal animal navigation and the ways that animals use polarized light. The evidence supporting the conclusions is convincing, with elegant behavioural experiments in actively navigating ants. The work will be of interest to biologists working on animal navigation or sensory ecology.
-
Reviewer #1 (Public review):
Freas et al. investigated whether the exceedingly dim polarization pattern produced by the moon can be used by animals to guide a genuine navigational task. The sun and moon are celestial beacons for directional information, but they can be obscured by clouds, canopy, or the horizon. However, even when hidden from view, these celestial bodies provide directional information through the polarized light patterns in the sky. While the sun's polarization pattern is famously used by many animals for compass orientation, until now it has never been shown that the extremely dim polarization pattern of the moon can be used for navigation. To test this, Freas et al. studied nocturnal bull ants, by placing a linear polarizer in the homing path of a freely navigating ant, shifted 45 degrees relative to the moon's natural polarization pattern. They recorded the homing direction of an ant before entering the polarizer, under the polarizer, and again after leaving the area covered by the polarizer. The results very clearly show that ants walking under the linear polarizer change their homing direction by about 45 degrees in comparison to the homing direction under the natural polarization pattern, and change it back after leaving the area covered by the polarizer. These results can be repeated throughout the lunar month, showing that bull ants can use the moon's polarization pattern even under crescent moon conditions. Finally, the authors show that the degree to which the ants change their homing direction depends on the length of their home vector, just as it does for the solar polarization pattern.
The behavioral experiments are very well designed, and the statistical analyses are appropriate for the data presented. The authors' conclusions are nicely supported by the data and clearly show nocturnal bull ants use the dim polarization pattern of the moon for homing, in the same way many animals use the sun's polarization pattern during the day. This is the first proof of the use of the lunar polarization pattern in any animal.
Comments on revised version:
The authors have addressed all of my previous comments and suggestions. I am happy with the way the manuscript has improved and have no further comments.
-
Reviewer #2 (Public review):
Summary:
The authors aimed to understand whether polarised moonlight could be used as a directional cue for nocturnal animals homing at night, particularly at times of night when polarised light is not available from the sun. To do this, the authors used nocturnal ants, and previously established methods, to show that the walking paths of ants can be altered predictably when the angle of polarised moonlight illuminating them from above is turned by a known angle (here +/- 45 degrees).
Strengths:
The behavioural data are very clear and unambiguous. The results clearly show that when the angle of downwelling polarised moonlight is turned, ants turn in the same direction. The data also clearly show that this result is maintained even for different phases (and intensities) of the moon, although during the waning cycle of the moon the ants' turn is considerably less than may be expected.
Impact:
The authors have discovered that nocturnal bull ants, while homing back to their nest holes at night, are able to use the dim polarised light pattern formed around the moon for path integration. Even though similar methods have previously shown the ability of dung beetles to orient along straight trajectories for short distances using polarised moonlight, this the first evidence of an animal that uses polarised moonlight in homing. This is quite significant, and their findings are well supported by their data.
Comments on revised version:
The authors have made a good effort to accommodate my suggestions for improvement (and from what I can tell, those of the other reviewers). I have no further comments.
-
Reviewer #3 (Public review):
Summary:
This manuscript presents a series of experiments aimed at investigating orientation to polarized lunar skylight in a nocturnal ant, the first report of its kind that I am aware of.
Strengths:
The study was conducted carefully and is clearly explained here.
Comments on revised version:
The manuscript is much improved and will make an excellent contribution to the field.
-
Author response:
The following is the authors’ response to the previous reviews.
Public Reviews:
Reviewer #1 (Public Review):
Freas et al. investigated whether the exceedingly dim polarization pattern produced by the moon can be used by animals to guide a genuine navigational task. The sun and moon have long been celestial beacons for directional information, but they can be obscured by clouds, canopy, or the horizon. However, even when hidden from view, these celestial bodies provide directional information through the polarized light patterns in the sky. While the sun's polarization pattern is famously used by many animals for compass orientation, until now it has never been shown that the extremely dim polarization pattern of the moon can be used for navigation. To test this, Freas et al. studied nocturnal bull ants, by placing a linear polarizer in the homing path of freely navigating ants, shifted 45 degrees relative to the moon's natural polarization pattern. They recorded the homing direction of an ant before entering the polarizer, under the polarizer, and again after leaving the area covered by the polarizer. The results very clearly show that ants walking under the linear polarizer change their homing direction by about 45 degrees in comparison to the homing direction under the natural polarization pattern, and change it back after leaving the area covered by the polarizer. These results can be repeated throughout the lunar month, showing that bull ants can use the moon's polarization pattern even under crescent moon conditions. Finally, the authors show that the degree to which the ants change their homing direction depends on the length of their home vector, just as it does for the solar polarization pattern.
The behavioral experiments are very well designed, and the statistical analyses are appropriate for the data presented. The authors' conclusions are nicely supported by the data and clearly show that nocturnal bull ants use the dim polarization pattern of the moon for homing, in the same way many animals use the sun's polarization pattern during the day. This is the first proof of the use of the lunar polarization pattern in any animal.
Reviewer #2 (Public Review):
Summary:
The authors aimed to understand whether polarised moonlight could be used as a directional cue for nocturnal animals homing at night, particularly at times of night when polarised light is not available from the sun. To do this, the authors used nocturnal ants, and previously established methods, to show that the walking paths of ants can be altered predictably when the angle of polarised moonlight illuminating them from above is turned by a known angle (here +/- 45 degrees).
Strengths:
The behavioural data are very clear and unambiguous. The results clearly show that when the angle of downwelling polarised moonlight is turned, ants turn in the same direction. The data also clearly show that this result is maintained even for different phases (and intensities) of the moon, although during the waning cycle of the moon the ants' turn is considerably less than may be expected.
Weaknesses:
The final section of the results - concerning the weighting of polarised light cues into the path integrator - lacks clarity and should be reworked and expanded in both the Methods and the Results (also possibly with an extra methods figure). I was really unsure of what these experiments were trying to show or what the meaning of the results actually are.
Rewrote these sections and added figure panel to Figure 6.
Impact:
The authors have discovered that nocturnal bull ants, while homing back to their nest holes at night, are able to use the dim polarised light pattern formed around the moon for path integration. Even though similar methods have previously shown the ability of dung beetles to orient along straight trajectories for short distances using polarised moonlight, this is the first evidence of an animal that uses polarised moonlight in homing. This is quite significant, and their findings are well supported by their data.
Reviewer #3 (Public Review):
Summary:
This manuscript presents a series of experiments aimed at investigating orientation to polarized lunar skylight in a nocturnal ant, the first report of its kind that I am aware of.
Strengths:
The study was conducted carefully and is clearly explained here.
Weaknesses:
I have only a few comments and suggestions, that I hope will make the manuscript clearer and easier to understand.
Time compensation or periodic snapshots
In the introduction, the authors compare their discovery with that in dung beetles, which have only been observed to use lunar skylight to hold their course, not to travel to a specific location as the ants must. It is not entirely clear from the discussion whether the authors are suggesting that the ants navigate home by using a time-compensated lunar compass, or that they update their polarization compass with reference to other cues as the pattern of lunar skylight gradually shifts over the course of the night - though in the discussion they appear to lean towards the latter without addressing the former. Any clues in this direction might help us understand how ants adapted to navigate using solar skylight polarization might adapt to use lunar skylight polarization and account for its different schedule. I would guess that the waxing and waning moon data can be interpreted to this effect.
Added a paragraph discussing this distinction in mechanisms and the limits of the current data set in untangling them. An interesting topic for a follow up to be sure.
Effects of moon fullness and phase on precision
As well as the noted effect on shift magnitudes, the distributions of exit headings and reorientations also appear to differ in their precision (i.e., mean vector length) across moon phases, with somewhat shorter vectors for smaller fractions of the moon illuminated. Although these distributions are a composite of the two distributions of angles subtracted from one another to obtain these turn angles, the precision of the resulting distribution should be proportional to the original distributions. It would be interesting to know whether these differences result from poorer overall orientation precision, or more variability in reorientation, on quarter moon and crescent moon nights, and to what extent this might be attributed to sky brightness or degree of polarization.
See below for response to this and the next reviewer comment
N.B. The Watson-Williams tests for difference in mean angle are also sensitive to differences in sample variance. This can be ruled out with another variety of the test, also proposed by Watson and Williams, to check for unequal variances, for which the F statistic is F = ((n2 − 1)(n1 − R1)) / ((n1 − 1)(n2 − R2)), or its inverse, whichever is > 1.
We have looked at the amount of variance from the mean heading direction in terms of both the shifts and the reorientations and found no significant difference in variance between all relevant conditions. It is possible (and probably likely) that with a higher n we might find these differences but with the current data set we cannot make statistical statements regarding degradations in navigational precision.
As an additional analysis to address the Watson-Williams test‘s sensitivity to changes in variance, we have added var test comparisons for each of the comparisons, which is a well-established test to compare variance changes. None of these were significantly different, suggesting the observed differences in the WW tests are due to changes in the mean vector and not the distribution. We have added this test to the text.
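The variance-ratio form of the test quoted above can be sketched as follows. This is a hypothetical helper, not code from the paper; it assumes `n` is a sample size and `R` the resultant vector length of each circular sample, following the formula stated in the reviewer's note.

```python
import math

def resultant_length(angles_deg):
    """Resultant vector length R of a sample of angles given in degrees."""
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.hypot(c, s)

def ww_variance_f(n1, R1, n2, R2):
    """Watson-Williams variance-ratio statistic as quoted in the review:
    F = ((n2 - 1)(n1 - R1)) / ((n1 - 1)(n2 - R2)), or its inverse,
    whichever is >= 1."""
    f = ((n2 - 1) * (n1 - R1)) / ((n1 - 1) * (n2 - R2))
    return f if f >= 1 else 1.0 / f
```

Equal concentrations give F = 1; the more the two samples' dispersions differ, the larger F becomes, so a large F flags the unequal-variance case the reviewer warns about.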
Recommendations for the authors:
Reviewer #1 (Recommendations For The Authors):
I have only very few minor suggestions to improve the manuscript:
(1) While I fully agree with the authors that their study, to the best of my knowledge, provides the first proof (in any animal) of the use of the moon's polarization pattern, the many repetitions of this fact disturb the flow of the text and could be cut at several instances.
Yes, it is indeed repeated to an annoying degree.
We have removed these beyond bookending mentions (Abstract and Discussion).
(2) In my opinion, the authors did not change the "ambient polarization pattern" when using the linear polarization filter (e.g., l. 55, 170, 177 ...). The linear polarizer presents an artificial polarization pattern with a much higher degree of polarization in comparison to the ambient polarization pattern. I would suggest re-phrasing this, to emphasize the artificial nature of the polarization pattern under the polarizer.
We have made these suggested changes throughout the text to clarify; we no longer say the ambient pattern was changed, and instead emphasise the artificial nature of the pattern under the polarizer.
(3) Line 377: I do not see the link between the sentence and Figure 7
Changed where in the discussion we refer to Figure 7.
(4) Figure 7 upper part: In my opinion, the upper part of Figure 7 does not add any additional value to the illustration of the data as compared to Figure 5 and could be cut.
We thought it might be easier for some readers to see the shifts as a dial representation, with the shift magnitude converted to 0-100%, rather than the shifts in Figure 5. This makes it somewhat like a graphical abstract summarising the whole study.
I agree that Figure 5 tells the same story, but a reader with little background in directional stats might find Figure 7 more intuitive. This was the intent at least.
If it becomes a sticking point, then we can remove the upper portion.
Reviewer #2 (Recommendations For The Authors):
MINOR CORRECTIONS AND QUERIES
Line 117: THE majority
Corrected
Lines 129-130: Do you have a reference to support this statement? I am unaware of experiments that show that homing ants count their steps, but I could have missed it.
We have added the references that unpack the ant pedometer.
Line 140: remove "the" in this line.
Removed
Line 170: We need more details here about the spectral transmission properties of the polariser (and indeed which brand of filter, etc.). For instance, does it allow the transmission of UV light?
Added
Line 239: "...tested identicALLY to ...."
Corrected
Lines 242-258 (Vector testing): I must admit I found the description of these experiments very difficult to follow. I read this section several times and felt no wiser as a result. I think some thought needs to be given to better introduce the reader to the rationale behind the experiment (e.g., start by expanding lines 243-246, and maybe add a methods figure that shows the different experimental procedures).
I have rewritten this section of the methods to clearly state the experiment rationale and to be clearer as to the methodology.
Also added a methods panel to Figure 6.
Line 247: "reoriented only halfway". What does this mean? Do you mean with half the expected angle?
Yes, this is a bit unclear. We have altered for clarity:
‘only altered their headings by about half of the 45° e-vector shift (25.2°± 3.7°), despite being tested on near-full-moon nights.’
Results section (in general): In Figure 1 (which is a very nice figure!) you go to all the trouble of defining b degrees (exit headings) and c degrees (reorientation headings), which are very intuitive for interpreting the results, and then you totally abandon these convenient angles in favour of an amorphous Greek symbol Phi (Figs. 2-6) to describe BOTH exit and reorientation headings. Why?? It becomes even more confusing when headings described by Phi can be typically greater than 300 degrees in the figures, but they are never even close to this in the text (where you seem to have gone back to using the b degrees and c degrees angles, without explicitly saying so). Personally, I think the b degrees and c degrees angles are more intuitive (and should be used in both the text and the figures), but if you do insist on using Phi then you should use it consistently in both the text and the figures.
Replaced Phi with b° and c° for both figures and in the text.
Finally, for reorientation angles in Figure 4A, you say that the angle is 16.5 degrees. This angle should have been 143.5 degrees to be consistent with other figures.
Yes, the reorientation was erroneously copied from the shift data (it is identical in both the +45 shift and reorientation for Figure 4A). This has now been corrected
Line 280, and many other lines: Wherever you refer to two panels of the same figure, they should be written as (say) Figure 2A, B not Figure 2AB.
Changed as requested throughout the text.
Line 295 (Waxing lunar phases): For these experiments, which nest are you using? 1 or 2?
We have added that this is nest 1.
Figure 3B: The title of this panel should be "Waxing Crescent Moon" I think.
Ah yes, this is incorrect in the original submission. I have fixed this.
Lines 312-313: Here it sounds as though the ants went right back to the full +/- 45 degrees orientations when they clearly didn't (it was -26.6 degrees and 189.9 degrees). Maybe tone the language down a bit here.
Changed this to make clear the orientation shift is only ‘towards’ the ambient lunar e-vector.
Line 327: Insert "see" before "Figure 5"
Added
Line 329: See comment for Line 295.
We have added that this is nest 1.
Lines 357-373 (Vector testing): Again, because of the somewhat confusing methods section describing these experiments, these results were hard to follow, both here and in the Discussion. I don't really understand what you have shown here. Re-think how you present this (and maybe re-working the Methods will be half the battle won).
I have rewritten these sections to try to make clear that these are ants tested with different vector lengths (6m vs. 2m) at the same location. Hopefully this is much clearer, but if these portions remain a bit confusing, a full rename of the conditions may be in order. Something like "long vector" and "short vector" would help, but comes with the problem of not truly describing the purpose of the test, which is to control for location; hence the current condition names. As it stands, I hope the new clarifications adequately describe the reasoning while keeping the condition names. Of course, I am happy to make more changes here, as making this clear to readers is important for driving home that the path integrator is in play.
See current change to results as an example: ‘Both foragers with a long ~6m remaining vector (Halfway Release), or a short ~2m remaining vector (Halfway Collection & Release), tested at the same location, exhibited significant shifts to the right of initial headings when the e-vector was rotated clockwise +45°.’
Line 361: I think this should be 16.8 not 6.8
Yes, you are correct. Fixed in text (16.8).
Line 365: I think this should be -12.7 not 12.7
Yes, you are correct. Fixed in text (–12.7).
Line 408: "morning twilight". Should this be "morning solar twilight"? Plus "M midas" should be "M. midas"
Added and fixed respectively.
Line 440. "location" is spelt wrong.
Fixed spelling.
Line 444: "...WITH longer accumulated vectors, ..."
Added ‘with’ to sentence.
Line 447: Remove "that just as"
Removed.
Line 448: "Moonlight polarised light" should be "Polarised moonlight"
Corrected.
Lines 450-453: This sentence makes little sense scientifically or grammatically. A "limiting factor" can't be "accomplished". Please rephrase and explain in more detail.
This sentence has been rephrased:
‘The limiting factors to lunar cue use for navigation would instead be the ant’s detection thresholds for absolute light intensity, polarisation sensitivity and spectral sensitivity. Moonlight is less UV-rich than direct sunlight, and the spectrum changes across the lunar cycle (Palmer and Johnsen 2015).’
Line 474: Re-write as "... due to the incorporation of the celestial compass into the path integrator..."
Added.
Reviewer #3 (Recommendations For The Authors):
Minor comments
Line 84 I am not sure that we can infer attentional processes in orientation to lunar skylight, at least it has not yet been investigated.
Yes, this is a good point. We have changed ‘attend’ to ‘use’.
Line 90 This description of polarized light is a little vague; what is meant by the phrase "waves which occur along a single plane"? (What about the magnetic component? These waves can be redirected, are they then still polarized? Circular polarization?). I would recommend looking at how polarized light is described in textbooks on optics.
Response: We have rewritten the polarised light section to be clearer using optics and light physics for background.
Line 92 The phrase "e-vector" has not been described or introduced up to this point.
We now introduce e-vector and define it.
‘Polarised light comprises light waves which occur along a single plane and are produced as a by-product of light passing through the upper atmosphere (Horváth & Varjú 2004; Horváth et al., 2014). The scattering of this light creates an e-vector pattern in the sky, which is arranged in concentric circles around the sun or moon's position with the maximum degree of polarisation located 90° from the source. Hence when the sun/moon is near the horizon, the pattern of polarised skylight is particularly simple with uniform direction of polarisation approximately parallel to the north-south axes (Dacke et al., 1999, 2003; Reid et al. 2011; Zeil et al., 2014).’
Happy to make further changes as well.
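The e-vector geometry quoted above follows from single Rayleigh scattering. As a minimal illustrative sketch (the function name and the ideal `dop_max` ceiling are assumptions for illustration, not from the manuscript):

```python
import math

def rayleigh_dop(scatter_angle_deg, dop_max=1.0):
    """Degree of polarisation under single Rayleigh scattering.

    scatter_angle_deg: angular distance from the sun/moon (degrees).
    dop_max: ceiling on polarisation (1.0 only for ideal scattering;
             real skies peak well below this).
    """
    th = math.radians(scatter_angle_deg)
    return dop_max * math.sin(th) ** 2 / (1 + math.cos(th) ** 2)

# Polarisation vanishes at the light source and peaks 90 degrees
# away from it, matching the concentric e-vector pattern described
# in the quoted text.
for angle in (0, 45, 90):
    print(angle, rayleigh_dop(angle))
```

With the source near the horizon, the 90°-from-source band of maximum polarisation runs overhead, which is why the skylight pattern is then approximately uniform and simple, as the revised text states.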
Line 107 Diurnal dung beetles can also orient to lunar skylight if roused at night (Smolka et al., 2016), provided the sky is bright enough. Perhaps diurnal ants might do the same?
Added the diurnal dung beetles mention as well as the reference.
Also, testing diurnal bull ants in this way is a very good suggestion.
Line 146 Instead of lunar calendar the authors appear to mean "lunar cycle".
Changed
Line 165 In Figure 1B, it looks like visual access to the sky was only partly "unobstructed". Indeed foliage covers as least part of the sky right up to the zenith.
We have added that the sky is partially obstructed.
Line 179 This could also presumably be checked with a camera?
For this testing we tried to keep equipment to a minimum for a single researcher walking to and from the field site, given the lack of public transport between 1 and 4 am. But yes, for future work a camera-based confirmation system would be easier.
Line 243 The abbreviation "PI" has not been described or introduced up to this point.
Changed to ‘path integration derived vector lengths….’
Line 267 The method for comparing the leftwards and rightwards shifts should be described in full here (presumably one set of shifts was mirrored onto the other?).
We have added the below description to indicate the full description of the mirroring done to counterclockwise shifts.
‘To assess shift magnitude between the −45° and +45° foragers within conditions, we calculated the mirror of the shift in each −45° condition, allowing shift-magnitude comparisons within each condition. Mirroring the −45° conditions was calculated by mirroring each shift across the 0° to 180° plane, which was then compared to the corresponding unaltered +45° condition.’
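As a concrete sketch of that mirroring step (the helper name is hypothetical; the actual analysis code is not shown in the manuscript):

```python
def mirror_shift(shift_deg):
    """Mirror a heading change across the 0-180 degree plane.

    A counterclockwise shift observed in a -45 degree condition
    becomes its clockwise counterpart, so its magnitude can be
    compared directly with the unaltered +45 degree condition.
    """
    return (-shift_deg) % 360.0

# A -40 degree (counterclockwise) shift mirrors onto +40 degrees.
print(mirror_shift(-40))
```

The same heading expressed as 320° mirrors to the same 40° value, so the comparison is insensitive to how the circular data are wrapped.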
Discussion Might the brightness and spectrum of lunar skylight also play a role here?
We have added a section to the discussion to mention the aspects of moonlight which may be important to these animals, including the spectrum, brightness and polarisation intensity.
Line 451 The sensitivity threshold to absolute light intensity would not be the only limiting factor here. Polarization sensitivity and spectral sensitivity may also play a role (moonlight is less UV rich than sunlight and the spectrum of twilight changes across the lunar cycle: Palmer & Johnsen, 2015).
Added this clarification.
Line 478 Instead of the "masculine ordinal" symbol (U+006F) used here, a degree symbol (U+00B0) should be used.
Ah thank you, we have replaced this everywhere in the text.
Line 485 It should be possible to calculate the misalignment between polarization pattern before and after this interruption of celestial cues. Does the magnitude of this misalignment help predict the size of the reorientation?
Reorientations are highly correlated with the shift size under the filter, which makes sense, as larger shifts mean that foragers need to turn back further to reorient both to the ambient pattern and to their visual route. Reorientation sizes do not show a consistent reduction compared to under-the-filter shifts when the lunar phase is low and the cue is potentially harder to detect.
I have reworked this line in the text, as I do not think there is much evidence for misalignment. It might be more precise to say that overnight periods where the moon is not visible may adversely impact the path integrator estimate, though the full impact of this celestial-cue gap, or whether other cues might also play a role, is currently unknown.
Line 642 "from their" should be "relative to"
Changed as requested
Figure 1B Some mention should be made of the differences in vegetation density.
Added a sentence to the figure caption discussing the differences in both vegetation along the horizon and canopy cover.
Figures 2-6 A reference line at 0 degrees change might help the reader to assess the size of orientation changes visually. Confidence intervals around the mean orientation change would also help here.
We have now added circular grid lines and confidence intervals to the circular plots. These should help make the heading changes clear to readers.
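For reference, the mean orientations shown on such circular plots cannot be computed as arithmetic means of angles; a minimal sketch of the standard vector-sum approach (illustrative, not the authors' plotting code):

```python
import math

def circular_mean_deg(angles_deg):
    """Mean direction of a set of headings, in degrees [0, 360).

    A naive arithmetic mean fails near the 0/360 wrap-around:
    headings of 350 and 10 degrees average to 180 arithmetically,
    but their true mean direction is 0 degrees.
    """
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360.0

print(circular_mean_deg([350, 10]))
print(circular_mean_deg([40, 50]))
```

Confidence intervals around this mean direction are likewise computed on the circle (e.g. from the mean resultant length), not from the spread of raw angle values.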
-
-
www.americanyawp.com
-
Recruiting Settlers to Carolina, 1666
This document was published 3 years after the colony of Carolina was established. The Carolina colony was established March 24, 1663.
-
Therefore all Artificers, as Carpenters, Wheelrights, Joiners, Coopers, Bricklayers, Smiths, or diligent Husbandmen and Laborers, that are willing to advance their fortunes, and live in a most pleasant healthful and fruitful Country, where Artificers are of high esteem, and used with all Civility and Courtesy imaginable…
The new colony offers a higher quality of life; those with labor skills can come to work in the colony and make more money and have a more advantageous life than in England while being highly respected and revered.
-
Such as are here tormented with much care how to get worth to gain a Livelihood, or that with their labor can hardly get a comfortable subsistence, shall do well to go to this place, where any man whatever, that is but willing to take moderate pains, may be assured of a most comfortable subsistence, and be in a way to raise his fortunes far beyond what he could ever hope for in England.
This basically states that if you have unfortunate or rough circumstances in England, you are sure to have a much better chance of being able to sustain yourself and/or attain wealth, status and land in the new colony. If this is their selling point, I'm interested to know how those with status and wealth in England felt about the possibility of a more level playing field in the new colony, and whether the recruitment garnered much interest from them.
-
Thirdly, Every Free-man and Free-woman that transport themselves and Servants by the 25 of March next, being 1667. shall have for Himself, Wife, Children, and Men-servants, for each 100 Acres of Land for him and his Heirs for ever, and for every Woman-servant and Slave 50 Acres, paying at most 1/2d. per acre, per annum, in lieu of all demands, to the Lords Proprietors:
Slave owners were awarded land based on their ownership of slaves, but slaves were awarded none of their own.
-
If any Maid or single Woman have a desire to go over, they will think themselves in the Golden Age, when Men paid a Dowry for their Wives; for if they be but Civil, and under 50 years of Age, some honest Man or other, will purchase them for their Wives.
Robert Horne advertised that it would be very likely for a woman who made the trip to be purchased as a wife by an honest man as long as she met two requirements: that she was under 50 years old and civil. This particular selling point, geared towards unmarried women, helps illustrate how important it was for a woman to be married in this time to secure her social, economic and legal status.
-
Robert Horne’
Who exactly is Robert Horne? Did he hold an important role within the colony or was he just hired to publish the advertisement?
-
Fourthly, Every Man-Servant at the expiration of their time, is to have of the Country a 100 Acres of Land to him and his heirs for ever, paying only 1/2d. per Acre, per annum, and the Women 50. Acres of Land on the same conditions;
The voices of the original occupants of this land being offered is completely omitted.
-
“of Genteel blood”
*being of or relating to the upper class or gentry.
-
Bottom.
*The lowest price a financial security, commodity or index has traded or been published in a specific time frame.
-
but the Right Honorable Lords Proprietors
The Right Honorable Lords Proprietors were a group of eight English Nobleman who had been granted the land of Carolina by Charles II as a reward for their support and efforts to help him regain the throne of England.
-
First, There is full and free Liberty of Conscience granted to all, so that no man is to be molested or called in question for matters of Religious Concern; but every one to be obedient to the Civil Government, worshipping God after their own way.
The promise of "full and free" religious liberty sounds good; however, I have to question how honest this promise of religious freedom is considering that part of the justification for initially traveling to the new world was to religiously influence and convert natives (and advance economically while doing so). Perhaps, some freedom from the dominance of Catholic Spain but not a full and free liberty to "worship God after their own way" as the text suggests.
-
Robert Horne wanted to entice English settlers to join the new colony of Carolina. According to Horne, natural bounty, economic opportunity, and religious liberty awaited anyone willing to make the journey. Horne wanted to recruit settlers of every social class, from those “of Genteel blood” to those who would have to sign a contract of indentured servitude.
Robert Horne was recruiting men and women of various skills, statuses and backgrounds who were willing to relocate to the new colony in order to help develop and expand Carolina, with a series of incentives and promises that included religious freedom, economic advancement and land.
-
-
www.biorxiv.org
-
eLife Assessment
This valuable study is a companion to a paper introducing a theoretical framework and methodology for identifying Cancer Driving Nucleotides (CDNs). The evidence that recurrent SNVs or CDNs are common in true cancer driver genes is convincing, with more limited evidence that many more undiscovered cancer driver mutations will have CDNs, and that this approach could identify these undiscovered driver genes with about 100,000 samples.
-
Reviewer #1 (Public review):
The study investigates Cancer Driving Nucleotides (CDNs) using the TCGA database, finding that these recurring point mutations could greatly enhance our understanding of cancer genomics and improve personalized treatment strategies. Despite identifying 50-150 CDNs per cancer type, the research reveals that a significant number remain undiscovered, limiting current therapeutic applications and underscoring the need for further larger-scale research.
Strengths:
The study provides a detailed examination of cancer-driving mutations at the nucleotide level, offering a more precise understanding than traditional gene-level analyses. The authors found a significant number of CDNs remain undiscovered, with only 0-2 identified per patient out of an expected 5-8, indicating that many important mutations are still missing. The study indicated that identifying more CDNs could potentially significantly impact the development of personalized cancer therapies, improving patient outcomes.
Weaknesses:
The challenges in direct functional testing of CDNs due to the complexity of tumor evolution and unknown mutation combinations limit the practical applicability of the findings.
-
Reviewer #2 (Public review):
Summary:
The study proposes that many cancer driver mutations are not yet identified but could be identified if they harbor recurrent SNVs. The paper leverages the quantitative analysis from Paper #1, which demonstrated that SNVs or CDNs seen 3 or more times are more likely due to selection (i.e., a driver mutation) than to chance or random mutation.
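The recurrence logic behind that threshold can be sketched with a binomial tail calculation (the numbers below are illustrative, not the paper's estimates):

```python
import math

def p_at_least(n, u, i):
    """Binomial tail: P(a site is mutated in >= i of n tumours)
    when each tumour mutates the site independently with
    probability u (the neutral, no-selection expectation)."""
    return sum(
        math.comb(n, k) * u**k * (1 - u) ** (n - k)
        for k in range(i, n + 1)
    )

# Illustrative: n = 1000 tumours, per-site mutation probability
# u = 1e-6, and roughly L = 3e7 coding positions surveyed.
n, u, L = 1000, 1e-6, 3e7
expected_3hit_sites = L * p_at_least(n, u, 3)
# Far fewer than one neutral site is expected to recur 3+ times
# across the whole exome, so observed 3-hit sites suggest selection.
print(expected_3hit_sites)
```

The same tail shrinks rapidly as the hit count i grows, which is why a fixed recurrence cutoff (here 3) can separate selection from mutational noise at a given sample size.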
Strengths:
Empirically, mutation frequency is an excellent marker of a driver gene because canonical driver mutations typically have recurrent SNVs. Using the TCGA database, the paper illustrates that CDNs can identify canonical driver mutations (Fig 3) and that most CDNs are likely to disrupt protein function (Fig 2). In addition, CDNs can be shared between cancer types (Fig 4).
Weaknesses:
Driver alteration validation is difficult, with disagreements on what defines a driver mutation and how many driver mutations are present in a cancer. The value proposed by the authors is that the identification of all driver genes can facilitate the design of patient-specific targeted therapies, but most targeted therapies are already directed towards known driver genes. There is an incomplete discussion of oncogenes (where activating mutations tend to target a single amino acid or repeat) and tumor suppressor genes (where inactivating mutations may be more spread across the gene). Other alterations (epigenetic, indels, translocations, CNVs) would be missed by this type of analysis.
The method could be more valuable when applied to the noncoding genome, where driver mutations in promoters or enhancers are relatively rare, or as yet to be discovered. Increasingly more cancers have had whole genome sequencing. Compared to WES, criteria for driver mutations in noncoding regions are less clear, and this method could potentially provide new noncoding driver CDNs. Observing the same mutation in more than one cancer specimen is empirically unusual, and the authors provide a solid quantitative analysis that indicates many recurrent mutations are likely to be cancer-driver mutations.
-
Author response:
The following is the authors’ response to the original reviews.
eLife Assessment <br /> This valuable study is a companion to a paper introducing a theoretical framework and methodology for identifying Cancer Driving Nucleotides (CDNs). While the evidence that recurrent SNVs or CDNs are common in true cancer driver genes is solid, the evidence that many more undiscovered cancer driver mutations will have CDNs, and that this approach could identify these undiscovered driver genes with about 100,000 samples, is limited.
Same criticism as in the eLife assessment of eLife-RP-RA-2024-99340 (https://elifesciences.org/reviewed-preprints/99340). Hence, please refer to the responses to the companion paper.
Public Reviews:
Reviewer #1 (Public Review):
The study investigates Cancer Driving Nucleotides (CDNs) using the TCGA database, finding that these recurring point mutations could greatly enhance our understanding of cancer genomics and improve personalized treatment strategies. Despite identifying 50-150 CDNs per cancer type, the research reveals that a significant number remain undiscovered, limiting current therapeutic applications, and underscoring the need for further larger-scale research.
Strengths:
The study provides a detailed examination of cancer-driving mutations at the nucleotide level, offering a more precise understanding than traditional gene-level analyses. The authors found a significant number of CDNs remain undiscovered, with only 0-2 identified per patient out of an expected 5-8, indicating that many important mutations are still missing. The study indicated that identifying more CDNs could potentially significantly impact the development of personalized cancer therapies, improving patient outcomes.
Weaknesses:
The study is constrained by relatively small sample sizes for each cancer type, which reduces the statistical power and robustness of the findings. ICGC and other large-scale WGS datasets are publicly available but were not included in this study.
Thanks. We have indeed used all public data, including GENIE (Figure 7 of the companion paper), ICGC, and other integrated resources such as COSMIC. The main study is based on TCGA because it is unbiased for estimating the probability of CDN occurrences: in many datasets the numerators are given but the denominators are not (the number of patients with the mutation / the total number of patients surveyed). In GENIE, we observed that E(u) estimated from the given sequencing panels is much smaller than in TCGA; this might be due to the selective reporting of nonsynonymous mutations, as synonymous mutations are generally considered irrelevant in tumorigenesis.
To be able to identify rare driver mutations, more samples are needed to improve the statistical power, which is well-known in cancer research. The challenges in direct functional testing of CDNs due to the complexity of tumor evolution and unknown mutation combinations limit the practical applicability of the findings.
We fully agree. We have now added a few sentences making clear that the theory allows us to see how much more can be gained by each stepwise increase in sample size. For example, when the sample size reaches 10^6, further increases will yield almost no gain in the confidence of CDNs identified (see figures of eLife-RP-RA-2024-99340). As pointed out in our provisional responses, an important strength of this pair of studies is that the results are testable. The complexity lies in the combination of mutations required for tumorigenesis, and the identification of such combinations is the main goal and strength of this pair of studies. We have added a few sentences to this effect.
While the importance of large sample sizes in identifying cancer drivers is well-recognized, the analytical framework presented in the companion paper (https://elifesciences.org/reviewed-preprints/99340) goes a step further by quantitatively elucidating the relationship between sample size and the resolution of CDN detection.
The question is very general, as it is about multigene interactions, or epistasis. The challenges are true in all aspects of evolutionary biology, for example, the genetics of reproductive isolation (Wu and Ting 2004). The issue of epistasis is difficult because most, if not all, of the underlying mutations have to be identified in order to carry out functional tests. While full identification is rarely feasible, it is precisely the objective of the CDN project. When the sample size increases to 100,000 for a cancer type, all point mutations for that cancer type should be identifiable.
The QC of the TCGA data was not very strict, i.e, "patients with more than 3000 coding region point mutations were filtered out as potential hypermutator phenotypes", it would be better to remove patients beyond +/- 3*S.D from the mean number of mutations for each cancer type. Given some point mutations with >3 hits in the TCGA dataset, they were just false positive mutation callings, particularly in the large repeat regions in the human genome.
Thanks. The GDC data portal offers mutation calls from multiple pipelines, enabling us to select mutations detected by at least two pipelines. While including patients with hypermutator phenotypes could introduce noise, as shown in Eq. 10 of the main text, our method for defining the upper limit of i* is relatively robust to fluctuations in the E(u) of the corresponding cancer population. Since readers may often ask about this, we have expanded the Methods section somewhat to emphasize this point.
The codes for the statistical calculation (i.e., calculation of Ai_e, et al) are not publicly available, which makes the findings hard to be replicated.
We have now updated the section of “Data Availability” in both papers. The key scripts for generating the major results are available at: https://gitlab.com/ultramicroevo/cdn_v1.
Reviewer #2 (Public Review):
Summary:
The study proposes that many cancer driver mutations are not yet identified but could be identified if they harbor recurrent SNVs. The paper leverages the quantitative analysis from Paper #1, which demonstrated that SNVs or CDNs seen 3 or more times are more likely to occur due to selection (i.e., a driver mutation) than by chance or random mutation.
Strengths:
Empirically, mutation frequency is an excellent marker of a driver gene because canonical driver mutations typically have recurrent SNVs. Using the TCGA database, the paper illustrates that CDNs can identify canonical driver mutations (Figure 3) and that most CDNs are likely to disrupt protein function (Figure 2). In addition, CDNs can be shared between cancer types (Figure 4).
Weaknesses:
Driver alteration validation is difficult, with disagreements on what defines a driver mutation, and how many driver mutations are present in a cancer. The value proposed by the authors is that the identification of all driver genes can facilitate the design of patient-specific targeting therapies, but most targeted therapies are already directed towards known driver genes. There is an incomplete discussion of oncogenes (where activating mutations tend to target a single amino acid or repeat) and tumor suppressor genes (where inactivating mutations may be more spread across the gene). Other alterations (epigenetic, indels, translocations, CNVs) would be missed by this type of analysis.
The above paragraph has three distinct points. We shall respond one by one.
First, … can facilitate the design of patient-specific targeting therapies, but most targeted therapies are already directed towards known driver genes…
We state the following in the Discussion, which shows that only a few of the best-known driver mutations have been targeted. It is accurate to say that < 5% of the CDNs we have identified are on the current targeting list. Furthermore, the list we have compiled is < 10% of what we expect to find.
Direct functional test of CDNs would be to introduce putative cancer-driving mutations and observe the evolution of tumors. Such a task of introducing multiple mutations that are collectively needed to drive tumorigenesis has been done only recently, and only for the best-known cancer driving mutations (Ortmann et al. 2015; Takeda et al. 2015; Hodis et al. 2022). In most tumors, the correct combination of mutations needed is not known. Clearly, CDNs, with their strong tumorigenic strength, are suitable candidates.
Second, “There is an incomplete discussion of oncogenes (where activating mutations tend to target a single amino acid or repeat) and tumor suppressor genes (where inactivating mutations may be more spread across the gene).”
We sincerely thank the reviewer for this insightful comment. Below are two new paragraphs in the Discussion pertaining to the point:
In this context, we should comment on the feasibility of targeting CDNs that may occur in either oncogenes (ONCs) or tumor suppressor genes (TSGs). It is generally accepted that ONCs drive tumorigenesis via gain-of-function (GOF) mutations, whereas TSGs derive their tumorigenic power from loss-of-function (LOF) mutations. It is worth pointing out that, since LOF mutations are likely to be more widespread across a gene, CDNs are biased toward GOF mutations. The often even distribution of nonsense mutations along the length of TSGs provides such evidence. As gene targeting aims to diminish gene function, GOF mutations are perceived to be targetable whereas LOF mutations are not. By extension, ONCs should be targetable but TSGs are not. This last assertion is not true, because mutations in TSGs may often be of the GOF kind as well.
The data often suggest that mis-sense mutations on TSGs are of the GOF kind. If mis-sense mutations are far more prevalent than nonsense mutations in tumors, the mis-sense mutations cannot all be LOF mutations. (After all, it is not possible to lose more function than a nonsense mutation does.) For example, AAA to CAA (K to Q) is a mis-sense mutation while AAA to TAA (K to stop) is a nonsense mutation. In a separate study (referred to as the escape-route analysis), we found many cases where mis-sense mutations on TSGs are more prevalent (> 10X) than nonsense mutations. Another well-known example is the distribution of nonsense mutations in TSGs. For example, on APC, a prominent TSG, nonsense mutations are far more common in the middle 20% of the gene than in the rest (Zhang and Shay 2017; Erazo-Oliveras et al. 2023). The pattern suggests that even these nonsense mutations could have GOF properties.
The following response is about the clinical implications of our CDN analysis. Canonical targeted therapy often relies on tyrosine kinase inhibitors (TKIs) (Dang et al. 2017; Danesi et al. 2021; Waarts et al. 2022). Theoretically, any intervention that suppresses the expression of gain-of-function (GOF) CDNs could have therapeutic value in cancer treatment. This leads us to a discussion of oncogenes versus TSGs in the context of GOF / LOF (loss-of-function) mutations. Not all mutations in oncogenes have oncogenic effects; besides, truncating mutations in oncogenes are often subject to negative selection (Bányai et al. 2021). The identification of CDNs within oncogenes is therefore crucial for developing effective cancer treatment guidelines. Secondly, while TSGs are generally believed to promote cancer development via loss-of-function mutations, research suggests that certain mutations within TSGs can have GOF-like effects, such as the dominant-negative effect of truncated TP53 mutations (Marutani et al. 1999; de Vries et al. 2002; Gerasimavicius et al. 2022). Characterizing driver mutations as GOF or LOF could expand the scope of targeted cancer therapy. We will address this issue in a third study in preparation.
The method could be more valuable when applied to the noncoding genome, where driver mutations in promoters or enhancers are relatively rare, or as yet to be discovered. Increasingly more cancers have had whole genome sequencing. Compared to WES, criteria for driver mutations in noncoding regions are less clear, and this method could potentially provide new noncoding driver CDNs. Observing the same mutation in more than one cancer specimen is empirically unusual, and the authors provide a solid quantitative analysis that indicates many recurrent mutations are likely to be cancer-driver mutations.
Again, we are grateful for the comments which prompt us to expand a paragraph in Discussion, reproduced below.
The CDN approach has two additional applications. First, it can be used to find CDNs in non-coding regions. Although the number of whole genome sequences at present is still insufficient for systematic CDN detection, the preliminary analysis suggests that the density of CDNs in non-coding regions is orders of magnitude lower than in coding regions. Second, CDNs can also be used in cancer screening with the advantage of efficiency as the targeted mutations are fewer. For the same reason, the false negative rate should be much lower too. Indeed, the false positive rate should be far lower than the gene-based screen which often shows a false positive rate of >50% (supplement File S1).
Again, we are grateful that Reviewer #2 has addressed the potential value of our study in finding cancer drivers in non-coding regions. A major challenge in this area lies in defining the appropriate L value as presented in Eq. 10. In the main text, we used a gamma distribution to account for the variability of mutation rates across sites in the coding region. For non-coding regions, we will categorize regions based on biological annotations. The goal is to set different i* cutoffs for different genomic regions (such as heterochromatin / euchromatin, GC-rich regions or centromeric regions) and to avoid false positive CDN calls in repeat regions (Elliott and Larsson 2021; Peña et al. 2023).
References
Bányai L, Trexler M, Kerekes K, Csuka O, Patthy L. 2021. Use of signals of positive and negative selection to distinguish cancer genes and passenger genes. Elife 10:e59629.
Danesi R, Fogli S, Indraccolo S, Del Re M, Dei Tos AP, Leoncini L, Antonuzzo L, Bonanno L, Guarneri V, Pierini A, et al. 2021. Druggable targets meet oncogenic drivers: opportunities and limitations of target-based classification of tumors and the role of Molecular Tumor Boards. ESMO Open 6:100040.
Dang CV, Reddy EP, Shokat KM, Soucek L. 2017. Drugging the “undruggable” cancer targets. Nat Rev Cancer 17:502–508.
Elliott K, Larsson E. 2021. Non-coding driver mutations in human cancer. Nat Rev Cancer 21:500–509.
Erazo-Oliveras A, Muñoz-Vega M, Mlih M, Thiriveedi V, Salinas ML, Rivera-Rodríguez JM, Kim E, Wright RC, Wang X, Landrock KK, et al. 2023. Mutant APC reshapes Wnt signaling plasma membrane nanodomains by altering cholesterol levels via oncogenic β-catenin. Nat Commun 14:4342.
Gerasimavicius L, Livesey BJ, Marsh JA. 2022. Loss-of-function, gain-of-function and dominant-negative mutations have profoundly different effects on protein structure. Nat Commun 13:3895.
Hodis E, Triglia ET, Kwon JYH, Biancalani T, Zakka LR, Parkar S, Hütter J-C, Buffoni L, Delorey TM, Phillips D, et al. 2022. Stepwise-edited, human melanoma models reveal mutations’ effect on tumor and microenvironment. Science 376:eabi8175.
Marutani M, Tonoki H, Tada M, Takahashi M, Kashiwazaki H, Hida Y, Hamada J, Asaka M, Moriuchi T. 1999. Dominant-negative mutations of the tumor suppressor p53 relating to early onset of glioblastoma multiforme. Cancer Res 59:4765–4769.
Ortmann CA, Kent DG, Nangalia J, Silber Y, Wedge DC, Grinfeld J, Baxter EJ, Massie CE, Papaemmanuil E, Menon S, et al. 2015. Effect of Mutation Order on Myeloproliferative Neoplasms. N Engl J Med 372:601–612.
Peña MV de la, Summanen PAM, Liukkonen M, Kronholm I. 2023. Chromatin structure influences rate and spectrum of spontaneous mutations in Neurospora crassa. Genome Res. 33:599–611.
Takeda H, Wei Z, Koso H, Rust AG, Yew CCK, Mann MB, Ward JM, Adams DJ, Copeland NG, Jenkins NA. 2015. Transposon mutagenesis identifies genes and evolutionary forces driving gastrointestinal tract tumor progression. Nat Genet 47:142–150.
de Vries A, Flores ER, Miranda B, Hsieh H-M, van Oostrom CThM, Sage J, Jacks T. 2002. Targeted point mutations of p53 lead to dominant-negative inhibition of wild-type p53 function. Proceedings of the National Academy of Sciences 99:2948–2953.
Waarts MR, Stonestrom AJ, Park YC, Levine RL. 2022. Targeting mutations in cancer. J Clin Invest 132:e154943.
Wu C-I, Ting C-T. 2004. Genes and speciation. Nat Rev Genet 5:114–122.
Zhang L, Shay JW. 2017. Multiple Roles of APC and its Therapeutic Implications in Colorectal Cancer. JNCI: Journal of the National Cancer Institute 109:djw332.
-
-
www.biorxiv.org
-
Author response:
The following is the authors’ response to the original reviews.
eLife assessment
This valuable paper reports a theoretical framework and methodology for identifying Cancer Driving Nucleotides (CDNs), primarily based on single nucleotide variant (SNV) frequencies. A variety of solid approaches indicate that a mutation recurring three or more times is more likely to reflect selection rather than being the consequence of a mutation hotspot. The method is rigorously quantitative, though the requirement for larger datasets to fully identify all CDNs remains a noted limitation. The work will be of broad interest to cancer geneticists and evolutionary biologists.
The key criticism, “the requirement for larger datasets to fully identify all CDNs remains a noted limitation,” appears in both reviews. We have clarified this issue in the main text; the relevant parts are copied below. The response below also addresses many comments in the reviews. In addition, the Discussion of eLife-RP-RA-2024-99341 has been substantially expanded to answer the questions of Reviewer 2.
We shall answer the boldface comment in three ways. First, it can be answered using GENIE data. Fig. 7 of the main text (eLife-RP-RA-2024-99340) shows that, when n increases from ~1,000 to ~9,000, the number of discovered CDNs increases 3- to 5-fold, most of which come from the two-hit class. Hence, the power of discovering more CDNs with larger datasets is evident. By extrapolation, a sample size of 100,000 should yield 90% of all CDNs, as calculated here. (Fig. 7 also addresses the query of whether we have used datasets other than TCGA; we have indeed used all public data, including GENIE and COSMIC.)
Second, the power of discovering more cancer driver genes by our theory is evident even without using larger datasets. Table 3 of the companion study (eLife-RP-RA-2024-99341) shows that, averaged across cancer types, the conventional method would identify 45 CDGs while the CDN method tallies 258 CDGs. The power of the CDN method is demonstrated. This is because the conventional approach has to identify CDGs (cancer driver genes) in order to identify the CDNs they carry. However, many CDNs occur in non-CDGs and are thus missed by the conventional approach. In Supplementary File S2, we have included a full list of CDNs discovered in our study, along with population allele frequency annotations from gnomAD. The distribution patterns of these CDNs across different cancer types show their pan-cancer properties as further explored in the companion paper.
Third, while many, or even most, CDNs occur in non-CDGs and are thus missed, the conventional approach also includes non-CDN mutations in CDGs. This is illustrated in Fig. 5 of the companion study (eLife-RP-RA-2024-99341), which shows the adverse effect of misidentification of CDNs by the conventional approach. In that analysis, gene-targeted therapy is effective if the patient has CDN mutations in EGFR, but the effect is reversed if the EGFR mutations are non-CDN mutations.
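The extrapolation in the first point can be illustrated with a minimal sketch (our own simplification for illustration, not the paper's actual power calculation): if a CDN is mutated in a fraction f of patients, the probability that it reaches the recurrence threshold i ≥ 3 in a cohort of n patients is a binomial tail, which rises steeply with n.

```python
from math import comb

def p_discover(n, f, i_min=3):
    """Probability that a site mutated in a fraction f of patients
    reaches at least i_min occurrences among n sequenced patients
    (simple binomial model of CDN discovery)."""
    p_below = sum(comb(n, k) * f**k * (1 - f)**(n - k) for k in range(i_min))
    return 1 - p_below

# A hypothetical driver mutation present in 0.1% of patients is rarely
# discovered at n ~ 1,000 but almost surely discovered at n ~ 100,000.
for n in (1_000, 9_000, 100_000):
    print(n, round(p_discover(n, 0.001), 4))
```

Under this toy model, the discovery probability climbs from roughly 8% at n = 1,000 to essentially 100% at n = 100,000, consistent with the trend described above.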
Reviewer #1 (Public Review):
The authors developed a rigorous methodology for identifying all Cancer Driving Nucleotides (CDNs) by leveraging the concept of massively repeated evolution in cancer. By focusing on mutations that recur frequently in pan-cancer, they aimed to differentiate between true driver mutations and neutral mutations, ultimately enhancing the understanding of the mutational landscape that drives tumorigenesis. Their goal was to call a comprehensive catalogue of CDNs to inform more effective targeted therapies and address issues such as drug resistance.
Strengths
(1) The authors introduced a concept of using massively repeated evolution to identify CDNs. This approach recognizes that advantageous mutations recur frequently (at least 3 times) across cancer patients, providing a lens to identify true cancer drivers.
(2) The theory showed the feasibility of identifying almost all CDNs if the number of sequenced patients increases to 100,000 for each cancer type.
Weaknesses
(1) The methodology remains theoretical and no novel true driver mutations were identified in this study.
We now address the weakness criticism, which is gratefully received.
The second part of the criticism (no novel true driver mutations were identified in this study) has been answered in the long response to the eLife assessment above. The first part, “The methodology remains theoretical,” is somewhat unclear; it might simply be the lead-in to the second part. Just in case, we interpret the word “theoretical” to mean “lacking experimental proof” and answer below.
As Reviewer #1 noted, a common limitation of theoretical and statistical analyses of cancer drivers is the need to validate their selective advantage through in vitro or in vivo functional testing. This concern is echoed by both reviewers in the companion paper (eLife-RP-RA-2024-99341), prompting us to consider the methodology for functional testing of potential cancer drivers. An intuitive approach would involve introducing putative driver mutations into normal cells and observing phenotypic transformation in vitro and in vivo. In a recent stepwise-edited human melanoma model, Hodis et al. demonstrated that disease-relevant phenotypes depend on the “correct” combinations of multiple driver mutations (Hodis et al. 2022). Other high-throughput strategies can be broadly categorized into two approaches: (1) introducing candidate driver mutations into pre-malignant model systems that already harbor a canonical mutant driver (Drost and Clevers 2018; Grzeskowiak et al. 2018; Michels et al. 2020) and (2) introducing candidate driver mutations into growth factor-dependent cell models and assessing their impact on resulting fitness (Bailey et al. 2018; Ng et al. 2018). The underlying assumption of these strategies is that the fitness outcomes of candidate driver mutations are influenced by pre-existing driver mutations and the specific pathways or cancer hallmarks being investigated. This confines the functional test of potential cancer driver mutations to conventional cancer pathways. A comprehensive identification of CDNs is therefore crucial to overcome these limitations. In conjunction with other driver signal detection methods, our study aims to provide a more comprehensive profile of driver mutations, thereby enabling the functional testing of drivers involved in non-conventional cancer evolution pathways.
(2) Different cancer types have unique mutational landscapes. The methodology, while robust, might face challenges in uniformly identifying CDNs across various cancers with distinct genetic and epigenetic contexts.
We appreciate the comment. Indeed, different cancer types should have different genetic and epigenetic landscapes. In that case, one might have expected CDNs to be poorly shared among cancer types. However, as reported in Fig. 4 of the companion study, the sharing of CDNs across cancer types is far more common than the sharing of CDGs (Cancer Driving Genes). We suggest that CDNs have a much higher resolution than CDGs, in which the signals are diluted by non-driver mutations. In other words, although the mutational landscape may be cancer-type specific, the pan-cancer selective pressure may be sufficiently high to permit the detection of CDN sharing among cancer types.
Below, we respond in greater detail. Epigenetic factors, such as chromatin states, methylation/acetylation levels, and replication timing, can provide valuable insights when analyzing mutational landscapes at a regional scale (Stamatoyannopoulos et al. 2009; Lawrence et al. 2013; Makova and Hardison 2015; Baylin and Jones 2016; Alexandrov et al. 2020; Abascal et al. 2021; Sherman et al. 2022). However, at the site-specific level, the effectiveness of these covariates in predicting mutational landscapes depends on their integration into a detailed model, and overemphasizing them could lead to false negatives for known driver mutations (Hess et al. 2019; Elliott and Larsson 2021). In Figure 3B of the main text, we illustrate the discrepancy between the mutation rate predictions from Dig and empirical observations. Ideally, no covariates would be needed given a sufficiently large sample size: each mutable genomic site would accumulate enough mutations to yield statistical significance, and synonymous mutations alone would suffice to characterize the mutational landscape. In this sense, integrating mutational covariates is a compromise necessitated by current sample sizes. In our study, the effect of unique mutational landscapes is captured by E(u), the mean mutation rate for each cancer type, and we further account for the variability of site-level mutability using a gamma distribution. The primary goal of our study is to determine the upper limit of mutation recurrence achievable under mutational mechanisms alone. While selection acts regardless of local genomic features, mutational hotspots should exhibit common characteristics determined by their underlying mechanisms; in the main text, we attempted to identify such shared features among CDNs. Until these mutational mechanisms are fully understood, CDNs should be considered potential driver mutations.
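The gamma-Poisson formulation can be sketched as follows (illustrative parameter values, not the fitted values from the study): when a site's mutation rate is gamma-distributed, its mutation count across the cohort is negative binomial, and the tail P(i ≥ 3), multiplied by the number of nonsynonymous sites, bounds the number of sites expected to reach the recurrence cutoff by mutation alone.

```python
from math import exp, lgamma, log

def nb_pmf(i, mean, shape):
    """P(i mutations at a site) under a gamma-Poisson mixture:
    Poisson rates drawn from a gamma with the given mean and shape,
    which marginalizes to a negative binomial."""
    p = mean / (mean + shape)  # negative-binomial success probability
    return exp(lgamma(i + shape) - lgamma(shape) - lgamma(i + 1)
               + shape * log(1 - p) + i * log(p))

def p_recurrence_at_least(i_min, mean, shape):
    return 1 - sum(nb_pmf(i, mean, shape) for i in range(i_min))

# Illustrative numbers: mean mutation count of 0.01 per site across the
# cohort, strong overdispersion (shape = 0.5), ~22.5 million sites.
p3 = p_recurrence_at_least(3, 0.01, 0.5)
expected_by_mutation_alone = 22_540_623 * p3
```

Lowering the shape parameter (stronger site-to-site variability in mutability) fattens the tail, which is why the recurrence cutoff must be set above what mutational mechanisms alone can plausibly produce.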
(3) L223, the statement "In other words, the sequences surrounding the high-recurrence sites appear rather random.". Since it was a pan-cancer analysis, the unique patterns of each cancer type could be strongly diluted in the pan-cancer data.
We now state that the analyses of mutation characteristics have been applied to the individual cancer types and did not find any pattern that deviates from randomness. Nevertheless, it may be argued that, with the exception of those with sufficiently large sample sizes such as lung and breast cancers, most datasets do not have the power to reject the null hypothesis. To alleviate this concern, we applied the ResNet and LSTM/GRU methods to search for potential mutation motifs within each cancer type. These methods are more powerful than the one originally used, but the results are the same: no cancer type yields a mutation pattern that rejects the null hypothesis of randomness (see below).
As a positive control, we used these methods to discover the splicing sites of human exons. When the sequences are aligned with the splicing site at the center (position 51 in the following plot), the sequence motif looks like:
Author response image 1.
5-prime
Author response image 2.
3-prime
However, to account for the potential influence of distance from the mutant site in the motif analysis, we randomly shuffled the splicing sites within a specified window around the alignment center; their sequence logo now looks like:
Author response image 3.
5-prime shuffled
Author response image 4.
3-prime shuffled
Author response image 5.
random sequences from coding regions
The classification results for the shuffled 5-prime (donor), 3-prime (acceptor), and random sequences from coding regions (Random CDS) are presented in Author response table 1 (the accuracy for the aligned sequences, approximately 99%, is not shown here).
Author response table 1.
With these positive controls (splicing-site motifs) validating our methodology, we applied the same model architecture to train and test for potential mutational motifs at CDN sites. All models achieved approximately 50% accuracy (i.e., chance level) in the CDN motif analysis, suggesting that the sequence contexts surrounding CDN sites do not differ significantly from other coding regions of the genome. This further implies that the recurrence of mutations at CDN sites is more likely driven by selection than by mutational mechanisms.
Note that this preliminary analysis may be limited by insufficient training data for CDN sites. Future studies will require larger sample sizes and more sophisticated models to address these limitations.
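The logic of the shuffling control can be reproduced in miniature (a toy illustration with a synthetic 'GT' motif and per-position information content, not the ResNet/LSTM classifiers used above): aligning a motif concentrates the signal at fixed positions, while shuffling its location within a window dilutes it toward background.

```python
import random
from math import log2

def information_content(seqs):
    """Per-position information content (2 - entropy, in bits) of a
    set of equal-length DNA sequences."""
    length = len(seqs[0])
    out = []
    for pos in range(length):
        counts = {b: 0 for b in "ACGT"}
        for s in seqs:
            counts[s[pos]] += 1
        ent = -sum(c / len(seqs) * log2(c / len(seqs))
                   for c in counts.values() if c)
        out.append(2 - ent)
    return out

random.seed(0)
def seq_with_gt(offset):
    """Random 20-mer with a 'GT' dinucleotide placed at position 9+offset."""
    s = [random.choice("ACGT") for _ in range(20)]
    s[9 + offset:11 + offset] = "GT"
    return "".join(s)

aligned  = [seq_with_gt(0) for _ in range(2000)]
shuffled = [seq_with_gt(random.randint(-4, 4)) for _ in range(2000)]
# The aligned set shows full information content at positions 9-10;
# shuffling within a window spreads and weakens the signal.
```

In the real analysis, the same contrast appears as ~99% classification accuracy for aligned splice sites versus ~50% for CDN sequence contexts.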
(4) To solidify the findings, the results need to be replicated in an independent dataset.
Figure 7 validates our CDN findings using the GENIE dataset, which primarily consists of targeted sequencing data from various panels. By focusing on the same genomic regions sequenced by GENIE, we observed a 3-5 fold increase in the number of discovered CDNs as sample size increased from approximately 1000 to 9000. Moreover, the majority of CDNs identified in TCGA were confirmed as CDNs in GENIE.
(5) The key scripts and the list of key results (i.e., CDN sites with i ≥ 3) need to be shared to enable replication, validation, and further research. So far, only CDN sites with i ≥ 20 have been shared.
We have now updated the “Data Availability” section in the main text, the corresponding scripts for key results are available on Gitlab at: https://gitlab.com/ultramicroevo/cdn_v1.
(6) The versions of data used in this study are not clearly detailed, such as the specific version of gnomAD and the version and date of TCGA data downloaded from the GDC Data Portal.
The versions of data sources have now been updated in the revised manuscript.
Recommendations For The Authors:
(1) L119 states "22.7 million nonsynonymous sites," but Table 1 lists the number as 22,540,623 (22.5 million). This discrepancy needs to be addressed for consistency.
(2) Figure 2B, there is an unexplained drop in the line at i = 6 and 7 (from 83 to 45). Clarification is needed on why this drop occurs.
(3) Figure 3A, for the CNS type, data for recurrence at 8 and 9 are missing. An explanation should be provided for this absence.
(4) L201, the title refers to "100-mers," but L218 mentions "101-mers." This inconsistency needs to be corrected to ensure clarity and accuracy.
(5) Figures 6 and 7 currently lack titles. Titles should be added to these figures to improve readability.
Thanks. All corrections have been incorporated into the revised manuscript.
Reviewer #2 (Public Review):
Summary:
The authors propose that cancer driver mutations can be identified by Cancer Driving Nucleotides (CDNs). CDNs are defined as SNVs that occur frequently in genes. There are many ways to define cancer driver mutations; a strength and weakness of this one is its reliance on statistics to define them.
Strengths:
There are many well-known approaches and studies that have already identified many canonical driver mutations. A potential strength is that mutation frequencies may be able to identify as yet unrecognized driver mutations. The authors use a previously developed method to estimate mutation hotspots across the genome (Dig, Sherman et al. 2022). That publication has already used cancer sequence data to infer driver mutations based on higher-than-expected mutation frequencies. The advance here is to further illustrate that recurrent mutations (estimated at 3 or more mutations (CDNs) at the same base) are more likely to be the result of selection for a driver mutation (Figure 3). Further analysis indicates that mutation sequence context (Figure 4) or mutation mechanisms (Figure 5) are unlikely to be major causes of recurrent point mutations. Finally, they calculate (Figure 6) that most driver mutations identifiable by the CDN approach could be identified with about 100,000 to one million tumor coding genomes.
Weaknesses:
The manuscript provides specific examples where recurrent mutations identify known driver mutations but does not identify "new" candidate driver mutations. Driver mutation validation is difficult, and at least clinically, frequency (i.e., observation in multiple other cancer samples) is indeed commonly used to judge whether an SNV has driver potential. The method would miss alternative ways to trigger driver alterations (translocations, indels, epigenetics, CNVs).
Nevertheless, the value of the manuscript is its quantitative analysis of why mutation frequencies can identify cancer driver mutations.
Recommendations For The Authors
Whereas the analysis of driver mutations in WES has been extensive, the application of the method to WGS data (i.e., the noncoding regions) would provide new information.
We appreciate Reviewer #2's suggestion of applying our method to noncoding regions. Currently, the background mutation model is based on site-level mutations in coding regions, which hinders its direct application to other mutation types such as CNVs, translocations, and indels. We acknowledge that the proportion of patients with driver events involving CNVs (73%) is comparable to that with coding point mutations (76%), as reported in the PCAWG analysis (Fig. 2A of Campbell et al. 2020). In future studies, we will attempt to establish a CNV-based background mutation rate model to identify positive selection signals driving tumorigenesis.
References
Abascal F, Harvey LMR, Mitchell E, Lawson ARJ, Lensing SV, Ellis P, Russell AJC, Alcantara RE, Baez-Ortega A, Wang Y, et al. 2021. Somatic mutation landscapes at single-molecule resolution. Nature:1–6.
Alexandrov LB, Kim J, Haradhvala NJ, Huang MN, Tian Ng AW, Wu Y, Boot A, Covington KR, Gordenin DA, Bergstrom EN, et al. 2020. The repertoire of mutational signatures in human cancer. Nature 578:94–101.
Bailey MH, Tokheim C, Porta-Pardo E, Sengupta S, Bertrand D, Weerasinghe A, Colaprico A, Wendl MC, Kim J, Reardon B, et al. 2018. Comprehensive Characterization of Cancer Driver Genes and Mutations. Cell 173:371-385.e18.
Baylin SB, Jones PA. 2016. Epigenetic Determinants of Cancer. Cold Spring Harb Perspect Biol 8:a019505.
Campbell PJ, Getz G, Korbel JO, Stuart JM, Jennings JL, Stein LD, Perry MD, Nahal-Bose HK, Ouellette BFF, Li CH, et al. 2020. Pan-cancer analysis of whole genomes. Nature 578:82–93.
Drost J, Clevers H. 2018. Organoids in cancer research. Nat Rev Cancer 18:407–418.
Elliott K, Larsson E. 2021. Non-coding driver mutations in human cancer. Nat Rev Cancer 21:500–509.
Grzeskowiak CL, Kundu ST, Mo X, Ivanov AA, Zagorodna O, Lu H, Chapple RH, Tsang YH, Moreno D, Mosqueda M, et al. 2018. In vivo screening identifies GATAD2B as a metastasis driver in KRAS-driven lung cancer. Nat Commun 9:2732.
Hess JM, Bernards A, Kim J, Miller M, Taylor-Weiner A, Haradhvala NJ, Lawrence MS, Getz G. 2019. Passenger Hotspot Mutations in Cancer. Cancer Cell 36:288-301.e14.
Hodis E, Triglia ET, Kwon JYH, Biancalani T, Zakka LR, Parkar S, Hütter J-C, Buffoni L, Delorey TM, Phillips D, et al. 2022. Stepwise-edited, human melanoma models reveal mutations’ effect on tumor and microenvironment. Science 376:eabi8175.
Lawrence MS, Stojanov P, Polak P, Kryukov GV, Cibulskis K, Sivachenko A, Carter SL, Stewart C, Mermel CH, Roberts SA, et al. 2013. Mutational heterogeneity in cancer and the search for new cancer-associated genes. Nature 499:214–218.
Makova KD, Hardison RC. 2015. The effects of chromatin organization on variation in mutation rates in the genome. Nat Rev Genet 16:213–223.
Michels BE, Mosa MH, Streibl BI, Zhan T, Menche C, Abou-El-Ardat K, Darvishi T, Członka E, Wagner S, Winter J, et al. 2020. Pooled In Vitro and In Vivo CRISPR-Cas9 Screening Identifies Tumor Suppressors in Human Colon Organoids. Cell Stem Cell 26:782-792.e7.
Ng PK-S, Li J, Jeong KJ, Shao S, Chen H, Tsang YH, Sengupta S, Wang Z, Bhavana VH, Tran R, et al. 2018. Systematic Functional Annotation of Somatic Mutations in Cancer. Cancer Cell 33:450-462.e10.
Sherman MA, Yaari AU, Priebe O, Dietlein F, Loh P-R, Berger B. 2022. Genome-wide mapping of somatic mutation rates uncovers drivers of cancer. Nat Biotechnol 40:1634–1643.
Stamatoyannopoulos JA, Adzhubei I, Thurman RE, Kryukov GV, Mirkin SM, Sunyaev SR. 2009. Human mutation rate associated with DNA replication timing. Nat Genet 41:393–395.
-
eLife Assessment
This important paper introduces a theoretical framework and methodology for identifying Cancer Driving Nucleotides (CDNs), primarily based on single nucleotide variant (SNV) frequencies. A variety of solid approaches indicate that a mutation recurring three or more times is more likely to reflect selection rather than being the consequence of a mutation hotspot. The method is rigorously quantitative, though the requirement for larger datasets to fully identify all CDNs remains a noted limitation. The work will be of broad interest to cancer geneticists and evolutionary biologists.
-
Reviewer #1 (Public review):
The authors developed a rigorous methodology for identifying all Cancer Driving Nucleotides (CDNs) by leveraging the concept of massively repeated evolution in cancer. By focusing on mutations that recur frequently in pan-cancer, they aimed to differentiate between true driver mutations and neutral mutations, ultimately enhancing the understanding of the mutational landscape that drives tumorigenesis. Their goal was to call a comprehensive catalogue of CDNs to inform more effective targeted therapies and address issues such as drug resistance.
Strengths
(1) The authors introduced a concept of using massively repeated evolution to identify CDNs. This approach recognizes that advantageous mutations recur frequently (at least 3 times) across cancer patients, providing a lens to identify true cancer drivers.
(2) The theory showed the feasibility of identifying almost all CDNs if the number of sequenced patients increases to 100,000 for each cancer type.
Weaknesses
(1) No novel true driver mutations were identified in this study.
(2) Different cancer types have unique mutational landscapes. The methodology, while robust, might face challenges in uniformly identifying CDNs across various cancers with distinct genetic and epigenetic contexts.
(3) The statement "In other words, the sequences surrounding the high-recurrence sites appear rather random.". Since it was a pan-cancer analysis, the unique patterns of each cancer type could be strongly diluted in the pan-cancer data.
-
Reviewer #2 (Public review):
Summary:
The authors propose that cancer driver mutations can be identified by Cancer Driving Nucleotides (CDNs). CDNs are defined as SNVs that occur frequently in genes. There are many ways to define cancer driver mutations; a strength and weakness of this one is its reliance on statistics to define them.
Strengths:
There are many well-known approaches and studies that have already identified many canonical driver mutations. A potential strength is that mutation frequencies may be able to identify as yet unrecognized driver mutations. The authors use a previously developed method to estimate mutation hotspots across the genome (Dig, Sherman et al. 2022). That publication has already used cancer sequence data to infer driver mutations based on higher-than-expected mutation frequencies. The advance here is to further illustrate that recurrent mutations (estimated at 3 or more mutations (CDNs) at the same base) are more likely to be the result of selection for a driver mutation (Fig 3). Further analysis indicates that mutation sequence context (Fig 4) or mutation mechanisms (Fig 5) are unlikely to be major causes of recurrent point mutations. Finally, they calculate (Fig 6) that most driver mutations identifiable by the CDN approach could be identified with about 100,000 to one million tumor coding genomes.
Weaknesses:
The manuscript provides specific examples where recurrent mutations identify known driver mutations, but does not identify "new" candidate driver mutations. Driver mutation validation is difficult, and at least clinically, frequency (i.e., observation in multiple other cancer samples) is indeed commonly used to judge whether an SNV has driver potential. The method would miss alternative ways to trigger driver alterations (translocations, indels, epigenetics, CNVs). Nevertheless, the value of the manuscript is its quantitative analysis of why mutation frequencies can identify cancer driver mutations.
-
eLife Assessment
The authors proposed an important novel deep-learning framework to estimate posterior distributions of tissue microstructure parameters. The method shows superior performance to conventional Bayesian approaches and there is convincing evidence for generalizing the method to use data from different protocol acquisitions and work with models of varying complexity.
-
Reviewer #1 (Public review):
The authors proposed a framework to estimate the posterior distribution of parameters in biophysical models. The framework has two modules: the first MLP module is used to reduce data dimensionality and the second NPE module is used to approximate the desired posterior distribution. The results show that the MLP module can capture additional information compared to manually defined summary statistics. By using the NPE module, the repetitive evaluation of the forward model is avoided, thus making the framework computationally efficient. The results show the framework has promise in identifying degeneracy. This is an interesting work.
Comment on revised version:
The authors have addressed all the raised concerns and made appropriate modifications to the manuscript. The changes have improved the clarity, methodology, and overall quality of the paper. Given these improvements, I believe the paper now meets the standards for publication in this journal.
-
Reviewer #2 (Public review):
Summary:
The authors improve the work of Jallais et al. (2022) by including a novel module capable of automatically learning feature selection from different acquisition protocols inside a supervised learning framework. Combining the module above with an estimation framework for estimating the posterior distribution of model parameters, they obtain rich probabilistic information (uncertainty and degeneracy) on the parameters in a reasonable computation time.
The main contributions of the work are:
(1) The whole framework allows the user to avoid manually defining summary statistics, which may be slow and tedious and affect the quality of the results.
(2) The authors tested the proposal by tackling three different biophysical models for brain tissue and using data with characteristics commonly used by the diffusion-MR-microstructure research community.
(3) The authors validated their method well with the state-of-the-art.
(4) The methodology allows quantification of the model's inherent degeneracy and how it increases with strong noise.
The authors showed the utility of their proposal by computing complex parameter descriptors automatically in an achievable time for three different and relevant biophysical models.
Importantly, this proposal promotes tackling, analyzing, and considering the degenerated nature of the most used models in brain microstructure estimation.
-
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public Review):
The authors proposed a framework to estimate the posterior distribution of parameters in biophysical models. The framework has two modules: the first MLP module is used to reduce data dimensionality and the second NPE module is used to approximate the desired posterior distribution. The results show that the MLP module can capture additional information compared to manually defined summary statistics. By using the NPE module, the repetitive evaluation of the forward model is avoided, thus making the framework computationally efficient. The results show the framework has promise in identifying degeneracy. This is an interesting work.
We thank the reviewer for the positive comments made on our manuscript.
Reviewer #1 (Recommendations For The Authors):
I have some minor comments.
(1) The uGUIDE framework has two modules, MLP and NPE. Why are the two modules trained jointly? The MLP module is used to reduce data dimensionality. Given that the number of features for different models is all fixed to 6, why does one need different MLPs? This module should, in principle, be general-purpose and independent of the model used.
The MLP must be trained together with the NPE module to maximise inference performance in terms of accuracy and precision. Although the number of features predicted by the MLP was fixed to six, the characteristics of these six features can be very different, depending on the chosen forward model and the available data, as we showed in Appendix 1 Figure 1. Training the MLP independently of the NPE would result in suboptimal performance of µGUIDE, with potentially higher bias and variance of the predicted posterior distributions. We have now added these considerations in the Methods section.
(2) The authors mentioned at L463 that all the 3 models use 6 features. From L445 to L447, it seems model 3 has 7 unknown parameters. How can one use 6 features to estimate 7 unknowns?
Thank you for pointing out the lack of clarity regarding the parameters to estimate in this section. Model 3 is a three-compartment model whose parameters of interest are the signal fraction and diffusivity of water diffusing in the neurite space (fn and Dn), the neurite orientation dispersion index (ODI), the signal fraction in cell bodies (fs), a proxy for soma radius and diffusivity (Cs), and the signal fraction and diffusivity in the extracellular space (fe and De). The signal fractions are constrained by the relationship fn + fs + fe = 1, hence fe is calculated from the estimated fn and fs. This leaves us with 6 parameters to estimate: fn, Dn, ODI, fs, Cs, De. We clarified this in the revised version of the paper.
(3) L471, Rician noise is not a proper term. Rician distribution is the distribution of pixel intensities observed in the presence of noise. And Rician distribution is the result of magnitude reconstruction. See "Noise in magnitude magnetic resonance images" published in 2008. I assume that real-valued Gaussian noise is added to simulated data.
We apologize for the confusion. We added Gaussian noise to the real and imaginary parts of the simulated signals and then used the magnitude of this noisy complex signal for our experiments. We rephrased the sentence for more clarity.
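This procedure can be sketched as follows (a minimal illustration of magnitude reconstruction, not the authors' simulation code; for a unit signal, sigma = 0.02 corresponds to the SNR of 50 used in the experiments):

```python
import random
from math import hypot

def add_magnitude_noise(signal, sigma, rng=random):
    """Add zero-mean Gaussian noise (std sigma) independently to the
    real and imaginary parts of a real-valued signal, then return the
    magnitude; the result follows a Rician distribution."""
    return [hypot(s + rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            for s in signal]

random.seed(1)
noisy = add_magnitude_noise([1.0] * 10_000, 0.02)  # SNR = 50
```

At low SNR the magnitude operation biases the signal upward (the Rician noise floor), which is why simulating it explicitly, rather than adding Gaussian noise directly to the magnitude, matters.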
(4) L475, why thinning is not used in MCMC? In figure 3, the MCMC results are more biased than uGUIDE, is it related to no thinning in MCMC?
We followed the recommendations by Harms et al. (2018) for the MCMC experiments. They analysed the impact of thinning (among other parameters) on the estimated posterior distributions. Their findings indicate that thinning is unnecessary and inefficient, and they recommend using more samples instead. For further details, we refer the reviewer to their publication, along with the theoretical works they cite. We have now added this note in the Methods section.
(5) Did the authors try model-fitting methods with different initializations to get a distribution of the parameters? Like the paper "Degeneracy in model parameter estimation for multi‐compartmental diffusion in neuronal tissue". For the in vivo data, it is informative to see the model-fitting results.
No, we did not try model-fitting methods with different initializations because such methods provide only a partial description of the solution landscape, which can be interpreted as a partial posterior distribution. Although this approach can help to highlight the problem of degeneracy, it does not provide a complete description of all potential solutions. In contrast, MCMC estimates the full posterior distribution, offering a more accurate and precise characterization of degeneracies and uncertainties compared to model-fitting methods with varying initializations. Hence, we decided to use MCMC as benchmark. We have now added these considerations to the Discussion section.
Reviewer #2 (Public Review):
Summary:
The authors improve the work of Jallais et al. (2022) by including a novel module capable of automatically learning feature selection from different acquisition protocols inside a supervised learning framework. Combining the module above with an estimation framework for estimating the posterior distribution of model parameters, they obtain rich probabilistic information (uncertainty and degeneracy) on the parameters in a reasonable computation time.
The main contributions of the work are:
(1) The whole framework allows the user to avoid manually defining summary statistics, which may be slow and tedious and affect the quality of the results.
(2) The authors tested the proposal by tackling three different biophysical models for brain tissue and using data with characteristics commonly used by the diffusion-MR microstructure research community.
(3) The authors validated their method well against the state of the art.
The main weakness is:
(1) The methodology was tested only on scenarios with a signal-to-noise ratio (SNR) equal to 50. It would be interesting to show results with lower SNR and without noise, to demonstrate that the method can detect the models' inherent degeneracies and how degeneracy increases when strong noise is present. I suggest expanding the figure in Appendix 1 to include this information.
The authors showed the utility of their proposal by computing complex parameter descriptors automatically in an achievable time for three different and relevant biophysical models.
Importantly, this proposal promotes tackling, analysing, and considering the degenerate nature of the most widely used models in brain microstructure estimation.
We thank the reviewer for these positive remarks.
Concerning the main weakness highlighted by the reviewer: In our submitted work, we presented results both without noise and with a signal-to-noise ratio (SNR) equal to 50 (similar to the SNR in the experimental data analysed). Figure 5 shows exemplar posterior distributions obtained in a noise-free scenario, and Table 1 reports the number of degeneracies for each model on 10000 noise-free simulations. These results highlight that the presence of degeneracies is inherent to the model definition. Figures 3, 6 and 7 present results considering an SNR of 50. We acknowledge that results with lower SNR have not been included in the initial submission. To address this, we added a figure in the appendix illustrating the impact of noise on the posterior distributions. Specifically, Figure 1A of Appendix 2 shows posterior distributions estimated from signals generated using an exemplar set of model parameters with varying noise levels
(no noise, SNR=50 and SNR=25). Figure 1B presents uncertainty values obtained on 1000 simulations for each noise level. We observe that, as the SNR decreases, uncertainty increases. Noise in the signal contributes irreducible variance, so confidence in the estimates decreases as the noise level rises.
Reviewer #2 (Recommendations For The Authors):
Some suggestions:
Panel A of Figure 2 may deserve a better explanation in the Figure's caption.
We agree that the description of panel A of Figure 2 was succinct and have added more explanation to the figure's caption.
The caption of Figure 3 should mention that the panel's titles are the parameters of the used biophysical models.
We have added a note to the caption of Figure 3 that the names of the model parameters are indicated in the panel titles. We apologise for any confusion this may have created.
In equation (3), the authors should indicate the summation index.
We apologise for omitting the summation index in Equation 3. We have added it in the revised version.
In line 474, the authors should discuss if the systematic use of the maximum likelihood estimator as an initializer for the sampling does not bias the computed results.
Concerning the MCMC estimations, we followed the recommendations from Harms et al. (2018). They investigated the use of the maximum likelihood estimator (MLE) as a starting point and concluded that starting from the MLE allows the sampling to begin in the stationary distribution of the Markov chain, removing the need for burn-in. Additionally, they showed that initializing the sampling from the MLE has the advantage of removing salt-and-pepper-like noise from the resulting mean and standard deviation maps. We have now added this note in the Methods section.
-
-
www.biorxiv.org
-
eLife Assessment
The manuscript reports a valuable finding on dopamine receptor-mediated regulation of the firing of striatal cholinergic interneurons in both healthy and dyskinetic states, identifying Kv1 channels as playing a key role in the burst-dependent pause. The study presents solid experimental data and provides additional mechanistic insights into how burst activity in SCINs leads to a subsequent pause, highlighting the involvement of D1/D5 receptors. This work will be of interest to researchers studying the pathological mechanisms of Parkinson's disease.
-
Reviewer #1 (Public review):
Summary:
Tubert C. et al. investigated the role of dopamine D5 receptors (D5R) and their downstream potassium channel, Kv1, in the striatal cholinergic neuron pause response induced by thalamic excitatory input. Using slice electrophysiological analysis combined with pharmacological approaches, the authors tested which receptors and channels contribute to the cholinergic interneuron pause response in both control and dyskinetic mice (in the L-DOPA off state). They found that activation of Kv1 was necessary for the pause response, while activation of D5R blocked the pause response in control mice. Furthermore, in the L-DOPA off-state of dyskinetic mice, the absence of the pause response was restored by the application of clozapine. The authors claimed that (1) the D5R-Kv1 pathway contributes to the cholinergic interneuron pause response in a phasic dopamine concentration-dependent manner, and (2) clozapine inhibits D5R in the L-DOPA off state, which restores the pause response.
Strengths:
The electrophysiological and pharmacological approaches used in this study are powerful tools for testing channel properties and functions. The authors' group has thoroughly established these methodologies and analysis pipelines. Indeed, the data presented are robust and reliable.
Weaknesses:
Although the paper has strengths in its methodological approaches, there is a significant gap between the presented data and the authors' claims.
There was no direct demonstration that the D5R-Kv1 pathway is dominant when dopamine levels are high. The term 'high' is ambiguous, and it raises the question of whether the authors believe that dopamine levels do not reach the threshold required to activate D5R under physiological conditions.
Furthermore, the data presented in Figure 6 are confusing. If clozapine inhibits active D5R and restores the pause response, the D5R antagonist SCH23390 should have the same effect. The data suggest that clozapine-induced restoration of the pause response might be mediated by other receptors, rather than D5R alone.
-
Reviewer #2 (Public review):
Summary:
This manuscript by Tubert et al presents the role of the D5 receptor in modulating the striatal cholinergic interneuron (CIN) pause response through D5R-cAMP-Kv1 inhibitory signaling. Their model elucidates the on / off switch of CIN pause, likely due to the different DA affinity between D2R and D5R. This machinery may be crucial in modulating synaptic plasticity in cortical-striatal circuits during motor learning and execution. Furthermore, the study bridges their previous finding of CIN hyperexcitability (Paz et al., Movement Disorder 2022) with the loss of pause response in LID mice.
Strengths:
The study had solid findings, and the writing was logically structured and easy to follow. The experiments are well-designed, and they properly combined electrophysiology recording, optogenetics, and pharmacological treatment to dissect/rule out most, if not all, possible mechanisms in their model.
Weaknesses:
The manuscript is overall satisfactory, with only some minor concerns that need to be addressed. Manipulating intracellular cAMP (e.g., using pharmacological analogs or inhibitors) could provide additional evidence to strengthen the conclusion.
-
Reviewer #3 (Public review):
Summary:
Tubert et al. investigate the mechanisms underlying the pause response in striatal cholinergic interneurons (SCINs). The authors demonstrate that optogenetic activation of thalamic axons in the striatum induces burst activity in SCINs, followed by a brief pause in firing. They show that the duration of this pause correlates with the number of elicited action potentials, suggesting a burst-dependent pause mechanism. The authors demonstrated that this burst-dependent pause relied on Kv1 channels. The pause is blocked by SKF81297 and partially by sulpiride and mecamylamine, implicating D1/D5 receptor involvement. The study also shows that ZD7288 does not reduce the duration of the pause and that lesioning dopamine neurons abolishes this response, which can be restored by clozapine.
Weaknesses:
While this study presents an interesting mechanism for SCIN pausing after burst activity, there are several major concerns that should be addressed:
(1) Scope of the Mechanism:
It is important to clarify that the proposed mechanism may apply specifically to the pause in SCINs following burst activity. The manuscript does not provide clear evidence that this mechanism contributes to the pause response observed in behaving animals. While the thalamus is crucial for SCIN pauses in behavioral contexts, the exact mechanism remains unclear. Activating thalamic input triggers burst activity in SCINs, leading to a subsequent pause, but this mechanism may not be generalizable across different scenarios. For instance, approximately half of TANs do not exhibit initial excitation but still pause during behavior, suggesting that the burst-dependent pause mechanism is unlikely to explain this phenomenon. Furthermore, in behaving animals, the duration of the pause seems consistent, whereas the proposed mechanism suggests it depends on the prior burst, which is not aligned with in vivo observations. Additionally, many in vivo recordings show that the pause response is a reduction in firing rate, not complete silence, which the mechanism described here does not explain. Please address these points in the manuscript.
(2) Terminology:
The use of "pause response" throughout the manuscript is misleading. The pause induced by thalamic input in brain slices is distinct from the pause observed in behaving animals. Given the lack of a clear link between these two phenomena in the manuscript, it is essential to use more precise terminology throughout, including in the title, bullet points, and body of the manuscript.
(3) Kv1 Blocker Specificity:
It is unclear how the authors ruled out the possibility that the Kv1 blocker did not act directly on SCINs. Could there be an indirect effect contributing to the burst-dependent pause? Clarification on this point would strengthen the interpretation of the results.
(4) Role of D1 Receptors:
While it is well-established that activating thalamic input to SCINs triggers dopamine release, contributing to SCIN pausing (as shown in Figure 3), it would be helpful to assess the extent to which D1 receptors contribute to this burst-dependent pause. This could be achieved by applying the D1 agonist SKF81297 after blocking nAChRs and D2 receptors.
(5) Clozapine's Mechanism of Action:
The restoration of the burst-dependent pause by clozapine following dopamine neuron lesioning is interesting, but clozapine acts on multiple receptors beyond D1 and D5. Although it may be challenging to find a specific D5 antagonist or inverse agonist, it would be more accurate to state that clozapine restores the burst-dependent pause without conclusively attributing this effect to D5 receptors.
-
Author response:
Public Reviews:
Reviewer #1 (Public review):
Summary:
Tubert C. et al. investigated the role of dopamine D5 receptors (D5R) and their downstream potassium channel, Kv1, in the striatal cholinergic neuron pause response induced by thalamic excitatory input. Using slice electrophysiological analysis combined with pharmacological approaches, the authors tested which receptors and channels contribute to the cholinergic interneuron pause response in both control and dyskinetic mice (in the L-DOPA off state). They found that activation of Kv1 was necessary for the pause response, while activation of D5R blocked the pause response in control mice. Furthermore, in the L-DOPA off-state of dyskinetic mice, the absence of the pause response was restored by the application of clozapine. The authors claimed that (1) the D5R-Kv1 pathway contributes to the cholinergic interneuron pause response in a phasic dopamine concentration-dependent manner, and (2) clozapine inhibits D5R in the L-DOPA off state, which restores the pause response.
Strengths:
The electrophysiological and pharmacological approaches used in this study are powerful tools for testing channel properties and functions. The authors' group has thoroughly established these methodologies and analysis pipelines. Indeed, the data presented are robust and reliable.
Thank you for your comments.
Weaknesses:
Although the paper has strengths in its methodological approaches, there is a significant gap between the presented data and the authors' claims.
There was no direct demonstration that the D5R-Kv1 pathway is dominant when dopamine levels are high. The term 'high' is ambiguous, and it raises the question of whether the authors believe that dopamine levels do not reach the threshold required to activate D5R under physiological conditions.
We acknowledge that further work is necessary to clarify the role of the D5R in physiological conditions. While we have not found effects of the D1/D5 receptor antagonist SCH23390 on the pause response in control animals (Fig. 3), it is still possible that dopamine levels reach the threshold to stimulate D5R when burst firing of dopaminergic neurons contributes to dopamine release. We believe the pause response depends, among other factors, on the relative stimulation levels of SCIN D2 and D5 receptors, which is likely not an all-or-nothing phenomenon. To reduce ambiguity, we will change the labels referring to dopamine levels in Figure 6F.
Furthermore, the data presented in Figure 6 are confusing. If clozapine inhibits active D5R and restores the pause response, the D5R antagonist SCH23390 should have the same effect. The data suggest that clozapine-induced restoration of the pause response might be mediated by other receptors, rather than D5R alone.
Thank you for letting us clarify this issue. Please note that the levels of endogenous dopamine 24 h after the last L-DOPA challenge in severely parkinsonian mice are expected to be very low. In the absence of an agonist, a pure D1/D5 antagonist would not exert an effect, as demonstrated with SCH23390 alone, which did not have an impact on the SCIN response to thalamic stimulation (Fig. 6). While clozapine can also act as a D1/D5 receptor antagonist, its D1/D5 effects in the absence of an agonist are attributed to its inverse agonist properties (PMID: 24931197). Notably, SCH23390 prevented the effect of clozapine, allowing us to conclude that ligand-independent D1/D5 receptor-mediated mechanisms are involved in suppressing the pause response in dyskinetic mice. We will make this point clearer in the Discussion.
Reviewer #2 (Public review):
Summary:
This manuscript by Tubert et al presents the role of the D5 receptor in modulating the striatal cholinergic interneuron (CIN) pause response through D5R-cAMP-Kv1 inhibitory signaling. Their model elucidates the on / off switch of CIN pause, likely due to the different DA affinity between D2R and D5R. This machinery may be crucial in modulating synaptic plasticity in cortical-striatal circuits during motor learning and execution. Furthermore, the study bridges their previous finding of CIN hyperexcitability (Paz et al., Movement Disorder 2022) with the loss of pause response in LID mice.
Strengths:
The study had solid findings, and the writing was logically structured and easy to follow. The experiments are well-designed, and they properly combined electrophysiology recording, optogenetics, and pharmacological treatment to dissect/rule out most, if not all, possible mechanisms in their model.
Thank you for your comments.
Weaknesses:
The manuscript is overall satisfactory, with only some minor concerns that need to be addressed. Manipulating intracellular cAMP (e.g., using pharmacological analogs or inhibitors) could provide additional evidence to strengthen the conclusion.
Thank you for the suggestion. While we acknowledge that we are not providing direct evidence of the role of cAMP, we chose not to conduct these experiments because cAMP levels influence several intrinsic and synaptic currents beyond Kv1, significantly affecting membrane oscillations and spontaneous firing, as shown in Paz et al. 2021. However, we are modifying the manuscript so there is no misinterpretation about our findings in the current work.
Reviewer #3 (Public review):
Summary:
Tubert et al. investigate the mechanisms underlying the pause response in striatal cholinergic interneurons (SCINs). The authors demonstrate that optogenetic activation of thalamic axons in the striatum induces burst activity in SCINs, followed by a brief pause in firing. They show that the duration of this pause correlates with the number of elicited action potentials, suggesting a burst-dependent pause mechanism. The authors demonstrated that this burst-dependent pause relied on Kv1 channels. The pause is blocked by SKF81297 and partially by sulpiride and mecamylamine, implicating D1/D5 receptor involvement. The study also shows that ZD7288 does not reduce the duration of the pause and that lesioning dopamine neurons abolishes this response, which can be restored by clozapine.
Weaknesses:
While this study presents an interesting mechanism for SCIN pausing after burst activity, there are several major concerns that should be addressed:
(1) Scope of the Mechanism:
It is important to clarify that the proposed mechanism may apply specifically to the pause in SCINs following burst activity. The manuscript does not provide clear evidence that this mechanism contributes to the pause response observed in behaving animals. While the thalamus is crucial for SCIN pauses in behavioral contexts, the exact mechanism remains unclear. Activating thalamic input triggers burst activity in SCINs, leading to a subsequent pause, but this mechanism may not be generalizable across different scenarios. For instance, approximately half of TANs do not exhibit initial excitation but still pause during behavior, suggesting that the burst-dependent pause mechanism is unlikely to explain this phenomenon. Furthermore, in behaving animals, the duration of the pause seems consistent, whereas the proposed mechanism suggests it depends on the prior burst, which is not aligned with in vivo observations. Additionally, many in vivo recordings show that the pause response is a reduction in firing rate, not complete silence, which the mechanism described here does not explain. Please address these points in the manuscript.
Thank you for your valuable feedback. While the absence of an initial burst in some TANs in vivo may suggest the involvement of alternative or additional mechanisms, it does not exclude the participation of Kv1 currents. We have seen that subthreshold depolarizations induced by thalamic inputs are sufficient to produce an afterhyperpolarization (AHP) mediated by Kv1 channels (see Tubert et al., 2016, PMID: 27568555). Although such subthreshold depolarizations are not captured in current recordings from behaving animals, intracellular in vivo recordings have demonstrated an intrinsically generated AHP after subthreshold depolarization of SCINs caused by stimulation of excitatory afferents (PMID: 15525771). Additionally, when pause duration is plotted against the number of spikes elicited by thalamic input (Fig. 1G), we found that one elicited spike is followed by an interspike interval 1.4 times longer than the average spontaneous interspike interval. We acknowledge the potential involvement of additional factors, including a decrease of excitatory thalamic input coinciding with the pause, followed by a second volley of thalamic inputs (Fig. 1G-J; after observations by Matsumoto et al., 2001, PMID: 11160526), as well as the timing of elicited spikes relative to ongoing spontaneous firing (Fig. 1D-E). Dopaminergic modulation (Fig. 3) and differences among striatal regions (PMID: 24559678) may also contribute to the complexity of these dynamics.
(2) Terminology:
The use of "pause response" throughout the manuscript is misleading. The pause induced by thalamic input in brain slices is distinct from the pause observed in behaving animals. Given the lack of a clear link between these two phenomena in the manuscript, it is essential to use more precise terminology throughout, including in the title, bullet points, and body of the manuscript.
While we acknowledge that our study does not include in vivo evidence, we believe ex vivo preparations have been instrumental in elucidating the mechanisms underlying the responses observed in vivo. We also follow previous ex vivo studies in using consistent terminology. However, we will clarify the ex vivo nature of our work in the abstract and bullet points for greater transparency.
(3) Kv1 Blocker Specificity:
It is unclear how the authors ruled out the possibility that the Kv1 blocker did not act directly on SCINs. Could there be an indirect effect contributing to the burst-dependent pause? Clarification on this point would strengthen the interpretation of the results.
Thank you for letting us clarify this issue. In our previous work (Tubert et al., 2016) we showed that the Kv1.3 and Kv1.1 subunits are selectively expressed in SCINs throughout the striatum. Moreover, GABAergic transmission is blocked in our preparations. We are adding a sentence to make this clearer in the manuscript.
(4) Role of D1 Receptors:
While it is well-established that activating thalamic input to SCINs triggers dopamine release, contributing to SCIN pausing (as shown in Figure 3), it would be helpful to assess the extent to which D1 receptors contribute to this burst-dependent pause. This could be achieved by applying the D1 agonist SKF81297 after blocking nAChRs and D2 receptors.
Thank you for letting us clarify this point. We show that blocking D2R or nAChR reduces the pause only for strong thalamic stimulation eliciting 4 SCIN spikes (Figure 3G), whereas the D1/D5 agonist SKF81297 is able to reduce the pause induced by weaker stimulation as well (Figure 3C). This may indicate that nAChR-mediated dopamine release induced by thalamic-induced bursts more efficiently activates D2R compared to D5R. We speculate that, in this context, lack of D5R activation may be necessary to keep normal levels of Kv1 currents necessary for SCIN pauses.
(5) Clozapine's Mechanism of Action:
The restoration of the burst-dependent pause by clozapine following dopamine neuron lesioning is interesting, but clozapine acts on multiple receptors beyond D1 and D5. Although it may be challenging to find a specific D5 antagonist or inverse agonist, it would be more accurate to state that clozapine restores the burst-dependent pause without conclusively attributing this effect to D5 receptors.
Thank you for your insightful observation. We acknowledge the difficulty of targeting dopamine receptors pharmacologically due to the lack of highly selective D1/D5 inverse agonists. We used SCH23390, which is a highly selective D1/D5 receptor antagonist devoid of inverse agonist effects, to block clozapine’s ability to restore SCIN pauses (Figure 6C). This indicates that the restoration of SCIN pauses by clozapine depends on D1/D5 receptors. Furthermore, in a previous study, we demonstrated that clozapine’s effect on restoring SCIN excitability in dyskinetic mice (a phenomenon mediated by Kv1 channels in SCIN; Tubert et al., 2016) was not due to its action on serotonin receptors (Paz, Stahl et al., 2022). While our data do not rule out the potential contribution of other receptors, such as muscarinic acetylcholine receptors, we believe they strongly support the role of D1/D5 receptors. To reflect this, we will add a statement discussing the potential contribution of receptors beyond D1/D5.
-
-
www.biorxiv.org
-
Author response:
We thank the editor and reviewers for their feedback. We believe we can address the substantive criticisms in full: first, by providing a more explicit theoretical basis for the method. Second, we believe criticisms based on assumptions about phase consistency across time points are not well founded and can be answered. Finally, in response to some reviewer comments, we will improve the surrogate testing of the method.
We will enhance the theoretical justification for the application of higher-order singular value decomposition (SVD) to the problem of irregular sampling of the cortical area. The initial version of the manuscript was written to allow informal access to these ideas (where possible), but the reviewers find a more rigorous account appropriate. We will add an introduction to modern developments in the use of functional SVD in geophysics, meteorology and oceanography (e.g., empirical orthogonal functions), quantitative fluid dynamics (e.g., dynamic mode decomposition), and computational chemistry. Recently, SVD has been used in neuroscience studies (e.g., cortical eigenmodes). To our knowledge, this is the first time higher-order SVD has been applied to a neuroscience problem. We use it here to solve an otherwise (apparently) intractable problem, i.e., how to estimate the spatial frequency (SF) spectrum on a sparse and highly irregular array with broadband signals.
We will clarify the methodological strategy in more formal terms in the next version of the paper. Essentially, SVD allows a change of basis that greatly simplifies quantitative analysis. Here it allows escape from estimating the SF across millions of data points (triplets of contacts, at each sample), each of which contains multiple overlapping signals plus noise (noise here defined in the context of SF estimation) and is inter-correlated across a variety of known and unknown observational dimensions. Rather than simply averaging over samples, which would wash out much of the real signal, SVD allows the signals to be decomposed in a lossless manner (up to the choice of the number of eigenvectors at which the SVD is truncated). The higher-order SVD we have implemented reduces the size of the problem to allow quantification of SF over hundreds of components, each of which is guaranteed certain desirable properties: they explain known (and largest) amounts of variance of the original data and are orthonormal. This last property allows us to proceed as if the observations are independent. SF estimates are made within this new coordinate system.
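As a minimal sketch of this change-of-basis idea, consider an ordinary matrix SVD on toy data (the actual pipeline uses a higher-order SVD over a tensor of contact triplets, frequencies, and samples; the shapes and values here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a (contacts x time) data matrix; the real analysis
# operates on a higher-order tensor, not a plain matrix like this.
X = rng.normal(size=(50, 2000))

# SVD yields an orthonormal basis ordered by explained variance
# (singular values s are sorted in descending order).
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Truncating to the leading k components keeps the largest known share
# of the variance -- lossless up to this choice of k.
k = 10
Xk = (U[:, :k] * s[:k]) @ Vt[:k]

# The retained spatial components are orthonormal, which is what
# licenses treating them as independent observations downstream.
gram = U[:, :k].T @ U[:, :k]
print(np.allclose(gram, np.eye(k)))  # True
```

The orthonormality check at the end is the property referred to above: estimates made per component do not double-count shared variance.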
We will also more concretely formalise the relation between Fourier analysis and previous observations of eigenvectors of phase that are smooth gradients.
We will very briefly review Fourier methods designed to deal with non-uniform sampling. The problems these methods are designed for fall into the non-uniform part of the spectrum running from uniform through non-uniform, irregular, and highly irregular sampling to noise. They are well suited to, for example, interpolating between EEG electrodes to produce a uniform array for application of the fast Fourier transform (Alamia et al., 2023). However, a survey across a range of applied-maths fields suggests that no method exists for the degree of irregular sampling found in the sEEG arrays at issue here. In particular, the sparseness of the contact coverage presents an insurmountable hurdle to standard methods. While there exist methods for sparse samples (e.g., Margrave & Fergusen, 1999; Ying, 2009), these require well-defined oscillatory behavior, e.g., for seismographic analysis. Given the problems of highly irregular sampling, sparseness of sampling, and broadband, nonstationary signals, we have attempted a solution via the novel methods introduced in the current manuscript. We were able to leverage previous observations regarding the relation between eigenvectors of cortical phase and Fourier analysis, as we outline in the manuscript.
We will extend the current 1-dimensional surrogate data to better demonstrate that the method does indeed correctly detect the ordinal relations in power on different parts of the SF spectrum. We will include the effects of a global reference signal. Simulations of cortical activity are an expensive way to achieve this goal. While the first author has published in this area, such simulations are partly a function of the assumptions put into them (i.e., spatial damping, boundary conditions, parameterization of connection fields). We will therefore use surrogate signals derived from real cortical activity to complete this task.
Some more specific issues raised:

(1) Application of the method to general neuroscience problems:

The purpose of the manuscript was to estimate the SF spectrum of phase in the cortex, in the range where it was previously not possible. The purpose was not specifically to introduce a new method of analysis that might be immediately applicable to a wide range of available data-sets. Indeed, the specifics of the method are designed to overcome an otherwise intractable disadvantage of sEEG (irregular spatial sampling) in order to take advantage of its good coverage (compared to ECoG) and low volume conduction compared to extra-cranial methods. On the other hand, the developing field of functional SVD would be of interest to neuroscientists, as a set of methods to solve difficult problems, and therefore of general interest. We will make these points explicit in the next version of the manuscript. In order to make the method more accessible, we will also publish code for the key routines (construction of triplets of contacts, Morlet wavelets, calculation of higher-order SVD, calculation of SF).
(2) Novelty:

We agree with the third reviewer: if our results can convince, then the study will have an impact on the field. While work has been done on phase interactions at a variety of scales, such as from the labs of Fries, Singer, Engels, Nauhaus, Logothetis and others, it does not quantify the relative power of the different spatial scales. Additionally, the research of Freeman et al. has quantified only portions of the SF spectrum of the cortex, or used EEG to estimate low SFs. We would appreciate any pointers to the specific literature the current research contributes to, namely, the SF spectrum of activity in the cortex.
(3) Further analyses:

The main results of the research are relatively simple: monotonically falling SF-power with SF; this effect occurs across the range of temporal frequencies. We provide each individual participant's curves in the supplementary figures. By visual inspection, it can be seen that the main result of the example participant is uniformly recapitulated. One is rarely in this position in neuroscience research, and we will make this explicit in the text.
The research stands or falls by the adequacy of the method to estimate the SF curves. For this reason most statistical analyses and figures were reserved for ruling out confounds and exploring the limits of the methods. However, for the sake of completeness, we will now include the SF vs. SF-power correlations and significance in the next version, for each participant at each frequency.
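Such a monotonic SF vs. SF-power relation can be quantified with a rank correlation. In this hedged sketch the SF and SF-power values are invented for illustration and are not taken from the data:

```python
import numpy as np

# Hypothetical SF (cycles/cm) and SF-power values for one participant at
# one temporal frequency; the numbers are invented for illustration only.
sf = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
power = np.array([12.0, 8.5, 5.1, 3.0, 1.9, 1.2])

def spearman_rho(x, y):
    """Spearman rank correlation (assumes no ties)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(x), rank(y))[0, 1]

rho = spearman_rho(sf, power)
print(rho)  # -1.0 here: power falls monotonically with SF
```

A rank-based statistic is a natural choice for a "monotonically falling" claim, since it tests ordinal relations without assuming any particular functional form for the SF-power curve.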
Since the main result was uniform across participants, and since we did not expect that there was anything of special significance about the delayed free recall task, we conclude that more participants or more tasks would not add to the result. As we point out in the manuscript, each participant is a test of the main hypothesis. The result is also consistent with previous attempts to quantify the SF spectrum, using a range of different tasks and measurement modalities (Barrie et al., 1996; Ramon & Holmes 2015; Alexander et al., 2019; Alexander et al., 2016; Freeman et al., 2003; Freeman et al. 2000). The search for those rare sEEG participants with larger coverage than the maximum here is a matter of interest to us, but will be left for a future study.
(4) Sampling of phase and its meaningfulness:<br /> The wavelet methods used in the present study have excellent temporal resolution but poor frequency resolution. We additionally oversample the frequency range to produce visually informative plots (usually in the context of time by frequency plots, see Alexander et al., 2006; 2013; 2019). But it is not correct that the methods for estimating phase assume a narrow frequency band. Rather, the poor frequency resolution of short time-series Morlet wavelets means the methods are robust to the exact shape of the waveforms; the signal need only be approximately sinusoidal, i.e., rise and fall. The reason for using methods that have excellent resolution in the time-domain is that previous work (Alexander et al., 2006; Patten et al. 2012) has shown that traveling wave events can last only one or two cycles, i.e., are not oscillatory in the strict sense but are non-stationary events. So while short time-window Morlet wavelets have a disadvantage in terms of frequency resolution, this means they precisely do not have the problem of assuming narrow-band sinusoidal waveforms in the signal. We strongly disagree that our analysis requires very strong assumptions about oscillations (see last point in this section).
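The short-time-window phase estimation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code: the three-cycle width, 10 Hz centre frequency, and sampling rate are assumptions chosen to show why a short wavelet tolerates non-stationary, merely rise-and-fall signals:

```python
import numpy as np

def morlet_phase(signal, fs, freq, n_cycles=3.0):
    """Estimate instantaneous phase at `freq` by convolving with a
    short Morlet wavelet: few cycles gives good temporal but poor
    frequency resolution, as described in the text."""
    sigma_t = n_cycles / (2 * np.pi * freq)           # Gaussian width, seconds
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)  # wavelet support
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.abs(wavelet).sum()                  # normalise amplitude
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.angle(analytic)                         # phase in radians

# Toy usage: a non-stationary 10 Hz burst lasting only a few cycles,
# i.e. not an oscillation in the strict sense.
fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) * (t > 0.5) * (t < 0.9)
phase = morlet_phase(sig, fs, freq=10.0)
```

Because the wavelet spans only a few cycles, the phase estimate tracks the burst without requiring the signal to be narrow-band or sinusoidal over long stretches.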
Our hypothesis was about the SF spectrum of the phase. When the measurement of phase is noise-like at some location, frequency and time, then this noise will not substantially contribute to the low SF parts of the spectrum compared to high SFs. Our hypothesis also concerned whether it was reasonable to interpret the existing literature on low SF waves in terms of cortically localised waves or small numbers of localised oscillators. This required us to show that low SFs dominate, and therefore that this signal must dominate any extra-cranial measurements of apparent low SF traveling waves. It does not require us to demonstrate that the various parts of the SF spectrum are meaningful in the sense of functionally significant. This has been shown elsewhere (see references to traveling waves in manuscript, to which we will also add a brief survey of research on phase dynamics).
The calculation of phase can be bypassed altogether to achieve the initial effect described in the introduction to the methods (Fourier-like basis functions from SVD). The observed eigenvectors, increasing in spatial frequency with decreasing eigenvalues, can be reproduced by applying Gaussian windows to the raw time-series (D. Alexander, unpublished observation). For example, undertaking an SVD on the raw time-series windowed over 100ms reproduces much the same spatial eigenvectors (except that they come in pairs, recapitulating the real and imaginary parts of the signal). That is, these spatial eigenvectors closely match those obtained by first estimating the phase at 10Hz using Morlet wavelets and then applying the SVD to the unit-length complex phase values.
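The SVD-on-phases step referred to above can be mimicked in a toy setting. This is a hypothetical sketch with assumed array geometry, frequencies, and noise (for brevity, phase comes from the analytic signal rather than a Morlet wavelet); it shows only that an SVD of unit-length complex phases returns spatial modes ordered by explained variance, with a low-spatial-frequency travelling component loading onto the leading mode:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)

# Toy data: 50 contacts x 2000 samples containing one low-spatial-
# frequency travelling component plus noise (assumed geometry).
n_ch, n_t, fs, f0 = 50, 2000, 500.0, 10.0
x = np.arange(n_ch) / n_ch                      # contact positions
t = np.arange(n_t) / fs
lfp = np.sin(2 * np.pi * (f0 * t[None, :] - 1.0 * x[:, None]))
lfp += 0.5 * rng.standard_normal((n_ch, n_t))

# Estimate phase, then take the SVD of the unit-length complex
# phases: the columns of U are spatial modes ordered by singular
# value, i.e. by the variance each mode explains.
phases = np.exp(1j * np.angle(hilbert(lfp, axis=1)))
U, s, Vh = np.linalg.svd(phases, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
```

The same machinery applies whether the complex matrix comes from wavelet phases or from Gaussian-windowed raw time-series, which is the equivalence the paragraph describes.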
(5) Other issues to be addressed and improved:<br /> - clarity on which experiments were analyzed (starting in the abstract)<br /> - discussion of frequencies above 60Hz, with caution in interpretation due to spike-waveform artefact or as a potential index of multi-unit spiking<br /> - discussion of whether the ad hoc, quasi-random sampling achieved by sEEG contacts somehow inflates the low SF estimates
References (new)<br /> Patten TM, Rennie CJ, Robinson PA, Gong P (2012) Human Cortical Traveling Waves: Dynamical Properties and Correlations with Responses. PLoS ONE 7(6): e38392. https://doi.org/10.1371/journal.pone.0038392<br /> Margrave GF, Ferguson RJ (1999) Wavefield extrapolation by nonstationary phase shift. GEOPHYSICS 64:4, 1067-1078<br /> Ying Y (2009) Sparse Fourier Transform via Butterfly Algorithm. SIAM Journal on Scientific Computing 31:3, 1678-1694
-
eLife Assessment
This study introduces a novel method for estimating spatial spectra from irregularly sampled intracranial EEG data, revealing cortical activity across all spatial frequencies, which supports the global and integrated nature of cortical dynamics. The study showcases important technical innovations and rigorous analyses, including tests to rule out potential confounds; however, the lack of comprehensive theoretical justification and assumptions about phase consistency across time points renders the strength of evidence incomplete. The dominance of low spatial frequencies in cortical phase dynamics continues to be of importance, and further elaboration on the interpretation and justification of the results would strengthen the link between evidence and conclusions.
-
Reviewer #1 (Public review):
Summary:
The paper uses rigorous methods to determine phase dynamics from human cortical stereotactic EEGs. It finds that the power of the phase is highest at the lowest spatial frequencies.
Strengths:
Rigorous and advanced analysis methods.
Weaknesses:
The novelty and significance of the results are difficult to appreciate from the current version of the paper.
(1) It is very difficult to understand which experiments were analysed, and from where they were taken, reading the abstract. This is a problem both for clarity with regard to the reader and for attribution of merit to the people who collected the data.
(2) The finding that the power is highest at the lowest spatial frequencies seems in tune with a lot of previous studies. The novelty here is unclear and should be elaborated better. Reading the paper, I could not understand what advantage I would gain from using such a technique on my data. I think that this should be clear to every reader.
(3) It seems problematic to trust a strong conclusion that they show low spatial frequency dynamics of up to 15-20 cm given the sparsity of the arrays. The authors seem to agree with this concern in the last paragraph of page 12. They also say that it would be informative to repeat the analyses presented here after the selection of more participants from all available datasets. This raises the question of why this was not done. It should be done if possible.
(4) Some of the analyses seem not to exploit in full the power of the dataset. Usually, a figure starts with an example participant but then the analysis of the entire dataset is not as exhaustive. For example, in Figure 6 we have a first row with single participants and then an average over participants. One would expect quantifications of results from each participant (i.e. from the top rows of Figure 6), extracting some relevant features of the results from each participant and then showing the distribution of these features across participants. This would complement the subject-average analysis.
(5) The function of brain phase dynamics at different frequencies and scales has been examined in previous papers at frequencies and scales relevant to what the authors treat. The authors may want to be more extensive in citing relevant studies and elaborating on the implications for them. Some examples below:<br /> Womelsdorf T, et al. Science. 2007<br /> Besserve M et al. PLoS Biology 2015<br /> Nauhaus I et al. Nat Neurosci 2009
-
Reviewer #2 (Public review):
Summary:
In this paper, the authors analyze the organization of phases across different spatial scales. The authors analyze intracranial, stereo-electroencephalogram (sEEG) recordings from human clinical patients. The authors estimate the phase at each sEEG electrode at discrete temporal frequencies. They then use higher-order SVD (HOSVD) to estimate the spatial frequency spectrum of the organization of phase in a data-driven manner. Based on this analysis, the authors conclude that most of the variance explained is due to spatially extended organizations of phase, suggesting that the best description of brain activity in space and time is in fact a globally organized process. The authors' analysis is also able to rule out several important potential confounds for the analysis of spatiotemporal dynamics in EEG.
Strengths:
There are many strengths in the manuscript, including the authors' use of SVD to address the limitation of irregular sampling and their analyses ruling out potential confounds for these signals in the EEG.
Weaknesses:
Some important weaknesses are not properly acknowledged, and some conclusions are over-interpreted given the evidence presented.
The central weakness is that the analyses estimate phase from all signal time points using wavelets with a narrow frequency band (see Methods - "Numerical methods"). This step makes the assumption that phase at a particular frequency band is meaningful at all times; however, this is not necessarily the case. Take, for example, the analysis in Figure 3, which focuses on a temporal frequency of 9.2 Hz. If we compare the corresponding wavelet to the raw sEEG signal across multiple points in time, this will look like an amplitude-modulated 9.2 Hz sinusoid to which the raw sEEG signal will not correspond at all. While the authors may argue that analyzing the spatial organization of phase across many temporal frequencies will provide insight into the system, there is no guarantee that the spatial organization of phase at many individual temporal frequencies converges to the correct description of the full sEEG signal. This is a critical point for the analysis because while this analysis of the spatial organization of phase could provide some interesting results, this analysis also requires a very strong assumption about oscillations, specifically that the phase at a particular frequency (e.g. 9.2 Hz in Figure 3, or 8.0 Hz in Figure 5) is meaningful at all points in time. If this is not true, then the foundation of the analysis may not be precisely clear. This has an impact on the results presented here, specifically where the authors assert that "phase measured at a single contact in the grey matter is more strongly a function of global phase organization than local". Finally, the phase examples given in Supplementary Figure 5 are not strongly convincing to support this point.
Another weakness is in the discussion on spatial scale. In the analyses, the authors separate contributions at (approximately) > 15 cm as macroscopic and < 15 cm as mesoscopic. The problem with the "macroscopic" here is that 15 cm is essentially on the scale of the whole brain, without accounting for the fact that organization in sub-systems may occur. For example, if a specific set of cortical regions, spanning over a 10 cm range, were to exhibit a consistent organization of phase at a particular temporal frequency (required by the analysis technique, as noted above), it is not clear why that would not be considered a "macroscopic" organization of phase, since it comprises multiple areas of the brain acting in coordination. Further, while this point could be considered as mostly semantic in nature, there is also an important technical consideration here: would spatial phase organizations occurring in varying subsets of electrodes and with somewhat variable temporal frequency reliably be detected? If this is not the case, then could it be possible that the lowest spatial frequencies are detected more often simply because it would be difficult to detect variable organizations in subsets of electrodes?
Another weakness is disregarding the potential spike waveform artifact in the sEEG signal in the context of these analyses. Specifically, Zanos et al. (J Neurophysiol, 2011) showed that spike waveform artifacts can contaminate electrode recordings down to approximately 60 Hz. This point is important to consider in the context of the manuscript's results on spatial organization at temporal frequencies up to 100 Hz. Because the spike waveform artifact might affect signal phase at frequencies above 60 Hz, caution may be important in interpreting this point as evidence that there is significant phase organization across the cortex at these temporal frequencies.
A last point is that, even though the present results provide some insight into the organization of phase across the human brain, the analyses do not directly link this to spiking activity. The predictive power that these spatial organizations of phase could provide for spiking activity - even if the analyses were not affected by the distortion due to the narrow-frequency assumption - remains unknown. This is important because relating back to spiking activity is the key factor in assessing whether these specific analyses of phase can provide insight into neural circuit dynamics. This type of analysis may be possible to do with the sEEG recordings, as well, by analyzing high-gamma power (Ray and Maunsell, PLoS Biology, 2011), which can provide an index of multi-unit spiking activity around the electrodes.
-
Reviewer #3 (Public review):
Summary:
The authors propose a method for estimation of the spatial spectra of cortical activity from irregularly sampled data and apply it to publicly available intracranial EEG data from human patients during a delayed free recall task. The authors' main findings are that the spatial spectra of cortical activity peak at low spatial frequencies and decrease with increasing spatial frequency. This is observed over a broad range of temporal frequencies (2-100 Hz).
Strengths:
A strength of the study is the type of data that is used. As pointed out by the authors, spatial spectra of cortical activity are difficult to estimate from non-invasive measurements (EEG and MEG) due to signal mixing and from commonly used intracranial measurements (i.e. electrocorticography or Utah arrays) due to their limited spatial extent. In contrast, iEEG measurements are easier to interpret than EEG/MEG measurements and typically have larger spatial coverage than Utah arrays. However, iEEG is irregularly sampled within the three-dimensional brain volume and this poses a methodological problem that the proposed method aims to address.
Weaknesses:
The used method for estimating spatial spectra from irregularly sampled data is weak in several respects.
First, the proposed method is ad hoc, whereas there exist well-developed (Fourier-based) methods for this. The authors don't clarify why no standard methods are used, nor do they carry out a comparative evaluation.
Second, the proposed method lacks a theoretical foundation and hinges on a qualitative resemblance between Fourier analysis and singular value decomposition.
Third, the proposed method is not thoroughly tested using simulated data. Hence it remains unclear how accurate the estimated power spectra actually are.
In addition, there are a number of technical issues and limitations that need to be addressed or clarified (see recommendations to the authors).
My assessment is that the conclusions are not completely supported by the analyses. What would convince me, is if the method is tested on simulated cortical activity in a more realistic set-up. I do believe, however, that if the authors can convincingly show that the estimated spatial spectra are accurate, the study will have an impact on the field. Regarding the methodology, I don't think that it will become a standard method in the field due to its ad hoc nature and well-developed alternatives.
-
eLife Assessment
The authors show MRI relaxation time changes that are claimed to originate from cell membrane potential changes. This would be very important if true because it may provide a mechanism whereby membrane potential changes could be inferred noninvasively. However, the membrane potential manipulations applied here will induce cell swelling, and cell swelling has been previously shown to affect relaxation time. Therefore, the claim that the relaxation time changes observed in this manuscript are due to cell membrane potential changes is inadequately supported.
-
Reviewer #1 (Public review):
Summary:
This paper examines changes in relaxation time (T1 and T2) and magnetization transfer parameters that occur in a model system and in vivo when cells or tissue are depolarized using an equimolar extracellular solution with different concentrations of the depolarizing ion K+. The motivation is to explain T2 changes that have previously been observed by the authors in an in vivo model with neural stimulation (DIANA) and to try to provide a mechanism to explain those changes.
Strengths:
The authors argue that the use of various concentrations of KCl in the extracellular fluid depolarizes or hyperpolarizes the cell pellets used and that this change in membrane potential is the driving force for the T2 (and T1, supplementary material) changes observed. In particular, they report an increase in T2 with increasing KCl concentration in the extracellular fluid (ECF) of pellets of SH-SY5Y cells. To offset the increasing osmolarity of the ECF due to the increase in KCl, the NaCl molarity of the ECF is proportionally reduced. The authors measure the intracellular voltage using patch clamp recordings, which is a gold standard. With 80 mM of KCl in the ECF, a change in T2 of the cell pellets of ~10 ms is observed, with the intracellular potential recorded as about -6 mV. A very large T1 increase of ~90 ms is reported under the same conditions. The PSR (ratio of hydrogen protons on macromolecules to free water) decreases by about 10% at this 80 mM KCl concentration. Similar results are seen in a Jurkat cell line, and similar but far smaller changes are observed in vivo, for a variety of reasons discussed. As a final control, T1 and T2 values are measured in the various equimolar KCl solutions. As expected, no significant changes in T1 and T2 of the ECF were observed for these concentrations.
Weaknesses:
While the concepts presented are interesting, and the actual experimental methods seem to be nicely executed, the conclusions are not supported by the data for a number of reasons. This is not to say that the data isn't consistent with the conclusions, but there are other controls not included that would be necessary to draw the conclusion that it is membrane potential that is driving these T1 and T2 changes. Unfortunately for these authors, similar experiments conducted in 2008 (Stroman et al. Magn. Reson. in Med. 59:700-706) found similar results (increased T2 with KCl) but with a different mechanism, for which they provide definitive proof. This study was not referenced in the current work.
It is well established that cells swell/shrink upon depolarization/hyperpolarization. Cell swelling is accompanied by increased light transmittance in vivo, and this should be true in the pellet system as well. In a beautiful series of experiments, Stroman et al. (2008) showed in perfused brain slices that the cells swell upon equimolar KCl depolarization and the light transmittance increases. The time course of these changes is quite slow, of the order of many minutes, both for the T2-weighted MRI signal and for the light transmittance. Stroman et al. also show that hypoosmotic changes produce the exact same time course as the KCl depolarization changes (and vice versa for hyperosmotic changes, which cause cell shrinkage). Their conclusion, therefore, was that cell swelling (not membrane potential) was the cause of the T2-weighted changes observed, and that these were relatively slow (on the scale of many minutes).
What are the implications for the current study? Well, for one, the authors cannot exclude cell swelling as the mechanism for T2 changes, as they have not measured it. It is, however, well established that cell swelling occurs during depolarization, so this is not in question. Water in the pelletized cells is in slow/intermediate exchange with the ECF, and the solutions for the two-compartment relaxation model for this are well established (see Menon and Allen, Magn. Reson. in Med. 20:214-227 (1991)). The T2 relaxation times should be multiexponential (see point (3) further below). The current work cannot exclude cell swelling as the mechanism for T2 changes (it is mentioned in the paper, but not dealt with). Water entering cells dilutes the protein structures, changes the rotational correlation times of the proteins in the cell, and is known to increase T2. The PSR confirms that this is indeed happening, so the data in this work are completely consistent with the Stroman work and completely consistent with cell swelling associated with depolarization. The authors should have performed light scattering studies to demonstrate the presence or absence of cell swelling. Measuring intracellular potential is not enough to clarify the mechanism.
So why does it matter whether the mechanism is cell swelling or membrane potential? The reason is response time. Cell swelling due to depolarization is a slow process, slower than hemodynamic responses that characterize BOLD. In fact, cell swelling under normal homeostatic conditions in vivo is virtually non-existent. Only sustained depolarization events typically associated with non-naturalistic stimuli or brain dysfunction produce cell swelling. Membrane potential changes associated with neural activity, on the other hand, are very fast. In this manuscript, the authors have convincingly shown a signal change that is virtually the same as what was seen in the Stroman publication, but they have not shown that there is a response that can be detected with anything approaching the timescale of an action potential. So one cannot definitely say that the changes observed are due to membrane potential. One can only say they are consistent with cell swelling, regardless of what causes the cell swelling.
For this mechanism to be relevant to explaining DIANA, one needs to show that the cell swelling changes occur within a millisecond, which has never been reported. If one knows the populations of ECF and pellet, the T2s of the ECF and pellet and the volume change of the cells in the pellet, one can model any expected T2 changes due to neuronal activity. I think one would find that these are minuscule within the context of an action potential, or even bulk action potential.
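The reviewer's suggested back-of-the-envelope calculation can be sketched as follows. All numbers (compartment fractions, T2 values, echo times, the size of the swelling effect) are purely hypothetical, chosen only to illustrate the two-compartment, slow-exchange model the reviewer references:

```python
import numpy as np

def biexp_decay(te, f_cell, t2_cell, t2_ecf):
    """Signal from two compartments in slow exchange: a population-
    weighted sum of monoexponential decays (cf. Menon & Allen, 1991)."""
    return f_cell * np.exp(-te / t2_cell) + (1 - f_cell) * np.exp(-te / t2_ecf)

te = np.linspace(5, 100, 20)                 # echo times, ms (assumed)
baseline = biexp_decay(te, f_cell=0.70, t2_cell=60.0, t2_ecf=150.0)

# Depolarization-induced swelling: the cell fraction grows slightly
# and dilution lengthens intracellular T2 (hypothetical numbers).
swollen = biexp_decay(te, f_cell=0.72, t2_cell=63.0, t2_ecf=150.0)

def apparent_t2(sig):
    """Apparent monoexponential T2 from a log-linear fit, as a
    single-compartment analysis would report."""
    slope = np.polyfit(te, np.log(sig), 1)[0]
    return -1.0 / slope

# Modest swelling-driven T2 shift of the kind debated in the review.
dT2 = apparent_t2(swollen) - apparent_t2(baseline)
```

With such a model in hand, one can ask how large the compartment changes would have to be, and on what timescale, to account for any observed T2 shift.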
There are a few smaller issues that should be addressed.<br /> (1) Why were complicated imaging sequences used to measure T1 and T2? On a Bruker system it should be possible to do very simple acquisitions with hard pulses (which will not need dictionaries and such to get quantitative numbers). Of course, this can only be done sample by sample and would take longer, but it avoids a lot of complication in correcting the RF pulses used for imaging, which leads me to the second point.<br /> (2) Figure S1 (H) is unlike any exponential T2 decay I have seen in almost 40 years of making T2 measurements. The strange plateau at the beginning and the bump around TE = 25 ms are odd. These could just be noise, but the fitted curve exactly reproduces these features. A monoexponential T2 decay cannot, by definition, produce a fit shaped like this.<br /> (3) As noted earlier, layered samples produce biexponential T2 decays and monoexponential T1 decays. I don't quite see how this was accounted for in the fitting of the data from the pellet preparations. I realize that these are spatially resolved measurements, but the imaging slice shown seems to be at the boundary of the pellet and the extracellular media, and there definitely should be a biexponential water proton decay curve. Only 5 echo times were used, so this is part of the problem, but it does mean that the T2 reported is a population-fraction-weighted average of the T2 in the two compartments.<br /> (4) Delta T1 and T2 values are presented for the pellets in wells, but no absolute values are presented for either the pellets or the KCl solutions that I could find.
-
Reviewer #2 (Public review):
Summary:
Min et al. attempt to demonstrate that magnetic resonance imaging (MRI) can detect changes in neuronal membrane potentials. They approach this goal by studying how MRI contrast and cellular potentials together respond to treatment of cultured cells with ionic solutions. The authors specifically study two MRI-based measurements: (A) the transverse (T2) relaxation rate, which reflects microscopic magnetic fields caused by solutes and biological structures; and (B) the fraction or "pool size ratio" (PSR) of water molecules estimated to be bound to macromolecules, using an MRI technique called magnetization transfer (MT) imaging. They see that depolarizing K+ and Ba2+ concentrations lead to T2 increases and PSR decreases that vary approximately linearly with voltage in a neuroblastoma cell line and that change similarly in a second cell type. They also show that depolarizing potassium concentrations evoke reversible T2 increases in rat brains and that these changes are reversed when potassium is renormalized. Min et al. argue that this implies that membrane potential changes cause the MRI effects, providing a potential basis for detecting cellular voltages by noninvasive imaging. If this were true, it would help validate a recent paper published by some of the authors (Toi et al., Science 378:160-8, 2022), in which they claimed to be able to detect millisecond-scale neuronal responses by MRI.
Strengths:
The discovery of a mechanism for relating cellular membrane potential to MRI contrast could yield an important means for studying functions of the nervous system. Achieving this has been a longstanding goal in the MRI community, but previous strategies have proven too weak or insufficiently reproducible for neuroscientific or clinical applications. The current paper suggests remarkably that one of the simplest and most widely used MRI contrast mechanisms-T2 weighted imaging-may indicate membrane potentials if measured in the absence of the hemodynamic signals that most functional MRI (fMRI) experiments rely on. The authors make their case using a diverse set of quantitative tests that include controls for ion and cell type-specificity of their in vitro results and reversibility of MRI changes observed in vivo.
Weaknesses:
The major weakness of the paper is that it uses correlational data to conclude that there is a causal relationship between membrane potential and MRI contrast. Alternative explanations that could explain the authors' findings are not adequately considered. Most notably, depolarizing ionic solutions can also induce changes in cellular volume and tissue structure that in turn alter MRI contrast properties similarly to the results shown here. For example, a study by Stroman et al. (Magn Reson Med 59:700-6, 2008) reported reversible potassium-dependent T2 increases in neural tissue that correlate closely with light scattering-based indications of cell swelling. Phi Van et al. (Sci Adv 10:eadl2034, 2024) showed that potassium addition to one of the cell lines used here likewise leads to cell size increases and T2 increases. Such effects could in principle account for Min et al.'s results, and indeed it is difficult to see how they would not contribute, but they occur on a time scale far too slow to yield useful indications of membrane potential. The authors' observation that PSR correlates negatively with T2 in their experiments is also consistent with this explanation, given the inverse relationship usually observed (and mechanistically expected) between these two parameters. If the authors could show a tight correspondence between millisecond-scale membrane potential changes and MRI contrast, their argument for a causal connection or a useful correlational relationship between membrane potential and image contrast would be much stronger. As it is, however, the article does not succeed in demonstrating that membrane potential changes can be detected by MRI.
-
Author response:
Public Reviews:
Reviewer #1 (Public review):
Summary:
This paper examines changes in relaxation time (T1 and T2) and magnetization transfer parameters that occur in a model system and in vivo when cells or tissue are depolarized using an equimolar extracellular solution with different concentrations of the depolarizing ion K+. The motivation is to explain T2 changes that have previously been observed by the authors in an in vivo model with neural stimulation (DIANA) and to try to provide a mechanism to explain those changes.
Strengths:
The authors argue that the use of various concentrations of KCl in the extracellular fluid depolarizes or hyperpolarizes the cell pellets used and that this change in membrane potential is the driving force for the T2 (and T1, supplementary material) changes observed. In particular, they report an increase in T2 with increasing KCl concentration in the extracellular fluid (ECF) of pellets of SH-SY5Y cells. To offset the increasing osmolarity of the ECF due to the increase in KCl, the NaCl molarity of the ECF is proportionally reduced. The authors measure the intracellular voltage using patch clamp recordings, which is a gold standard. With 80 mM of KCl in the ECF, a change in T2 of the cell pellets of ~10 ms is observed, with the intracellular potential recorded as about -6 mV. A very large T1 increase of ~90 ms is reported under the same conditions. The PSR (ratio of hydrogen protons on macromolecules to free water) decreases by about 10% at this 80 mM KCl concentration. Similar results are seen in a Jurkat cell line, and similar but far smaller changes are observed in vivo, for a variety of reasons discussed. As a final control, T1 and T2 values are measured in the various equimolar KCl solutions. As expected, no significant changes in T1 and T2 of the ECF were observed for these concentrations.
Weaknesses:
While the concepts presented are interesting, and the actual experimental methods seem to be nicely executed, the conclusions are not supported by the data for a number of reasons. This is not to say that the data isn't consistent with the conclusions, but there are other controls not included that would be necessary to draw the conclusion that it is membrane potential that is driving these T1 and T2 changes. Unfortunately for these authors, similar experiments conducted in 2008 (Stroman et al. Magn. Reson. in Med. 59:700-706) found similar results (increased T2 with KCl) but with a different mechanism, for which they provide definitive proof. This study was not referenced in the current work.
It is well established that cells swell/shrink upon depolarization/hyperpolarization. Cell swelling is accompanied by increased light transmittance in vivo, and this should be true in the pellet system as well. In a beautiful series of experiments, Stroman et al. (2008) showed in perfused brain slices that the cells swell upon equimolar KCl depolarization and the light transmittance increases. The time course of these changes is quite slow, of the order of many minutes, both for the T2-weighted MRI signal and for the light transmittance. Stroman et al. also show that hypoosmotic changes produce the exact same time course as the KCl depolarization changes (and vice versa for hyperosmotic changes, which cause cell shrinkage). Their conclusion, therefore, was that cell swelling (not membrane potential) was the cause of the T2-weighted changes observed, and that these were relatively slow (on the scale of many minutes).
What are the implications for the current study? Well, for one, the authors cannot exclude cell swelling as the mechanism for T2 changes, as they have not measured it. It is, however, well established that cell swelling occurs during depolarization, so this is not in question. Water in the pelletized cells is in slow/intermediate exchange with the ECF, and the solutions for the two-compartment relaxation model for this are well established (see Menon and Allen, Magn. Reson. in Med. 20:214-227 (1991)). The T2 relaxation times should be multiexponential (see point (3) further below). The current work cannot exclude cell swelling as the mechanism for T2 changes (it is mentioned in the paper, but not dealt with). Water entering cells dilutes the protein structures, changes the rotational correlation times of the proteins in the cell, and is known to increase T2. The PSR confirms that this is indeed happening, so the data in this work are completely consistent with the Stroman work and completely consistent with cell swelling associated with depolarization. The authors should have performed light scattering studies to demonstrate the presence or absence of cell swelling. Measuring intracellular potential is not enough to clarify the mechanism.
We appreciate the reviewer’s comments. We agree that changes in cell volume due to depolarization and hyperpolarization significantly contribute to the observed changes in T2, PSR, and T1, especially in pelletized cells. For this reason, we already noted in the Discussion section of the original manuscript that cell volume changes influence the observed MR parameter changes, though this study did not present the magnitude of the cell volume changes. In this regard, we thank the reviewer for introducing the work by Stroman et al. (Magn Reson Med 59:700-706, 2008). When discussing the contribution of the cell volume changes to the observed MR parameter changes, we will additionally discuss the work of Stroman et al. in the revised manuscript.
In addition, we acknowledge that the title and main conclusion of the original manuscript may be misleading, as we did not separately consider the effect of cell volume changes on MR parameters. To more accurately reflect the scope and results of this study and to address Reviewer 2's suggestion, we will adjust the title to "Responses to membrane potential-modulating ionic solutions measured by magnetic resonance imaging of cultured cells and in vivo rat cortex" and will also revise the relevant phrases in the main text.
Finally, when [K+]-induced membrane potential changes are involved, factors other than cell volume changes also appear to influence T2 changes. Our ongoing study shows that there are differences in T2 changes (for the same volume changes) between two different situations: pure osmotic volume changes vs. [K+]-induced volume changes (e.g., hypoosmotic vs. depolarization). Furthermore, this ongoing study suggests that mechanisms such as changes in free (primarily intracellular) and bound water within a voxel play an important role in generating this T2 difference. Our group is preparing a manuscript for this follow-up study and will report on it shortly.
So why does it matter whether the mechanism is cell swelling or membrane potential? The reason is response time. Cell swelling due to depolarization is a slow process, slower than the hemodynamic responses that characterize BOLD. In fact, cell swelling under normal homeostatic conditions in vivo is virtually non-existent. Only sustained depolarization events, typically associated with non-naturalistic stimuli or brain dysfunction, produce cell swelling. Membrane potential changes associated with neural activity, on the other hand, are very fast. In this manuscript, the authors have convincingly shown a signal change that is virtually the same as what was seen in the Stroman publication, but they have not shown a response that can be detected on anything approaching the timescale of an action potential. So one cannot definitively say that the changes observed are due to membrane potential. One can only say they are consistent with cell swelling, regardless of what causes the cell swelling.
For this mechanism to be relevant to explaining DIANA, one needs to show that the cell swelling changes occur within a millisecond, which has never been reported. If one knows the populations of ECF and pellet, the T2s of the ECF and pellet and the volume change of the cells in the pellet, one can model any expected T2 changes due to neuronal activity. I think one would find that these are minuscule within the context of an action potential, or even bulk action potential.
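The reviewer's back-of-the-envelope argument can be made concrete. Assuming fast exchange within a voxel, the observed relaxation rate is approximately the population-weighted sum of compartment rates; the sketch below uses purely illustrative values (and ignores that dilution may also lengthen the intracellular T2 itself, which affects the sign of the change) to bound the apparent T2 shift from an action-potential-scale, roughly nanometre, swelling of a ~10 µm cell:

```python
import numpy as np

# Illustrative two-pool fast-exchange estimate (assumed numbers,
# not values from the manuscript).
T2_ic, T2_ec = 60e-3, 500e-3   # seconds
p_ic0 = 0.80                   # baseline intracellular water fraction

def t2_obs(p_ic):
    """Population-weighted relaxation rate -> observed T2 (fast exchange)."""
    R2 = p_ic / T2_ic + (1 - p_ic) / T2_ec
    return 1.0 / R2

# A ~1 nm radius change on a ~10 um cell is a ~3e-4 fractional volume change,
# so roughly that fraction of intracellular water is added.
dp = p_ic0 * 3e-4
dT2 = t2_obs(p_ic0 + dp) - t2_obs(p_ic0)
print(f"baseline T2 = {t2_obs(p_ic0)*1e3:.2f} ms, |dT2| = {abs(dT2)*1e6:.1f} us")
```

With these numbers the apparent T2 shift is tens of microseconds against a baseline of tens of milliseconds, i.e. a fractional change well below 0.1%, consistent with the reviewer's "minuscule" characterization.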
In the context of cell swelling occurring at rapid response times, if we define cell swelling simply as an "increase in cell volume," there are several studies reporting transient structural (or volumetric) changes (e.g., ~nm diameter change over ~ms duration) in neurons during action potential propagation (Akkin et al., Biophys J 93:1347-1353, 2007; Kim et al., Biophys J 92:3122-3129, 2007; Lee et al., IEEE Trans Biomed Eng 58:3000-3003, 2011; Wnek et al., J Polym Sci Part B: Polym Phys 54:7-14, 2015; Yang et al., ACS Nano 12:4186-4193, 2018). These studies show a good correlation between membrane potential changes and cell volume changes (even if very small) at the cellular level within milliseconds.
As mentioned in Response 1 above, this study does not address rapid dynamic membrane potential changes on the millisecond scale, which we explicitly discussed as one of the limitations in the Discussion section of the original manuscript. For this reason, we do not claim in this study that we provide the reader with definitive answers about the mechanisms involved in DIANA. Rather, as a first step toward addressing the mechanism of DIANA, this study confirms that there is a good correlation between changes in membrane potential and measurable MR parameters (e.g., T2 and PSR) when using ionic solutions that modulate membrane potential. Identifying T2 changes that occur during millisecond-scale membrane potential changes due to rapid neural activation will be further addressed in future studies.
There are a few smaller issues that should be addressed.
(1) Why were complicated imaging sequences used to measure T1 and T2? On a Bruker system it should be possible to do very simple acquisitions with hard pulses (which will not need dictionaries and such to get quantitative numbers). Of course, this can only be done sample by sample and would take longer, but it avoids a lot of complication to correct the RF pulses used for imaging, which leads me to the 2nd point.
We appreciate the reviewer’s suggestion regarding imaging sequences. We would like to clarify that dictionaries were used for fitting in vivo T2 decay data, not in vitro data. Sample-by-sample nonlocalized acquisition with hard pulses may be applicable for in vitro measurements. However, for in vivo measurements, a slice-selective multi-echo spin-echo sequence was necessary to acquire T2 maps within a reasonable scan time. Our choice of imaging sequence was guided by the need to spatially resolve MR signals from specific regions of interest while balancing scan time constraints.
(2) Figure S1 (H) is unlike any exponential T2 decay I have seen in almost 40 years of making T2 measurements. The strange plateau at the beginning and the bump around TE = 25 ms are odd. These could just be noise, but the fitted curve exactly reproduces these features. A monoexponential T2 decay cannot, by definition, produce a fit shaped like this.
The T2 decay curves in Figure S1(H) indeed display features that deviate from a simple monoexponential decay. In our in vivo experiments, we used a multi-echo spin-echo sequence with slice-selective excitation and refocusing pulses. In such sequences, the echo train is influenced by stimulated echoes and imperfect slice profiles. This phenomenon is inherent to the pulse sequence rather than being artifacts or fitting errors (Hennig, Concepts Magn Reson 3:125-143, 1991; Lebel and Wilman, Magn Reson Med 64:1005-1014, 2010; McPhee and Wilman, Magn Reson Med 77:2057-2065, 2017). Therefore, we fitted the T2 decay curve using the technique developed by McPhee and Wilman (2017).
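The stimulated-echo contamination invoked here can be reproduced with a plain Bloch/isochromat simulation (this is a generic sketch of the physics with illustrative parameters, not the McPhee and Wilman fitting code): when the effective refocusing angle drops below 180°, as it does across an imperfect slice profile, the early echoes deviate from exp(-TE/T2) and produce exactly the kind of initial plateau and bump seen in a multi-echo train.

```python
import numpy as np

def cpmg_echoes(beta_deg, n_echo=20, esp=10e-3, T2=80e-3, T1=1.5, n_iso=2000):
    """Echo amplitudes of a CPMG train with refocusing angle beta (about y),
    simulated with isochromats uniformly dephased by crusher gradients."""
    theta = np.linspace(-np.pi, np.pi, n_iso, endpoint=False)  # per half-ESP
    M = np.tile([0.0, 1.0, 0.0], (n_iso, 1))  # after ideal 90(x): Mz -> My

    b = np.deg2rad(beta_deg)
    Ry = np.array([[np.cos(b), 0.0, np.sin(b)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(b), 0.0, np.cos(b)]])  # rotation about y

    e2, e1 = np.exp(-0.5 * esp / T2), np.exp(-0.5 * esp / T1)

    def evolve(M):
        # free precession (gradient dephasing) + relaxation over half an ESP
        Mx = M[:, 0] * np.cos(theta) - M[:, 1] * np.sin(theta)
        My = M[:, 0] * np.sin(theta) + M[:, 1] * np.cos(theta)
        Mz = 1 + (M[:, 2] - 1) * e1
        return np.column_stack([Mx * e2, My * e2, Mz])

    echoes = []
    for _ in range(n_echo):
        M = evolve(M) @ Ry.T   # dephase, refocus
        M = evolve(M)          # rephase to the echo centre
        echoes.append(np.abs(np.mean(M[:, 0] + 1j * M[:, 1])))
    return np.array(echoes)

ideal = cpmg_echoes(180.0)   # pure exp(-TE/T2) decay
low = cpmg_echoes(120.0)     # stimulated-echo pathways distort early echoes
```

With a 180° refocusing pulse the train is purely monoexponential; at 120° the first echo is suppressed and the second is boosted by the stimulated-echo pathway, so the train is not fittable by a single exponential, which motivates pathway-aware fitting such as McPhee and Wilman (2017).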
(3) As noted earlier, layered samples produce biexponential T2 decays and monoexponential T1 decays. I don't quite see how this was accounted for in the fitting of the data from the pellet preparations. I realize that these are spatially resolved measurements, but the imaging slice shown seems to be at the boundary of the pellet and the extracellular media and there definitely should be a biexponential water proton decay curve. Only 5 echo times were used, so this is part of the problem, but it does mean that the T2 reported is a population fraction weighted average of the T2 in the two compartments.
We understand the reviewer’s concern regarding potential biexponential decay due to the presence of different compartments. In our experiments, we carefully positioned the imaging slice sufficiently remote from the pellet-media interface. This approach ensures that the signal predominantly arises from the cells (and interstitial fluid), excluding the influence of extracellular media above the cell pellet. We will clearly describe the imaging slice in the revised manuscript. As mentioned in our Methods section, for in vitro experiments, we repeated a single-echo spin-echo sequence with 50 different echo times. While Figure 1C illustrates data from five echo times for visual clarity, the full dataset with all 50 echo times was used for fitting. We will clarify this point in the revised manuscript to avoid any misunderstanding.
(4) Delta T1 and T2 values are presented for the pellets in wells, but no absolute values are presented for either the pellets or the KCl solutions that I could find.
As requested by the reviewer, we will include the absolute values in the revised manuscript.
Reviewer #2 (Public review):
Summary:
Min et al. attempt to demonstrate that magnetic resonance imaging (MRI) can detect changes in neuronal membrane potentials. They approach this goal by studying how MRI contrast and cellular potentials together respond to treatment of cultured cells with ionic solutions. The authors specifically study two MRI-based measurements: (A) the transverse (T2) relaxation rate, which reflects microscopic magnetic fields caused by solutes and biological structures; and (B) the fraction or "pool size ratio" (PSR) of water molecules estimated to be bound to macromolecules, using an MRI technique called magnetization transfer (MT) imaging. They see that depolarizing K+ and Ba2+ concentrations lead to T2 increases and PSR decreases that vary approximately linearly with voltage in a neuroblastoma cell line and that change similarly in a second cell type. They also show that depolarizing potassium concentrations evoke reversible T2 increases in rat brains and that these changes are reversed when potassium is renormalized. Min et al. argue that this implies that membrane potential changes cause the MRI effects, providing a potential basis for detecting cellular voltages by noninvasive imaging. If this were true, it would help validate a recent paper published by some of the authors (Toi et al., Science 378:160-8, 2022), in which they claimed to be able to detect millisecond-scale neuronal responses by MRI.
Strengths:
The discovery of a mechanism relating cellular membrane potential to MRI contrast could yield an important means for studying functions of the nervous system. Achieving this has been a longstanding goal in the MRI community, but previous strategies have proven too weak or insufficiently reproducible for neuroscientific or clinical applications. The current paper suggests, remarkably, that one of the simplest and most widely used MRI contrast mechanisms, T2-weighted imaging, may indicate membrane potentials if measured in the absence of the hemodynamic signals that most functional MRI (fMRI) experiments rely on. The authors make their case using a diverse set of quantitative tests that include controls for ion and cell-type specificity of their in vitro results and reversibility of MRI changes observed in vivo.
Weaknesses:
The major weakness of the paper is that it uses correlational data to conclude that there is a causational relationship between membrane potential and MRI contrast. Alternative explanations that could explain the authors' findings are not adequately considered. Most notably, depolarizing ionic solutions can also induce changes in cellular volume and tissue structure that in turn alter MRI contrast properties similarly to the results shown here. For example, a study by Stroman et al. (Magn Reson Med 59:700-6, 2008) reported reversible potassium-dependent T2 increases in neural tissue that correlate closely with light scattering-based indications of cell swelling. Phi Van et al. (Sci Adv 10:eadl2034, 2024) showed that potassium addition to one of the cell lines used here likewise leads to cell size increases and T2 increases. Such effects could in principle account for Min et al.'s results, and indeed it is difficult to see how they would not contribute, but they occur on a time scale far too slow to yield useful indications of membrane potential. The authors' observation that PSR correlates negatively with T2 in their experiments is also consistent with this explanation, given the inverse relationship usually observed (and mechanistically expected) between these two parameters. If the authors could show a tight correspondence between millisecond-scale membrane potential changes and MRI contrast, their argument for a causal connection or a useful correlational relationship between membrane potential and image contrast would be much stronger. As it is, however, the article does not succeed in demonstrating that membrane potential changes can be detected by MRI.
We appreciate the reviewer’s comments. We agree that changes in cell volume due to depolarization and hyperpolarization significantly contribute to the observed MR parameter changes. For this reason, we have already noted in the Discussion section of the original manuscript that cell volume changes influence the observed MR parameter changes. In this regard, we thank the reviewer for introducing the work by Stroman et al. (Magn Reson Med 59:700-706, 2008) and Phi Van et al. (Sci Adv 10:eadl2034, 2024). When discussing the contribution of the cell volume changes to the observed MR parameter changes, we will additionally discuss the work of both Stroman et al. and Phi Van et al. in the revised manuscript.
In addition, this study does not address rapid dynamic membrane potential changes on the millisecond scale, which we explicitly discussed as one of the limitations of this study in the Discussion section of the original manuscript. For this reason, we do not claim in this study that we provide the reader with definitive answers about the mechanisms involved in DIANA. Rather, as a first step toward addressing the mechanism of DIANA, this study confirms that there is a good correlation between changes in membrane potential and measurable MR parameters (although on a slow time scale) when using ionic solutions that modulate membrane potential. Identifying T2 changes that occur during millisecond-scale membrane potential changes due to rapid neural activation will be further addressed in future studies.
Together, we acknowledge that the title and main conclusion of the original manuscript may be misleading. To more accurately reflect the scope and results of this study and to consider the reviewer’s suggestion, we will adjust the title to “Responses to membrane potential-modulating ionic solutions measured by magnetic resonance imaging of cultured cells and in vivo rat cortex” and will also revise the relevant phrases in the main text.
-
-
tw-preview.dev.amust.local
-
beta
we decided yesterday to just use the build number and no "beta" anywhere. Please remove "beta" everywhere and use "current version (0.1.17)" or something like that
-
-
www.youtube.com
-
for - webcast - youtube channel - Adventures in Awareness - Bernard Kastrup - from - essay - The end of scarcity? From Polycrisis to Planetary Phase Shift - Nafeez Ahmed
from - essay - The end of scarcity? From Polycrisis to Planetary Phase Shift - Nafeez Ahmed - https://hyp.is/XajXQJKfEe-CsteXYeBHhw/ageoftransformation.org/the-end-of-scarcity-from-polycrisis-to-planetary-phase-shift/
-
-
patterns.sociocracy30.org
-
(sense-making)
The closing bracket is included in the bold, which is inconsistent with the following paragraphs. The ** in markdown should be moved left of the bracket. There is a PR for this (and a couple of other typo fixes) already here: https://github.com/S3-working-group/s3-practical-guide/pull/31
-
-
www.biorxiv.org
-
eLife Assessment
The authors have provided a valuable addition to the literature on large-scale electrophysiological experiments across many labs. The evidence was incomplete: while some comparisons with analyses outside the manuscript's own approaches were provided, a more complete manuscript would have compared against alternative standardized analyses. In particular, alternative spike sorting metrics and a GLM-based analysis of the data would have made the interpretation of the results clearer.
-
Reviewer #1 (Public review):
Summary:
The authors explore a large-scale electrophysiological dataset collected in 10 labs while mice performed the same behavioral task, and aim to establish guidelines to aid reproducibility of results collected across labs. They introduce a series of metrics for quality control of electrophysiological data and show that histological verification of recording sites is important for interpreting findings across labs and should be reported in addition to planned coordinates. Furthermore, the authors suggest that although basic electrophysiology features were comparable across labs, task modulation of single neurons can be variable, particularly for some brain regions. The authors then use a multi-task neural network model to examine how neural dynamics relate to multiple interacting task- and experimenter-related variables, and find that lab-specific differences contribute little to the variance observed. Therefore, analysis approaches that account for correlated behavioral variables are important for establishing reproducible results when working with electrophysiological data from animals performing decision-making tasks. This paper is very well-motivated and needed. However, what is missing is a direct comparison of task modulation of neurons across labs using standard analysis practices in the field, such as generalized linear models (GLMs). This could clarify how much behavioral variance contributes to the neural variance across labs, and could more accurately estimate the scale of the reproducibility issues in behavioral systems neuroscience, where conclusions often depend on these standard analysis methods.
Strength:
(1) This is a well-motivated paper that addresses the critical question of reproducibility in behavioural systems neuroscience. The authors should be commended for their efforts.
(2) A key strength of this study comes from the large dataset collected in collaboration across ten labs. This allows the authors to assess lab-to-lab reproducibility of electrophysiological data in mice performing the same decision-making task.
(3) The authors' attempt to streamline preprocessing pipelines and quality metrics is highly relevant in a field that is collecting increasingly large-scale datasets where automation of these steps is increasingly needed.
(4) Another major strength is the release of code repositories to streamline preprocessing pipelines across labs collecting electrophysiological data.
(5) Finally, the application of MTNN for characterizing functional modulation of neurons, although not yet widely used in systems neuroscience, seems to have several advantages over traditional methods.
Weaknesses:
(1) In several places the assumptions about standard practices in the field, including preprocessing and analyses of electrophysiology data, seem to be inaccurately presented:
a) The estimation of how much the histologically verified recording location differs from the intended recording location is valuable information. Importantly, this paper provides citable evidence for why that is important. However, histological verification of recording sites is standard practice in the field, even if not all studies report them. Although we appreciate the authors' effort to further motivate this practice, the current description in the paper may give readers outside the field a false impression of the level of rigor in the field.
b) When identifying which and how neurons encode particular aspects of stimuli or behaviour in behaving animals (when variables are correlated by the nature of the animal's behaviour), it has become the standard in behavioral systems neuroscience to use GLMs - indeed many labs participating in the IBL also have a long history of doing this (e.g., Steinmetz et al., 2019; Musall et al., 2023; Orsolic et al., 2021; Park et al., 2014). The reproducibility of results when using GLMs is never explicitly shown, but the supplementary figures to Figure 7 indicate that results may be reproducible across labs when using GLMs (as they have similar prediction performance to the MTNN). This should be introduced as the first analysis method used, in a new dedicated figure (i.e., following Figure 3 and showing results of analyses similar to what was shown for the MTNN in Figure 7). This will help put into perspective the degree of reproducibility issues the field is facing when analyzing data with appropriate and common methods. The authors can then go on to show how simpler approaches (currently in Figures 4 and 5) - not accounting for a lot of uncontrolled variability when working with behaving animals - may cause reproducibility issues.
When the authors introduce a neural network approach (i.e. MTNN) as an alternative to the analyses in Figures 4 and 5, they suggest that 'generalized linear models (GLMs) are likely too inflexible to capture the nonlinear contributions that many of these variables, including lab identity and spatial positions of neurons, might make to neural activity'. This is despite the comparison between MTNN and GLM prediction performance (Supplement 1 to Figure 7) showing that the MTNN is only slightly better at predicting neural activity than standard GLMs. The introduction of new models to capture neural variability is always welcome, but the conclusion that standard analyses in the field are not reproducible can be unfair unless directly compared to GLMs.
In essence, it is really useful to demonstrate how different analysis methods and preprocessing approaches affect reproducibility. But the authors should highlight what is actually standard in the field, and then provide suggestions to improve from there.
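For readers outside the field, the GLM analysis the reviewers advocate amounts to fitting spike counts with a log-linear model of (possibly correlated) task regressors. A minimal Poisson-GLM sketch on synthetic data is below (numpy only, fitted by Newton's method; this is not the IBL analysis code, and the regressor names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design matrix: intercept + three task regressors (think stimulus,
# choice, movement speed); "choice" is deliberately correlated with "stimulus".
n = 5000
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
X[:, 2] += 0.5 * X[:, 1]
beta_true = np.array([0.5, 0.8, -0.4, 0.0])
y = rng.poisson(np.exp(X @ beta_true))  # spike counts per time bin

# Poisson GLM with log link, fitted by Newton-Raphson (IRLS).
beta = np.zeros(X.shape[1])
for _ in range(50):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)                  # score
    hess = X.T @ (X * mu[:, None])         # Fisher information
    step = np.linalg.solve(hess, grad)
    beta += step
    if np.abs(step).max() < 1e-10:
        break

print("true:", beta_true, "fitted:", np.round(beta, 2))
```

Because the likelihood accounts for all regressors jointly, the fit attributes variance to the correct variable even though the regressors are correlated (the coefficient on the null regressor stays near zero), which is precisely why GLMs are the field's default for disentangling correlated behavioral variables.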
(2) The authors attempt to establish a series of new quality control metrics for the inclusion of recordings and single units. This is much needed, with the goal to standardize unit inclusion across labs that bypasses the manual process while keeping the nuances from manual curation. However, the authors should benchmark these metrics to other automated metrics and to manual curation, which is still a gold standard in the field. The authors did this for whole-session assessment but not for individual clusters. If the authors can find metrics that capture agreed-upon manual cluster labels, without the need for manual intervention, that would be extremely helpful for the field.
(3) With the goal of improving reproducibility and providing new guidelines for standard practice in data analysis, the authors should report the n of cells, sessions, and animals used in plots and analyses throughout the paper, both to aid understanding of the variability in the plots and to set a good example.
Other general comments:
(1) In the discussion (line 383) the authors conclude: 'This is reassuring, but points to the need for large sample sizes of neurons to overcome the inherent variability of single neuron recording'. - Based on what is presented in this paper we would rather say that their results suggest that appropriate analytical choices are needed to ensure reproducibility, rather than large datasets - and they need to show whether using standard GLMs actually allows for reproducible results.
(2) A general assumption in the across-lab reproducibility questions in the paper relies on intralab variability vs across-lab variability. An alternative measure that may better reflect experimental noise is across-researcher variability, as well as the amount of experimenter experience (if the latter is a factor, it could suggest researchers may need more training before collecting data for publication). The authors state in the discussion that this is not possible. But maybe certain measures can be used to assess this (e.g. years of conducting surgeries/ephys recordings etc)?
(3) Figure 3b and c: Are these plots before or after the probe depth has been adjusted based on physiological features such as the LFP power? In other words, is the IBL electrophysiological alignment toolbox used here and is the reliability of location before using physiological criteria or after? Beyond clarification, showing both before and after would help the readers to understand how much the additional alignment based on electrophysiological features adjusts probe location. It would also be informative if they sorted these penetrations by which penetrations were closest to the planned trajectory after histological verification.
(4) In Figures 4 and 6: If the authors use a 0.05 threshold (alpha) and a cell simply has to be significant on 1/6 tests to be considered task modulated, that means they have a family-wise false positive rate of ~26.5% (1 - 0.95^6 ≈ 0.265; the Bonferroni sum 0.05*6 = 0.3 is an upper bound). We ran a simple simulation looking for significant units (from a random null distribution) under these criteria, which shows that out of 100,000 units, 26,500 units would come out significant (false positive rate: 26.5%). That is very high (and unlikely to be accepted in most papers), and therefore it is not surprising that the fraction of task-modulated units across labs is highly variable. This high false positive rate may also have implications for the investigation of the spatial position of task-modulated units (as effects of spatial position may drown in falsely labelled 'task-modulated' cells).
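The calculation behind this point is easy to verify: under the null, the chance of a unit passing at least one of six independent tests at alpha = 0.05 is 1 - 0.95^6 ≈ 0.265, matching the reviewers' simulation (the simulation below is a generic reconstruction, not the reviewers' code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_tests, alpha = 100_000, 6, 0.05

# Under the null hypothesis, every p-value is Uniform(0, 1).
p = rng.uniform(size=(n_units, n_tests))

# A unit is called "task modulated" if ANY of its 6 tests is significant.
frac_significant = np.mean((p < alpha).any(axis=1))

print(f"analytic: {1 - (1 - alpha) ** n_tests:.4f}, "
      f"simulated: {frac_significant:.4f}")
```

Any correction that controls the family-wise rate (e.g. testing each of the 6 comparisons at alpha/6) would bring the expected null fraction back to ~5%.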
(5) The authors state from Figure 5b that the majority of cells could be well described by 2 PCs. The distribution of R2 across neurons is almost uniform, so the fraction of 'good' cells depends entirely on what R2 value one considers a 'good' description. Furthermore, movement onset is now well established to affect cells widely and in large fractions, so while this analysis may work for something with global influence, like movement, more sparsely encoded variables (as many are in the brain) may not be well approximated by this approach. The authors could expand this analysis to other epochs, such as activity around stimulus presentation, to better understand how this type of analysis reproduces across labs for features with a less global influence.
(6) Additionally, in Figure 5i: could the finding that one can only distinguish labs when taking cells from all regions, simply be a result of a different number of cells recorded in each region for each lab? It makes more sense to focus on the lab/area pairing as the authors also do, but not to make their main conclusion from it. If the authors wish to do the comparison across regions, they will need to correct for the number of cells recorded in each region for each lab. In general, it was a struggle to fully understand the purpose of Figure 5. While population analysis and dimensionality reduction are commonplace, this seems to be a very unusual use of it.
(7) In the discussion the authors state: "This approach, which exceeds what is done in many experimental labs". Indeed this approach is a more effective and streamlined way of doing it, but it is questionable whether it 'exceeds' what is done in many labs. Classically, scientists trace each probe manually with light microscopy and designate each area based on anatomical landmarks identified with Nissl or DAPI stains together with gross landmarks. When not automated with two-photon serial tomography and anatomically aligned to a standard atlas, this is a less effective process, but it is not clear that it is less precise, especially in studies before Neuropixels where active electrodes were located in a much smaller area. While more effective, transforming into a common atlas does make additional assumptions about warping the brain into the standard atlas, especially in cases where the brain has been damaged/lesioned. Readers can appreciate the effectiveness and streamlining provided by these new tools without the need to invalidate previous approaches.
(8) What about across-lab population-level representation of task variables, such as in the coding direction for stimulus or choice? Is the general decodability of task variables from the population comparable across labs?
-
Reviewer #2 (Public review):
Summary:
The authors sought to evaluate whether observations made in separate individual laboratories are reproducible when they use standardized procedures and quality control measures. This is a key question for the field. If ten systems neuroscience labs try very hard to do the exact same experiment and analyses, do they get the same core results? If the answer is no, this is very bad news for everyone else! Fortunately, they were able to reproduce most of their experimental findings across all labs. Although each lab attempted to target the same brain areas, variability in electrode targeting was a source of some differences between datasets.
Major Comments:
The paper had two principal goals: (1) to assess reproducibility between labs on a carefully coordinated experiment, and (2) to distill the knowledge learned into a set of standards that can be applied across the field. The manuscript made progress towards both of these goals but leaves room for improvement.
(1) The first goal of the study was to perform exactly the same experiment and analyses across 10 different labs and see if you got the same results. The rationale for doing this was to test how reproducible large-scale rodent systems neuroscience experiments really are. In this, the study did a great job showing that when a consortium of labs went to great lengths to do everything the same, even decoding algorithms could not clearly discern laboratory identity from the raw data. However, the amount of coordination between the labs was so great that these findings are hard to generalize to the situation where similar (or conflicting!) results are generated by two labs working independently.
Importantly, the study found that electrode placement (and thus likely also errors inherent to the electrode placement reconstruction pipeline) was a key source of variability between datasets. To remedy this, they implemented a very sophisticated electrode reconstruction pipeline (involving two-photon tomography and multiple blinded data validators) in just one lab, and all brains were sliced and reconstructed in this one location. This is a fantastic approach for ensuring similar results within the IBL collaboration, but it leaves unclear how much variance would have been observed if each lab had attempted to reconstruct their probe trajectories themselves using a mix of histology techniques, from conventional brain slicing to light sheet microscopy to MRI.
This approach also raises a few questions. The use of standard procedures, pipelines, etc. is a great goal, but most labs are trying to do something unique with their setup. Bigger picture, shouldn't highly "significant" biological findings akin to the discovery of place cells or grid cells, be so clear and robust that they can be identified with different recording modalities and analysis pipelines?
Related to this, how many labs outside of the IBL collaboration have implemented the IBL pipeline for their own purposes? In what aspects do these other labs find it challenging to reproduce the approaches presented in the paper? If labs were supposed to perform this same experiment, but without coordinating directly, how much more variance between labs would have been seen? Obviously investigating these topics is beyond the scope of this paper. The current manuscript is well-written and clear as is, and I think it is a valuable contribution to the field. However, some additional discussion of these issues would be helpful.
(2) The second goal of the study was to present a set of data curation standards (RIGOR) that could be applied widely across the field. This is a great idea, but its implementation needs to be improved if adoption outside of the IBL is to be expected. Here are three issues:
(a) The GitHub repo for this project (https://github.com/int-brain-lab/paper-reproducible-ephys/) is nicely documented if the reader's goal is to reproduce the figures in the manuscript. Consequently, the code for producing the RIGOR statistics seems mostly designed for re-computing statistics on the existing IBL-formatted datasets. There doesn't appear to be any clear documentation about how to run it on arbitrary outputs from a spike sorter (i.e. the inputs to Phy).
(b) Other sets of spike sorting metrics that are more easily computed for labs that are not using the IBL pipeline already exist (e.g. "quality_metrics" from the Allen Institute ecephys pipeline [https://github.com/AllenInstitute/ecephys_spike_sorting/blob/main/ecephys_spike_sorting/modules/quality_metrics/README.md] and the similar module in the Spike Interface package [https://spikeinterface.readthedocs.io/en/latest/modules/qualitymetrics.html]). The manuscript does not compare these approaches to those proposed here, but some of the same statistics already exist (amplitude cutoff, median spike amplitude, refractory period violation).
(c) Some of the RIGOR criteria are qualitative and must be visually assessed manually. Conceptually, these features make sense to include as metrics to examine, but would ideally be applied in a standardized way across the field. The manuscript doesn't appear to contain a detailed protocol for how to assess these features. A procedure for how to apply these criteria for curating non-IBL data (or for implementing an automated classifier) would be helpful.
Other Comments:
(1) How did the authors select the metrics they would use to evaluate reproducibility? Was this selection made before doing the study?
(2) Was reproducibility within-lab dependent on experimenter identity?
(3) They note that UCLA and UW datasets tended to miss deeper brain region targets (lines 185-188), but they do not speculate about why these labs show systematic differences. Were they not following the standardized procedures?
(4) The authors suggest that geometrical variance (difference between planned and final identified probe position acquired from reconstructed histology) in probe placement at the brain surface is driven by inaccuracies in defining the stereotaxic coordinate system, including discrepancies between skull landmarks and the underlying brain structures. In this case, the use of skull landmarks (e.g. bregma) to determine locations of brain structures might be unreliable and provide an error of ~360 microns. While it is known that there is indeed variance in the position between skull landmarks and brain areas in different animals, the quantification of this error is a useful value for the field.
(5) Why are the thalamic recording results particularly hard to reproduce? Does the anatomy of the thalamus simply make it more sensitive to small errors in probe positioning relative to the other recorded areas?
-
-
viewer.athenadocs.nl viewer.athenadocs.nl
-
β
β (beta) measures an asset's systematic risk: the portion of its risk that co-moves with the overall market and cannot be diversified away.
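One way to make this concrete: β is the covariance of an asset's returns with the market's returns, divided by the market's variance. A minimal Python sketch; the return series are invented for illustration, not data from the source:

```python
# Estimate beta = Cov(asset, market) / Var(market).
# The return series below are made-up illustrative numbers.
asset_returns = [0.02, -0.01, 0.03, 0.01, -0.02]
market_returns = [0.015, -0.005, 0.02, 0.01, -0.015]

n = len(asset_returns)
mean_a = sum(asset_returns) / n
mean_m = sum(market_returns) / n

# Sample covariance and variance (divide by n - 1)
cov = sum((a - mean_a) * (m - mean_m)
          for a, m in zip(asset_returns, market_returns)) / (n - 1)
var_m = sum((m - mean_m) ** 2 for m in market_returns) / (n - 1)

beta = cov / var_m
print(round(beta, 2))  # prints 1.41 for these series
```

A β above 1 means the asset tends to amplify market moves (more systematic risk); a β below 1 means it moves less than the market.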
-
-
www.youtube.com www.youtube.com
-
Royal Typewriter KMG Mainspring Drawband Tightened Adjusted Tension by [[Phoenix Typewriter]]
On the left rear corner underneath the carriage when moved to the right, one can easily see the mainspring and drawband assembly. Just behind it is a worm drive operated by a screw. Turning this screw counterclockwise will advance the worm drive to the left and increase the tension on the mainspring.
-
-
vanbrrlekom.github.io vanbrrlekom.github.io
-
the same
similar?
-
s the spread of proportions of responses
this is already an interpretation. you should first say what is shown descriptively.
-
We tested whether this was the case by
wordy
-
however, only illustrates the total number of categorizations across all participants
I wonder whether this should be combined with the next paragraph because they belong together.
-
he multiple categories condition
start sentence with this because it helps the reader
-
We fit the data to Bayesian mixed-effects models. In all models, facial gender (0 to 100 in seven steps) and response options (one-dimensional, two-dimensional) were included as fixed effects. Additionally, all models included varying intercepts for both participants and trials and varying slopes for facial gender. Exploratory plotting of the data suggested that the relationship between facial gender and rated gender was non-linear, suggesting that modelling which treated facial gender as a linear predictor would be misspecified
see my comments above.
maybe just say "same as in Study 1?)
-
dichotomizing the variables
del. does not add anything
-
I don’t know
in the above section, the use of italics helps here without making the reader seasick
-
in a lab
lab? The monkey lab :)
-
separate continua
unclear what this refers to: The woman/man continua or the two conditions. If only continua, then does order of trials apply only to the two-dimensional condition?
-
he woman continuum and once using the man continuum. In the woman continuum, the anchors were marked not woman and woman. In the man continuum the anchors were marked not man and man.
more efficient in one sentence: once on a woman continuum (anchors were not woman and woman) and once on a man ...
(you have to add italics, of course :)
-
by the researchers
I would just give the ref
-
the
what is "the lab"? why bring up lab at all?
-
-
ageoftransformation.org ageoftransformation.org
-
Evolution therefore involves an increasing complexification of both the hardware (material-biological organisation) and software (cognitive capabilities) of life.
The prediction of greater complexity arises from the Free Energy Principle and Active Inference, not from evolution.
-
Evolutionary competition between species selects for living systems which can maximise the efficiency with which they harness and dissipate energy in harmony with environmental conditions. At every new stage of this process, life is navigating its relationship with earth in an evolutionary dance between its hardware and software. The X-curve at the
I like his attempt to use evolution-related vocabulary to frame his narrative; he has a different intuition of Friston's free energy principle than I do. My understanding of Friston does not depend on evolution-related memes for its dynamics.
-
Life can therefore be understood as an energy-dissipating system that contributes to increasing entropy in the universe by extracting ‘free energy’ in the environment and dissipating it as heat, all as efficiently as possible through paths of least energy.
Perhaps another way to describe what is happening instead of "energy-dissipation" is "Life translates 'free (random) energy' into patterns which can persist and evolve into meta-patterns and so on"
Or "Emergence arises from patterns able to form & self-sustain in the presence of continuous dynamics (energy) "
-
Life can therefore be understood as an energy-dissipating system that contributes to increasing entropy in the universe by extracting ‘free energy’ in the environment and dissipating it as heat, all as efficiently as possible through paths of least energy. Through millions of years of evolution, this has driven living systems - from cells to the biosphere - to increase in complexity, resulting in higher forms of energy
Surprised to see Karl Friston (who identified the Free Energy Principle) has yet to be mentioned anywhere so far.
-
In The Demon in the Machine: How Hidden Webs of Information Are Finally Solving the Mystery of Life, theoretical physicist Paul Davies of Arizona State University argues that life can be defined as an astounding combination of ‘hardware’ and ‘software’. The ‘hardware’ is a configuration of matter which harnesses energy from its environment with surprising efficiency and dissipates it as waste back into the environment. The ‘software’ consists of the complex information structures – such as the genetic coding – by which that configuration of matter and energy is organised and instructed to self-reproduce.
Beautifully summarized. I was introduced to this "biophysics of intelligence" lens by the podcast Machine Learning Street Talk's interview of Karl Friston. https://youtu.be/V_VXOdf1NMw? The first 10 minutes are particularly of interest to technologists interested in the future of AI beyond LLMs (e.g. chat gpt)
-
planetary phase shift
Conceptually interesting . Skeptical that the precision of understanding and mechanisms from physics and chemistry apply as precisely to planetary dynamics as he suggests, but this claim is a logical inference derived from the free energy principle as posited by Karl Friston
-
-
www.oudaily.com www.oudaily.com
-
Alexis Washington
Seth, I hope you don't hate this for not being level lol (it's the only one in this gallery)
-
-
www.pnas.org www.pnas.org
-
for: Major Evolutionary Transitions in individuality, MET, MET in Individuality
- Abstract
- The evolution of life on earth has been driven by a small number of major evolutionary transitions.
- These transitions have been characterized by individuals that could previously replicate independently, cooperating to form a new, more complex life form.
- For example,
- archaea and eubacteria formed eukaryotic cells, and
- cells formed multicellular organisms.
- However, not all cooperative groups are en route to major transitions.
- How can we explain why major evolutionary transitions have or haven’t taken place on different branches of the tree of life?
- We break down major transitions into two steps:
- the formation of a cooperative group and
- the transformation of that group into an integrated entity.
- We show how these steps require
- cooperation,
- division of labor,
- communication,
- mutual dependence, and
- negligible within-group conflict.
- We break down major transitions into two steps:
- We find that certain ecological conditions and the ways in which groups form have played recurrent roles in driving multiple transitions.
- In contrast, we find that other factors have played relatively minor roles at many key points, such as
- within-group kin discrimination and
- mechanisms to actively repress competition.
- More generally, by identifying the small number of factors that have driven major transitions, we provide a simpler and more unified description of how life on earth has evolved.
- Abstract
-
-
www.youtube.com www.youtube.com
-
Royal KMM KMG Typewriter Feet Spacers Original Smashed Rubber Replaced by [[Phoenix Typewriter]]
Squished rubber feet spacers on the Royal standard typewriters can interfere with the universal bar, and when they do, they'll need replacement.
This is the same sort of interference seen on Olympia SM3s due to their squished/flattened rubber gaskets, though the symptoms are different.
"Phoenix typewriter. Have a Royal day!" <br /> A slightly different sign off from Duane's usual... :)
-
-
-
ROYAL KMM Replacing Type Bar Link Remove Arm Repaired Typewriter by [[Phoenix Typewriter]]
This is roughly what I expected to be the case. I've got to shift the fulcrum pivot wire so I can reattach my Q and @ on a Royal KMG.
Roughly similar to Gerren's video on swapping out typefaces, but with a slightly different technique for speed of doing that. See: https://hypothes.is/a/I_-9rBV2Ee-eMotzy9_Z-Q
-
-
www.youtube.com www.youtube.com
-
How To Guide- Swapping Type Bars on a Manual Typewriter -Full tips and tricks by [[The HotRod Typewriter Co.]]
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
In this paper, Seon and Chung investigate changes in own risk-taking behavior, when they are being observed by a "risky" or "safe" player. Using computational modeling and model-informed fMRI, the authors present solid evidence that participants adjust their choice congruent with the other player's type (either risky or safe). The conclusions of the paper are an important contribution to the field of social decision-making as they show a differentiated adjustment of choices and not just a universally riskier choice behavior when being observed as has been claimed in previous studies.
-
Reviewer #1 (Public review):
Summary:
Seon and Chung's study investigates the hypothesis that individuals take more risks when observed by others because they perceive others to be riskier than themselves. To test this, the authors designed an innovative experimental paradigm where participants were informed that their decisions would be observed by a "risky" player and a "safe" player. Participants underwent fMRI scanning during the task.
Strengths:
The research question is sound, and the experimental paradigm is well-suited to address the hypothesis.
Weaknesses:
I have several concerns. Most notably, the manuscript is difficult to read in parts, and I suggest a thorough revision of the writing for clarity, as some sections are nearly incomprehensible. Additionally, key statistical details are missing, and I have reservations about the choice of ROIs.
-
Reviewer #2 (Public review):
Summary:
This study aims to investigate how social observation influences risky decision-making. Using a gambling task, the study explored how participants adjusted their risk-taking behavior when they believed their decisions were being observed by either a risk-averse or risk-seeking partner. The authors hypothesized that individuals would simulate the choices of their observers based on learned preferences and integrate these simulated choices into their own decision-making. In addition to behavioral experiments, the study employed computational modeling to formalize decision processes and fMRI to identify the neural underpinnings of risky decision-making under social observation.
Strengths:
The study provides a fresh perspective on social influence in decision-making, moving beyond the simple notion that social observation leads to uniformly riskier behavior. Instead, it shows that individuals adjust their choices depending on their beliefs about the observer's risk preferences, offering a more nuanced understanding of how social contexts shape decision-making. The authors provide evidence using comprehensive approaches, including behavioral data based on a well-designed task, computational modeling, and neuroimaging. The three models are well selected to compare at which level (e.g., computing utility, risk preference shift, and choice probability) the social influence alters one's risky decision-making. This approach allows for a more precise understanding of the cognitive processes underlying decision-making under social observation.
Weaknesses:
While the neuroimaging results are generally consistent with the behavioral and computational findings, the strength of the neural evidence could be improved. The authors' claims about the involvement of the TPJ and mPFC in integrating social information are plausible, but further analysis, such as model comparisons at the neuroimaging level, is needed to decisively rule out alternative interpretations that other computational models suggest.
-
Reviewer #3 (Public review):
Summary:
This is an important paper using a novel paradigm to examine how observation affects the social contagion of risk preferences. There is a lot of interest in the field about the mechanisms of social influence, and adding in the factor of whether observation also influences these contagion effects is intriguing.
Strengths:
(1) There is an impressive combination of a multi-stage behavioural task with computational modelling and neuroimaging.
(2) The analyses are well conducted and the sample size is reasonable.
Weaknesses:
(1) Anatomically it would be helpful to more explicitly distinguish between dmPFC and vmPFC. Particularly at the end of the introduction when mPFC and vmPFC are distinguished, as the vmPFC is in the mPFC.
(2) The authors' definition of ROIs could be elaborated on further. They suggest that peaks are selected from neurosynth for different terms, but were there not multiple peaks identified within a functional or anatomical brain area? This section could be strengthened by confirming with anatomical ROIs where available, such as the atlases here http://www.rbmars.dds.nl/lab/CBPatlases.html and the Harvard-Oxford atlases.
(3) How did the authors ensure there were enough trials to generate a reliable BOLD signal? The scanned part of the study seems relatively short.
(4) It would be helpful to add whether any brain areas survived whole-brain correction.
(5) There is a concern that mediation cannot be used to make causal inferences and much larger samples are needed to support claims of mediation. The authors should change the term mediation in order to not imply causality (they could talk about indirect effects instead) and highlight that the mediation analyses are exploratory as they would not be sufficiently powered (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2843527/).
(6) The authors may want to speculate on lifespan differences in this susceptibility to risk preferences given recent evidence that older adults are relatively more susceptible to impulsive social influence (Zhu et al, 2024, comms psychology).
-
-
runestone.academy runestone.academy
-
Parentheses have the highest precedence and can be used to force an expression to evaluate in the order you want. Since expressions in parentheses are evaluated first, 2 * (3-1) is 4, and (1+1)**(5-2) is 8. You can also use parentheses to make an expression easier to read, as in (minute * 100) / 60, even though it doesn’t change the result. Exponentiation has the next highest precedence, so 2**1+1 is 3 and not 4, and 3*1**3 is 3 and not 27. Can you explain why? Multiplication and both division operators have the same precedence, which is higher than addition and subtraction, which also have the same precedence. So 2*3-1 yields 5 rather than 4, and 5-2*2 is 1, not 6. Operators with the same precedence (except for **) are evaluated from left-to-right. In algebra we say they are left-associative. So in the expression 6-3+2, the subtraction happens first, yielding 3. We then add 2 to get the result 5. If the operations had been evaluated from right to left, the result would have been 6-(3+2), which is 1.
CORRECT ORDER: () > ** > * / > + -
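The rules quoted above can be checked directly in Python; each assertion below mirrors an example from the text (the last line, on ** being right-associative, is an extra fact not stated in the excerpt):

```python
# Precedence, highest to lowest: ()  then  **  then  * / // %  then  + -
assert 2 * (3 - 1) == 4          # parentheses evaluate first
assert (1 + 1) ** (5 - 2) == 8
assert 2 ** 1 + 1 == 3           # ** binds tighter than +
assert 3 * 1 ** 3 == 3           # ** binds tighter than *
assert 2 * 3 - 1 == 5            # * before -
assert 5 - 2 * 2 == 1
assert 6 - 3 + 2 == 5            # same precedence: evaluate left-to-right
assert 2 ** 3 ** 2 == 512        # ** is the exception: right-associative
print("all precedence checks pass")
```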
-
-
library.achievingthedream.org library.achievingthedream.org
-
The fight for liberty led some Americans to manumit their slaves, and most of the new northern states soon passed gradual emancipation laws. Manumission also occurred in the Upper South, but in the Lower South, some masters revoked their offers of freedom for service, and other freedmen were forced back into bondage. The Revolution’s rhetoric of equality created a “revolutionary generation” of slaves and free blacks that would eventually encourage the antislavery movement. Slave revolts began to incorporate claims for freedom based on revolutionary ideals. In the long-term, the Revolution failed to reconcile slavery with these new egalitarian republican societies, a tension that eventually boiled over in the 1830s and 1840s and effectively tore the nation in two in the 1850s and 1860s.
The manumission of slaves was actually one of the major factors that eventually sprang the antislavery movement into action and brought the northern and southern states into conflict.
-
Political and social life changed drastically after independence. Political participation grew as more people gained the right to vote. In addition, more common citizens (or “new men”) played increasingly important roles in local and state governance. Hierarchy within the states underwent significant changes. Locke’s ideas of “natural law” had been central to the Declaration of Independence and the state constitutions. Society became less deferential and more egalitarian, less aristocratic and more meritocratic.
The article mentions Locke's "natural law," the central idea that all people of all genders and races have the right to life, liberty, and property: the fundamentals of freedom and of our constitution today.
-
The new states drafted written constitutions, which, at the time, was an important innovation from the traditionally unwritten British Constitution. Most created weak governors and strong legislatures with regular elections and moderately increased the size of the electorate. A number of states followed the example of Virginia, which included a declaration or “bill” of rights in their constitution designed to protect the rights of individuals and circumscribe the prerogative of the government. Pennsylvania’s first state constitution was the most radical and democratic. They created a unicameral legislature and an Executive Council but no genuine executive. All free men could vote, including those who did not own property. Massachusetts’ constitution, passed in 1780, was less democratic but underwent a more popular process of ratification. In the fall of 1779, each town sent delegates — 312 in all — to a constitutional convention in Cambridge. Town meetings debated the constitution draft and offered suggestions. Anticipating the later federal constitution, Massachusetts established a three-branch government based on checks and balances between the branches. Unlike some other states, it also offered the executive veto power over legislation. 1776 was the year of independence, but it was also the beginning of an unprecedented period of constitution-making and state building.
This time was crucial for the building of the first United States government but also excluded the lives of many who weren't "white".
-
Like the earlier distinction between “origins” and “causes,” the Revolution also had short- and long-term consequences. Perhaps the most important immediate consequence of declaring independence was the creation of state constitutions in 1776 and 1777. The Revolution also unleashed powerful political, social, and economic forces that would transform the post-Revolution politics and society, including increased participation in politics and governance, the legal institutionalization of religious toleration, and the growth and diffusion of the population. The Revolution also had significant short-term effects on the lives of women in the new United States of America. In the long-term, the Revolution would also have significant effects on the lives of slaves and free blacks as well as the institution of slavery itself. It also affected Native Americans by opening up western settlement and creating governments hostile to their territorial claims. Even more broadly, the Revolution ended the mercantilist economy, opening new opportunities in trade and manufacturing.
The American Revolution had everlasting consequences for the country as a whole, affecting the lives of women, slaves, and Native Americans.
-
-
Local file Local file
-
no longer the sign of selfless devotion or of an elevated sou
suffering for love used to be seen as a good thing
-
-
accessmedicine.mhmedical.com accessmedicine.mhmedical.com
-
The lung injury may be direct, as occurs in toxic inhalation, or indirect, as occurs in sepsis
Yes
-
-
www.researchgate.net www.researchgate.net
-
Dissections provided evidence that 69% (295/427) of the captured females had sperm plugs in their reproductive tracts, indicating that these crabs mated this year. The percentage of females with swollen spermathecae but no sperm plugs (indicating mating success during a previous year) was 14%. This means that 356/427 (83%) of the females examined could produce viable eggs in the year of collection. Crabs from each of the 3 ports showed similar mating trends, and can thus be grouped together for analysis (Table 1
Question #4) In the article, "ovigerous" is defined as referring to female crabs that are carrying eggs. The term "spermatheca" refers to the reproductive organ in female crabs that stores sperm. The text explains that "presence of sperm in the spermathecae indicates both recent and past mating success." The evidence that a female Dungeness crab had recently mated is the presence of a sperm plug in her reproductive tract. The study found that 69% of the captured females had sperm plugs, indicating they mated in the year of collection. Additionally, it reported that 83% of the females could produce viable eggs based on the presence of either sperm plugs or swollen spermathecae. Regarding mating frequency, the text notes that female crabs do not extrude eggs every year. Some females may skip egg production in certain years, as indicated by previous studies referenced in the article. However, the data suggest that female crabs that are mature and have molted typically mate after each molt, leading to variable annual reproductive success.
Question) Given that a significant percentage of female Dungeness crabs were found to have recently mated, yet not all extrude eggs annually, what further research could be conducted to understand the factors influencing the variability in reproductive success among these females, and how might environmental conditions or population dynamics play a role in their mating and egg-laying behaviors?
-
-
link.springer.com link.springer.com
-
n Ulithi Atoll, Federated States of Micronesia, we have documented an unusual phase shift from reefs with a diverse stony coral assemblage to reefs dominated by a single species of stony coral: Montipora sp.—a coral-to-coral phase shift.
The shift to one coral species (Montipora) over others is rare. How does this phase shift impact other organisms in the reef, like fish and invertebrates? Could it lead to a decline in species that depend on diverse coral for shelter?
-
-
-
“I believe Donald Trump is a danger to the well being and security of America,” she said.
"I believe" really helps us see that this is very opinionated before we actually read the quote.
-
If Trump wins, Harris said, “He’s going to sit there, unstable and unhinged, plotting his revenge, plotting his retribution, creating an enemies list.”
Any proof that he is going to do this if he wins? Or is this just what she/the left thinks will happen?
-
-
-
To change that port, use the arg -ipfs-gateway-address /ip4/127.0.0.1/tcp/8080 with a different port value
ipfs gateway port setting
-
-
www.americanyawp.com www.americanyawp.com
-
The British Empire competed with French, Spanish, Portuguese, Dutch, and even Scottish explorers to claim land in North America and the Caribbean – much of it already settled by Native Americans. This diverse territory would continue to be contested throughout the eighteenth century
This sentence shows the who; it's important to remember who in history we are talking about. One of the biggest reasons it is important to keep track of who is because of cultural differences. For example, Dutch culture is significantly different from Spanish culture. Knowing that all these explorers wanted to claim land helps the reader better understand the background. Lastly, knowing what was being claimed allows the reader to feel more involved and better comprehend what occurred.
-
-
www.americanyawp.com www.americanyawp.com
-
but one slave trader alleged that before 1788, the ship carried as many as 609 enslaved Africans.
This part of a sentence may not seem too significant by itself, but the year and the number of enslaved people are highly important. Knowing when slavery took place is very relevant. On the other hand, knowing that a ship could carry 609 enslaved Africans at a time truly puts things in perspective. This shows how bad slavery could be when pushed to its fullest, worst potential. Slavery was horrible and unfortunately affected many people.
-
The Brookes print dates to after the Regulated Slave Trade Act of 1788, but still shows enslaved Africans chained in rows using bilboes, which were iron leg shackles used to chain pairs of enslaved people together during the Middle Passage throughout the seventeenth and eighteenth centuries.
To me this sentence is a good definition that gives information on what slave ships were like. Knowing about the Brookes and the trade act is important for better understanding slavery. When reading through, knowing about the iron leg shackles helps put things in perspective. Slavery was horrible, and reading definitions or better-explained sentences helps to better grasp what took place.
-
-
eliterature.org eliterature.org
-
This fanciful scenario is meant to suggest that the place of writing is again in turmoil, roiled now not by the invention of print books but the emergence of electronic literature. Just as the history of print literature is deeply bound up with the evolution of book technology as it built on wave after wave of technical innovations, so the history of electronic literature is entwined with the evolution of digital computers as they shrank from the room-sized IBM 1401 machine on which I first learned to program (sporting all of 4K memory) to the networked machine on my desktop, thousands of times more powerful and able to access massive amounts of information from around the globe. The questions that troubled the Scriptorium are remarkably similar to issues debated today within literary communities. Is electronic literature really literature at all? Will the dissemination mechanisms of the Internet and World Wide Web, by opening publication to everyone, result in a flood of worthless drivel? Is literary quality possible in digital media, or is electronic literature demonstrably inferior to the print canon? What large-scale social and cultural changes are bound up with the spread of digital culture, and what do they portend for the future of writing?
The piece stands out for its evocative visual style that merges primitive aesthetics with a contemporary sensibility and for its deep historical roots. Its refusal to delineate clearly between character perception and objective reality challenges readers to reconsider the nature of truth within storytelling. In my opinion, Leishman's work is refreshingly original and transformative in the digital literature landscape—it pushes boundaries and invites a level of interaction and immersion that is profoundly thought-provoking and representative of the medium's potential to redefine literary engagement.
-
-
runestone.academy runestone.academy
-
The modulus operator turns out to be surprisingly useful. For example, you can check whether one number is divisible by another—if x % y is zero, then x is divisible by y. Also, you can extract the right-most digit or digits from a number. For example, x % 10 yields the right-most digit of x (in base 10). Similarly x % 100 yields the last two digits.
basically, % calculates the remainder of the division
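In code, the remainder behavior makes both tricks from the excerpt one-liners:

```python
# % yields the remainder of division
assert 17 % 5 == 2          # 17 = 3 * 5 + 2

# divisibility test: x is divisible by y exactly when x % y == 0
assert 12 % 3 == 0
assert 13 % 3 != 0

# digit extraction in base 10
assert 1234 % 10 == 4       # right-most digit
assert 1234 % 100 == 34     # last two digits
print("all modulus checks pass")
```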
-
-
-
Payment will go to the US Treasury and other federal agencies directly affected by the collision or involved in the response.
Will this be used for compensation to families, rebuilding, etc?
-
-
www.cnn.com www.cnn.com
-
When asked about mounting criticism from opponents who suggested reconsidering the Menendez brothers’ sentence was a political move, Gascón said, “There’s nothing political about this,”
This quote is an example of an opinion being made by the person being interviewed. Despite this it is a fact that this information was said and used by the journalist.
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
These results suggest that even in the absence of clear numerical trophic cascades, demographic rates of non-target species may be influenced by the removal of top predators.
The study’s findings indicate that the presence or absence of top predators can affect factors such as size and mortality rates of lower trophic level species, underscoring the nuanced dynamics of coral reef communities.
-
For example, when predatory fish are removed by fishing, demographic traits such as size and longevity may increase for non-targeted species in lower trophic levels because of reductions of predation intensity or predation risk.
This sentence articulates an important prediction regarding the ecological consequences of fishing on coral reef ecosystems, specifically concerning the demographic responses of non-targeted prey species following the removal of top predators.
-
Even though numerical trophic cascades are rare in marine systems, indirect effects on demography or life histories of prey species may occur, and such indirect effects may have strong impacts throughout the community.
This sentence highlights a critical aspect of ecological dynamics in marine systems, specifically within coral reefs. It acknowledges that while traditional models of trophic cascades—where predator removal leads to increased prey populations—are infrequent in marine environments, the indirect consequences of these changes can still significantly affect community structure and function.
-
. However, these effects are likely not restricted to changes in abundance or size, but may include changes in demography and life histories that are more difficult to detect, and yet may still strongly influence the ecology of these systems.
This sentence emphasizes the complexity of ecological impacts caused by fishing in coral reef ecosystems. While many studies have focused on the more apparent consequences of fishing—such as changes in the abundance and size of target fish species—the authors argue that the effects extend beyond these metrics.
-
Through detailed analyses of focal species, we found that size and longevity of a top predator were lower at fished Kiritimati than at unfished Palmyra.
This sentence highlights the key findings of the study regarding the impact of fishing on top predator species in two contrasting environments: Kiritimati, where human fishing activities are prevalent, and Palmyra, a protected area with no fishing.
-
-
www.cnn.com www.cnn.com
-
She added that federal regulation that would require companies to abide by specific protocols is also needed to keep up the rapid pace of AI advancement.
This can be taken as a fact because there is evidence that will be able to back it up.
-
But not all users dislike the feature.
This is a conclusion. The users who are more in favor of the AI tool do not pay the fee to remove it, as stated above.
-
-
bgr.com bgr.com
-
A reliable source told the blog that the film is a vampire movie set in the 1920s.
The fact that the journalist described his source as reliable could be biased. We don't know this source, so it could be a bad one, or worse, an outright lie.
-
The same report learned one tidbit about Christopher Nolan’s plot. Apparently, the film isn’t set in the present day, but it’s unclear if that means it’ll be about the future or the past.
This quote gives us a better understanding of how much the journalist knows about the film. Before this, all that was known was the release date and the stars. The fact that the setting isn't really known shows how early in production this film is.
-
Nolan has partnered with Universal Pictures for the film, which has a July 17th, 2026 release date.
This journalist is primarily focused on human interest in this article. Christopher Nolan is a big-name director, so even just a release date can spark mass interest in any of his films.
-
-
docdrop.org docdrop.org
-
gently
This adverb wasn't necessary; they could have just said, "The metal rod was slid into the graduated cylinder."
-
-
www.americanyawp.com www.americanyawp.com
-
Bay shillings
"Bay shillings" is a term from the Massachusetts colony. It refers to a silver coin used there in the 17th century.
-
-
www.inc.com www.inc.com
-
I mean, it seems obvious that Amazon would pay attention to what Walmart is doing. The two companies are, quite literally, massive competitors.
Another conclusion: there is nothing to prove this.
-
Amazon Just Announced an Unexpected New Benefit That Prime Members Are Going to Love A new Amazon Prime fuel discount is a direct copy of Walmart’s popular benefit.
Right out the gate with the title there is a conclusion being made. The fact is that Amazon just announced a new benefit, the conclusion is that Prime Members are going to love it. There is no way to prove that, it is a conclusion.
-
-
www.americanyawp.com www.americanyawp.com
-
Laws of Nature and of Nature’s God
Referring to the Bible and human nature.
-
-
www.americanyawp.com www.americanyawp.com
-
siege and capitulation of Charleston
Armies would lay siege to heavily fortified areas, usually castles or cities, to force their enemy into submission by cutting them off from the rest of their army and slowly starving them out.
-
plundered
Basically looting
-
Colonel M’Girth and his soldiers
Colonel is a military rank.
-
inmates could expect no mercy.
Who were the inmates, and what crimes did they commit?
-
-
www.americanyawp.com www.americanyawp.com
-
Once upon a time, in the month of bleak winds, a Pawkunnawkut Indian named Tackanash, who lived upon the main land, near the brook which was ploughed out by the great trout, was caught with his dog upon one of the pieces of floating ice, and carried in spite of his endeavours to Martha’s Vinyard Island….
I agree that legends were very popular, and they remind me of Beowulf as well.
-
He said that much people grew up around him, men who lived by hunting and fishing, while their women planted the corn, and beans, and pumpkins. They had powwows, he said, who dressed themselves in a strange dress, muttered diabolical words, and frightened the Indians till they gave them half their wampum. Our fathers knew by this, that they were their ancestors, who were always led by the priests—the more fools they! Once upon a time
We need to consider not only Moshup's point of view but also that of the tribe he claims to have protected by killing the bird; we do not get their side of the story.
-
Moshup said, a great bird whose wings were the flight of an arrow wide, whose body was the length of ten Indian strides, and whose head when he stretched up his neck peered over the tall oak-woods, came to Moshup’s neighbourhood. At first, he only carried away deer and mooses; at last, many children were missing. This continued for many moons. Nobody could catch him, nobody could kill him. The Indians feared him, and dared not go near him; he in his turn feared Moshup,
This story is significant to the historical context of the Native Americans, as it describes their history and how happy they were until a great bird came to disrupt their life. We don't know what this bird represents, but it ruined their lives and stole their children, and Moshup put a stop to it by killing the bird.
-
which he caught by wading after them into the great sea, and tossing them out, as the Indian boys do black bugs from a puddle.
Comparing the giant's actions to Native American practices might imply that the giant was once a tribe member himself before coming to this island.
-
ancient giant who lived on Martha’s Vineyard Island and offered stories about the history of the region.
How old is the giant, and how did he get to this island? And why does he choose to stay there?
-
The Legend of Moshup, 1830
This is when an English speaker obtained the story, but the natives could have been telling it for hundreds of years by that point, making it an ancient folk tale.
-
I hear the stranger ask, “Who was he?” I hear my brothers ask, “Was he a spirit from the shades of departed men,
In context, the audience of this story is both Tackanash's tribe and the other tribes that may have heard it from him and his tribe; outside that context, both we and future Native tribes are the target audience.
-
he found the man whose existence had been doubted by many of the Indians, and believed to have been only seen by deceived eyes, heard by foolish ears, and talked of by lying tongues, living in a deep cave near the end of the island, nearest the setting sun.
When tales are passed down from generation to generation, it is believed they are only stories to scare children or to pass the time, but often they hold a lesson or some measure of truth, as Tackanash found out.
-
This was the beginning of fog, which since, for the long space between the Frog-month and the Hunting-month, has at times obscured Nope and all the shores of the Indian people.
I believe what is being implied here is that when Moshup smokes his pipe, since it is so big and he smokes for so long, it creates an obscuring fog that is actually just smoke.
-
If it is not true, I am not the liar…”
Is he implying in some way that, while smoking, he could have gone to this island and eaten the children himself, making him not the hero of the story but the villain?
-
hills of the thunder?
What are the hills of thunder, and are they connected to spirits in some way?
-
Folk tales offer a valuable window into the ways that Native Americans understood themselves and the wider world.
Access to the Native American people and their folk tales was very rare and restricted, as the natives at first only shared the stories among themselves, so these stories are newer to us and less understood.
-
-
Local file Local file
-
migrant flows from the NTCA have continuedunabated
I didn't even know what NTCA meant until now.
-
As a result of the economicrecession of 2007–2009 and an increase in enforcement at the border, unauthor-ized migration from Mexico has steadily declined
This actually makes sense, but it was also a critical time because the unemployment rate had increased and only started decreasing by 2010.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax
The Social Security breach that happened recently was incredibly unwarranted, especially since there was a huge responsibility to protect Social Security information. It escalated to the point where everyone was advised online to freeze their credit immediately, since having your Social Security number stolen leaves you in an extremely vulnerable position.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
There might be some things that we just feel like aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies)
I think that a less obvious reason for privacy on social media is the fear of garnering an online presence that isn't true to who you actually are as a person. More specifically, if someone were to post certain aspects like their body, expensive clothes, or expensive food for example, a false narrative that the user is uber-rich may be fostered and ultimately may affect the user's relationships with others in real life.
-
-
Local file Local file
-
She doesn't cling to her own comfort;thus problems are no problem for her
I immediately thought of these quotes from the Buddha: "In the end, only three things matter: how much you loved, how gently you lived, and how gracefully you let go of things not meant for you," and "The root of all suffering is attachment." I believe that in order to achieve fulfillment and peace, one should go with the flow of life and not cling to one's own comfort and expectations.
-
The Master never reaches for the great;thus she achieves greatness.
There is a saying, "Exactly what you run away from you end up chasing," and I believe the opposite works as well: what you do not chase, you attract.
-
Think of the small as largeand the few as many
This is all about noticing small components that make up the whole picture and appreciating those.
-
-
cah.ucf.edu cah.ucf.edu
-
I had chosen to write aboutbetrayal for our creative writing exercise, and instead ofdissuading nine-year-old me from the mature topic, shegave me books to read, offered out-of-class writingassignments that she would grade on her own time, andnominated my name for every speaking role in anyschool event. Miss Sandy was the most underpaid andunderappreciated teacher, and yet she taught me farbeyond the 4th grade. In fact, the lessons she impartedstill influence my learning now, and one day thesenuances of language will be impressed onto my children.
Another moment in which her teacher's life lesson benefited her, helping her become a better writer by gaining more language to use for herself and, as she states, for her kids.
-
-
www.cnet.com www.cnet.com
-
Usernames will need to be unique and have two numbers appended to the end of them, which Signal states was done in order to help keep usernames "egalitarian and to minimize spoofing."
-