1. May 2026
1. 2c. Enable the profile: Select the profile → click Enable → click Test Access

We don't have any option to test access at the profile level.

    1. I would like to thank the authors for the thorough evaluation of the Cell-/SO-Check device. I believe that this work is very important as it will help other researchers assess the suitability of the device for their own work. However, I believe that not all of the conclusions drawn by the authors are supported by the data shown.

      The authors state that no linear relationship between spectrophotometer zinc and serum zinc was found (r = 0.03, p-value = 0.800) and that the spectrophotometer method failed to perform better than chance at identifying individuals with zinc deficiency (ROC-AUC = 0.547). Nevertheless, the authors come to the conclusion that "The Cell-/SO-Check device may be used to rank children in population-based studies in SSA according to their zinc status, [...].". This conclusion seems to be based on two arguments: The first argument is that, according to the authors, the difference between spectrophotometer zinc and serum zinc appears to be consistent, so that it may be possible to correct for the bias. The second argument is a high specificity of 90.91% of the spectrophotometer method when distinguishing between individuals with and without zinc deficiency.

      With regard to the first argument, the authors explain:

      "Nevertheless, the P-value for the paired samples t-test of both zinc and ferritin was significant (>0.05), suggesting that the bias albeit present, was consistent. This does not necessarily stipulate that the spectrometer and laboratory methods are incomparable. It could be a matter of calibration discrepancies. In fact, according to Bland & Altman, consistent bias can be adjusted for by subtracting the mean difference of the ferritin or zinc measurements by the spectrometer and the laboratory method from the measurement by the spectrometer method [23]."

      It is not clear to me what is meant by "paired samples t-test" as I did not find such a test described in the methods section. I'm assuming that this paragraph refers to the simple linear regression of the paired differences on the mean of the paired measurements, so that a significant p-value (< 0.05) in this context likely indicates that a trend is present in the data shown in Fig. 3a. However, to my understanding, the trend seen in Fig. 3a is not at all an indication of consistency of the bias, but merely a result of the fact that the variance of the serum zinc data is much higher than the variance of the data measured using spectrophotometry. The higher variance of the serum zinc data causes the mean of the paired measurements to be mostly controlled by serum zinc, meaning that a low mean of the paired measurements is likely associated with low serum zinc. If serum zinc is low, there is a larger chance that the corresponding spectrophotometer zinc concentration will exceed this value, even if the spectrophotometer data was simply random numbers drawn from a normal distribution with constant mean and standard deviation. The pattern observed would therefore be expected even for random data.
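
This point can be demonstrated with a quick simulation (a hypothetical sketch with made-up means and variances, not the study's data): even when the "spectrophotometer" values are pure noise carrying no information, a Bland-Altman-style comparison against a high-variance "serum" measurement produces a strong trend of the paired differences against the paired means.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: serum zinc varies widely, while the "spectrophotometer"
# values are random noise around a constant mean, i.e. they carry no
# information whatsoever about serum zinc.
serum = rng.normal(10.0, 3.0, n)
spectro = rng.normal(10.0, 0.5, n)

diff = spectro - serum           # paired differences
mean = (spectro + serum) / 2.0   # mean of the paired measurements

# Because the mean is dominated by the high-variance serum values, the
# differences correlate strongly (negatively) with the means, producing an
# apparent "trend" even though spectro is pure noise.
r = np.corrcoef(mean, diff)[0, 1]
print(round(r, 2))
```

With these illustrative variances the correlation comes out strongly negative, which is exactly the pattern one would see in a plot like Fig. 3a without any genuine relationship between the two methods.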

      In order for the discrepancy between spectrophotometer zinc and serum zinc to be correctable, there would still need to be a correlation between spectrophotometer zinc and serum zinc. A significant linear correlation was not found as shown by the Pearson correlation coefficient of 0.03 and the respective p-value of 0.8. Visual inspection of the data shown in the scatter plot in Fig. 2a also does not indicate that any other type of relationship between serum zinc and spectrophotometer zinc is present. The data therefore does not seem to support the authors' argument that correction of a presumed consistent bias could make this method suitable for determining zinc status of children in a population-based study.

The second argument presented by the authors in support of the conclusion is that a high specificity of 90.91% with a corresponding sensitivity of 33.3% was found when distinguishing between individuals with and without zinc deficiency. I might be missing something, but it seems to me that the given specificity and sensitivity contradict the numbers shown in Tab. 2. According to Tab. 2, of the 72 study participants, 6 individuals were identified as zinc-deficient by a serum zinc concentration < 7.7 micromol/L (the gold standard the spectrophotometer method is tested against), whereas 17 individuals were identified as zinc-deficient by spectrophotometry. If the sensitivity was 33.3%, then 2 of the 6 zinc-deficient individuals were "correctly" flagged by spectrophotometry. This means that 15 individuals were "incorrectly" classified as zinc-deficient and only 51 of the 66 individuals without zinc deficiency were "correctly" classified as not zinc-deficient. This would result in a specificity of 77%, which is very close to what would be expected by chance and consistent with the ROC-AUC very close to 0.5.
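
The back-calculation above can be reproduced in a few lines (the 2x2 cell values are inferred from the stated counts and sensitivity, not reported in the paper):

```python
# Back-calculation of the 2x2 table implied by Tab. 2 and a sensitivity of
# 33.3%. The individual cell values are inferred for illustration only.

total = 72
deficient_serum = 6       # zinc-deficient by serum zinc < 7.7 micromol/L (gold standard)
flagged_spectro = 17      # zinc-deficient by spectrophotometry

sensitivity = 1 / 3                                # stated 33.3%
true_pos = round(sensitivity * deficient_serum)    # correctly flagged deficient
false_pos = flagged_spectro - true_pos             # incorrectly flagged deficient
non_deficient = total - deficient_serum
true_neg = non_deficient - false_pos               # correctly classified non-deficient

specificity = true_neg / non_deficient
print(f"specificity = {specificity:.1%}")          # ~77%, not the stated 90.91%
```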

      Did the authors use a different cutoff value to calculate sensitivity and specificity than the one used in Tab. 2, possibly the "optimal" cutoff value determined from the ROC-plot? Even if a specific cutoff value can be found that would yield these results, it seems that this is very likely due to chance considering the ROC-AUC close to 0.5 and the same cutoff value is therefore unlikely to perform as well in future studies. Could the authors please clarify what cutoff values were used for detecting zinc deficiency using spectrophotometry for the data in Tab. 2 and for the stated specificity of 90.91% and sensitivity of 33.3%?

In summary, even though I understand the authors' point that a measuring device used in a population-based study does not have to fulfill the same requirements as a device intended for clinical use, the data shown in this article seem to be consistent with the interpretation that the spectrophotometer data do not contain any information about the zinc status of the study participants and are therefore not suitable for measuring zinc status in the population investigated, not even for the purposes of a population-based study.

    1. R0:

      Reviewer #1:

The authors have done appreciable work in collecting data on the health care issues related to rabies prevention in Tanzania. I have the following observations. Some more statistics, i.e. tables etc., may be added to the article to justify the results and discussion. The tables should then be discussed in detail, with the conclusion pointing out the steps needed to improve the prevalent chaos in their health care system. Costs per patient could be estimated, inclusive of loss of wages etc.; this would further improve the impact of the article. I couldn't open the supplementary GitHub site for more results, as it seemed to be a paid site.

      Reviewer #2:

      REVIEW of the article "Access to Rabies Post-Exposure Prophylaxis in Tanzania: A Mixed-Methods and Theoretically Informed Study to Inform Policy and Practice"

      This article is a comprehensive study addressing a critical global public health issue: access to rabies post-exposure prophylaxis (PEP) in Tanzania. The study adheres strictly to scientific research principles and is highly relevant, clearly structured, and methodologically rigorous. The use of mixed methods (quantitative analysis of large-scale data from the Integrated Bite Case Management (IBCM) platform and qualitative analysis of healthcare worker communications) within a theoretical model of access to healthcare is a key strength of the study, allowing us not only to document the facts but also to understand the mechanisms and context of barriers. This article makes a significant contribution to both the academic understanding of health systems issues in low- and middle-income countries and to intervention planning, particularly in light of the launch of funding for PEP programs by GAVI (the Vaccine Alliance).

      The relevance of this study is undeniable. Rabies remains a deadly, yet 100% preventable, disease that claims tens of thousands of lives annually, primarily among the poorest rural populations in Africa and Asia. The announcement of the "Zero by 30" goal and GAVI's decision to support the procurement of rabies vaccines create a historic window of opportunity. At this moment, field-based evidence is crucial, demonstrating that the mere availability of vaccines in a country is insufficient. This study brings this issue to the forefront by systematically identifying a cascade of interconnected barriers.

      The scientific novelty lies in the following:

      Application and development of the theoretical model. The authors not only utilize Penchansky and Thomas's classic model (five dimensions of access), but also empirically substantiate and add a sixth dimension—"Appropriateness." This crucial conceptual addition highlights provider competence as a separate, critical access factor, as clearly demonstrated by data on the inappropriate use of RIG or tetanus toxoid.

      Depth and Richness of Data. The combination of data on over 10,000 patients with an analysis of real-life healthcare worker conversations and concerns (>3,000 messages) provides a unique, comprehensive picture of the situation. This allows us to move beyond dry statistics ("5% experienced interruptions") to understanding the human tragedies and professional dilemmas behind these figures.

      Documenting Emerging Threats. The study documents the emergence of a new, dangerous practice—the use of rabies immunoglobulin (RIG, Equirab) as a substitute for vaccination, which has resulted in fatalities. This is a crucial observation for surveillance systems and regulatory agencies.

      The study's methodological framework is exemplary for public health research.

      Design: Mixed methods are entirely justified for studying the complex phenomenon of access. The quantitative component objectively captures the scale of the problem (distances, completion rates, interruptions), while the qualitative component reveals the causes, context, and subjective experiences.

      Theoretical Framework: The use of a structured conceptual framework (expanded healthcare access framework) lends systematicity to the analysis, prevents fragmented findings, and allows for the classification of barriers into clear categories, which is useful for developing targeted interventions.

      Triangulation: The authors consciously employ triangulation, systematically comparing findings from quantitative and qualitative data. This enhances the validity and reliability of the results, allowing for identification of areas where the data complement or corroborate each other.

      Ethics: The study has all necessary ethical approvals, and the process for obtaining informed consent from participants (healthcare workers) is described.

      Despite its undeniable strengths, the study has a number of limitations, some of which the authors honestly and thoroughly describe in the relevant section. This enhances the credibility of the study.

      The absence of a patient voice. This is the main methodological limitation, noted by the authors themselves. The study brilliantly analyzes the system from the service provider's perspective, but the "measure" of acceptability—that is, how much patients trust the service, how comfortable they are, and whether the services meet their cultural expectations—remains unexplored. This is an important gap, as cultural beliefs and stigma can be a significant barrier.

Sample bias. The data cover only those who reached the official medical facilities included in the KUSU. The most vulnerable groups, such as those who immediately turned to traditional healers or who did not have the means to pay even for the first visit, are therefore not represented.

      Reviewer #3:

The authors employed a mixed-methods approach, integrating quantitative data from the Integrated Bite Case Management platform with qualitative data from hotline calls and peer-support chats among health and veterinary workers, to identify barriers and facilitators affecting bite victims’ access to post-exposure prophylaxis across four regions of Tanzania. This work addresses a critical and timely public health issue, as inadequate knowledge of post-bite treatment and constraints in access to rabies biologics remain major contributors to preventable rabies deaths in dog-endemic regions, including Tanzania. I have just one major concern with the manuscript in its current version. While the authors correctly explain early in the Introduction (line 69 and Fig. 2) that post-exposure prophylaxis (PEP) comprises immediate wound washing, a series of rabies vaccinations, and administration of rabies immunoglobulin (RIG) for previously unvaccinated patients with severe (Category III) exposures, this framework is not consistently or adequately reflected in the subsequent analysis and interpretation. Specifically, across four of the five dimensions of the expanded healthcare access framework, the discussion focuses exclusively on rabies vaccines, with little to no consideration of rabies immunoglobulin (RIG). For patients with severe (Category III) exposures, RIG is arguably the most critical component of PEP, and its omission substantially weakens the analysis. Issues related to availability, accessibility, and affordability should explicitly address RIG, as barriers to RIG access often differ from—and are more severe than—those associated with vaccines. Without clearly incorporating RIG into these dimensions, the manuscript provides an incomplete assessment of the factors influencing access to life-saving PEP.
I understand the challenges surrounding RIG access, particularly as the authors allude to in line 363, where they note that the purchase of RIG had not previously been seen in Tanzania. Given this context, the implications of RIG unavailability should be discussed explicitly and in greater depth in the Discussion section. Doing so would help clarify how historical absence, procurement barriers, cost, cold-chain requirements, and limited clinician familiarity contribute to ongoing and future challenges in delivering complete PEP. A clearer, more focused discussion of RIG would strengthen the manuscript by providing a more realistic and policy-relevant assessment of barriers to comprehensive rabies prevention in Tanzania.

      Academic Editor:

      I would like to sincerely apologise for the delay you have incurred with your submission. It has been exceptionally difficult to secure reviewers to evaluate your study. We have now received three completed reviews; the comments are available below. The reviewers have raised significant scientific concerns about the study that need to be addressed in a revision.

      R1:

      Reviewer #1:

      good article

      Reviewer #2:

      This article's conclusions and table discuss the need to decentralize PEP services to priority facilities.

      I suggest the authors provide more specific criteria or models. For example: "In each ward with a population greater than X and/or located more than Y km from an existing PEP site, at least one medical facility (dispensary or health center) should be designated for rabies storage and vaccination. The selection of these facilities should be based on accessibility to the public (24/7) and the availability of trained personnel." This transforms a general recommendation into a measurable standard.
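
The suggested rule could be made concrete along these lines (a sketch only; the threshold values standing in for the reviewer's X and Y are left as parameters, and the example numbers are purely illustrative):

```python
# Hypothetical decision rule for designating a PEP facility in a ward. The
# population threshold (X) and distance threshold (Y) from the reviewer's
# suggested wording are parameters, not values from the article.

def needs_pep_site(population, km_to_nearest_pep_site,
                   pop_threshold, distance_threshold_km):
    """Return True if the ward should get a designated PEP facility."""
    return (population > pop_threshold
            or km_to_nearest_pep_site > distance_threshold_km)

# Illustrative thresholds: X = 10,000 people, Y = 25 km.
print(needs_pep_site(12_000, 5, 10_000, 25))   # population criterion met
print(needs_pep_site(3_000, 40, 10_000, 25))   # distance criterion met
print(needs_pep_site(3_000, 5, 10_000, 25))    # neither criterion met
```

Expressed this way, the recommendation becomes an auditable, measurable standard rather than a general appeal to decentralize.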

      Reviewer #3:

      Thank you for addressing reviewer comments from the previous round.

      R2:

      Reviewer #1:

      All comments have been addressed.

    1. R0:

      Reviewer #1:

      This paper examines factors associated with Shigella-attributed diarrhea among children aged 6–35 months in Malawi, including a novel assessment of seasonal effect modification. The analyses are technically rigorous and appropriately applied to the observational dataset, and the findings provide valuable evidence to guide targeted interventions, including the forthcoming Shigella vaccine rollout. I recommend publication pending a few minor revisions noted below.

      • The introduction explicitly situates the burden of Shigella in an economic context, but are there any other lenses, perhaps more human-centric, through which we can think about the implications of this burden?
      • Please include references for the categorization methods of diarrhea severity and WASH in the “Predictor Variables” section of the Methods.
      • How did you approach producing age group bins for this study? Was it data driven or decided a priori based on some contextual motivation?
      • Please add a statement justifying Poisson regression as the chosen analytic method.
      • Given that some samples were tested by culture, some by qPCR, and some by both, it would be beneficial to add more clarification in the methods about the different testing procedures and results classification. For the samples tested by both methods, what happened if one result was positive and the other negative? In the discussion you state “Notably, 43% of qPCR-positive cases were also culture-positive, supporting the clinical relevance of the qPCR-detected cases”; however, only 43% overlap between the two methods actually seems quite low – can you point to any other studies that have looked at this?
      • I believe when using generalized estimating equations (GEE) to account for clustering it is standard to report the number of clusters and the distribution of cluster size.
      • These analyses rely on an assumption of missing data at random/completely at random; however, the complete case analysis conducted excludes 32% of children with missing vaccination data. Please elaborate on this missingness in the limitations beyond the reduction in sample size to include the potential bias introduced if the data is not in fact missing randomly. Alternatively, it may be worthwhile to consider inclusion of the observations with unknown vaccination status as a third category – which may still have relevant interpretation given the reality of often not knowing children’s vaccination status when designing interventions.
      • Was there any consideration of prior antibiotic use among patients reporting to the clinic with diarrhea? Please elaborate how this may or may not influence these results (perhaps in the limitations).
      • Please standardize the spelling of “enrollment” throughout the manuscript (sometimes one vs. two l’s).
      • In Table 1, the Wasting “None” group percentage needs a decimal instead of a comma.

      Reviewer #2:

The findings are interesting, but the manuscript needs thorough revision considering the following critical points.
      • Clearly mention the inclusion and exclusion criteria for selection of patients in the current study.
      • What were the limitations of the study?
      • Shigella were isolated and identified using culturing. What was the species distribution of Shigella?
      • Mention the duration of the study (months/years) in the abstract.
      • Briefly describe how the culture and qPCR were used for detection of Shigella. Was Shigella DNA detected directly in the fecal sample, or was it detected from culture?
      • Define the criteria for the household drinking water source categorization: improved vs. unimproved.
      • The abbreviations used in the tables should be defined in the tables’ footnotes.

      Academic Editor:

      Two reviewers have evaluated your manuscript and provided their comments below. In particular, please provide more detail in the methods section on the microbiological methods for Shigella detection and add information on any missing critical variables e.g. in the table footnotes. In addition and given the high missingness for vaccination status, conducting a sensitivity analysis for the multivariable models while excluding vaccination would aid with the interpretation of the study findings.

    1. R0:

      Reviewer #1: This study is very interesting, and there are still few publications that link medicines quality with treatment outcomes. However, there are some comments to improve the information and discussion in the paper.

The comments are as follows:

      • Line 33: “of antimicrobials in low- and middle-income countries SF”. What does “countries SF” mean?

• Line 91, Graph: “we reanalyzed data from seven clinical trials of ofloxacin as a treatment for typhoid”. Is this clinical trial data from the same country as the population/API distribution included in the model? It is described as being from Vietnam, but one API distribution is from Bangladesh.

      • Line 129: probability to investigate....... There is no dot after probability.

• How did you measure or calculate the treatment outcomes from Bangladesh in the model, given that its API is known to be substandard (less than 60%; line 199), while the clinical trial used for modeling is from Vietnam?

• Line 246: “When %API followed the ciprofloxacin distribution (mean <100%; Fig 3B)”. Why does Fig 3B use the SF Cipro survey distribution data as the API estimate when it actually uses clinical trial data from OFLO? -> This is discussed in the limitations section.

• Lines 277–278: insufficient doses do not mean that the medicines are substandard or falsified. How do you explain the connection to the research title, which concerns the impact of substandard or counterfeit drugs?

      • Line 297: The potential risks of SF fluoroquinolones are also likely to have increased over time as the fluoroquinolone MIC of S. Typhi increases. Why does the potential of fluoroquinolones to act as SF increase as the fluoroquinolone MIC of S. Typhi increases? Can you explain this further?

      • Line 337 Our framework for quantifying the impact of SF antimicrobials demonstrates that the population-level impacts of SF antimicrobials on typhoid outcomes may be heightened when the MIC of S. Typhi is high and the SF antimicrobials have low %API.

      So, the thought process is this: SF medicine for antibiotics is assessed based on the %API results. Disease severity is assessed by the MIC. The assumption is that when the %API is outside the normal range, it is insufficient to kill the bacteria, thus prolonging the patient's symptoms. For typhoid, there are two drugs: Cipro and Oflo. The SF survey results only show one study for Oflo, while there are seven studies for Cipro. Clinical trials were conducted only for Oflo, with seven studies.

Questions:
      • Why is the SF Cipro survey discussed if it only uses clinical trial data for OFLO? -> This is discussed in the limitations section.
      • Is there any data on the relationship between %API and outcome, or the relationship between %API and the effect on MIC? If not, how is the impact of %API on MIC and clinical trial outcome calculated? Is it by estimating the %API to be a lower or higher dose?
      • The distribution of SF Meds Survey results is used to estimate the relative risk of outcome according to Step 3 in Figure 1 (“Estimate the population-level relative risk of outcome due to medicines with incorrect %API”), but is the probability (in addition to the distribution range) of the SF drug %API based on that distribution considered in the estimate/weighting when calculating the relative risk?

      • Finding step 3: we estimated a 4.1h (90% CrI: 0.75–8.7) reduction in fever clearance time for non-susceptible infections with an MIC of 1 mg/L (line 243) -> This is only for the target dose of 52 mg/kg

• Finding step 3: we estimated an 11h (90% CrI: 1.9–25) increase in fever clearance time for an MIC of 1 mg/L (line 246) -> This is only for the target dose of 52 mg/kg

      Reviewer #2:

      1. The analysis assumes that all antimicrobials administered in the clinical trials contained 100% API. Given that this assumption underpins the entire modeling framework, how do you justify its validity, and can you provide quantitative sensitivity analyses to demonstrate how violations of this assumption would affect your conclusions?
      2. The clinical data are restricted to Vietnam (1992–2001). How can the findings be considered applicable to current global health contexts, particularly in sub-Saharan Africa, where antimicrobial resistance patterns and treatment practices differ substantially?
      3. The medicine quality dataset appears sparse, heterogeneous, and geographically uneven. How do you address potential selection bias and lack of representativeness, and what is the impact of these limitations on the robustness of your estimates?
      4. Given that ciprofloxacin data are used as a proxy for ofloxacin due to limited availability, how do you justify this substitution pharmacologically and epidemiologically, and have you assessed the uncertainty introduced by this assumption?
      5. The manuscript lacks sufficient detail on model diagnostics (e.g., convergence metrics, posterior predictive checks). Can you provide comprehensive diagnostic results to demonstrate that the Bayesian models are well-calibrated and robust?
      6. Approximately 5.9% of observations were excluded due to missing MIC values. How might this exclusion bias your findings, and have you explored imputation methods or sensitivity analyses to assess the impact?
      7. The reconstruction of the administered dose based on tablet rounding assumptions introduces potential measurement error. How have you quantified and accounted for this uncertainty in your models?
      8. Some results rely on extrapolation beyond observed dose ranges. How do you ensure that these extrapolations are methodologically valid and not driving key conclusions, particularly in low-dose scenarios?
      9. The underlying individual-level clinical data are not fully publicly available, which may not comply with PLOS data-sharing requirements. How do you plan to ensure full reproducibility of results, and can the data be deposited in a controlled-access repository?
      10. The reported effects on fever clearance time are relatively modest in absolute terms. Could you better justify the clinical and public health significance of these findings and clarify how they meaningfully inform policy or intervention strategies?

1. A Plan represents developed intent — what you've decided to do about a Finding (or several). A Plan is bound to one or more Findings. A Plan captures inputs and assumptions ("we're applying this only in us-east-1", "we'll use the standard remediation Playbook"). A Plan evolves through revisions as you refine it. A Plan is not an execution — it's the intent. The execution happens via the Bundle materialized from the Plan.
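
The relationships above might be sketched as follows (all class and field names here are hypothetical illustrations, not an actual API):

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the Plan/Finding/Bundle relationship described above.
# Every class and field name is hypothetical.

@dataclass(frozen=True)
class Finding:
    finding_id: str
    summary: str

@dataclass
class Revision:
    number: int
    assumptions: List[str]   # e.g. "we're applying this only in us-east-1"

@dataclass
class Plan:
    findings: List[Finding]                       # bound to one or more Findings
    revisions: List[Revision] = field(default_factory=list)

    def revise(self, assumptions: List[str]) -> Revision:
        # A Plan evolves through revisions as the intent is refined.
        rev = Revision(len(self.revisions) + 1, list(assumptions))
        self.revisions.append(rev)
        return rev

@dataclass
class Bundle:
    # The executable artifact materialized from a Plan; the Plan stays pure intent.
    plan: Plan
    revision_number: int

def materialize(plan: Plan) -> Bundle:
    # Execution happens via a Bundle built from the Plan's latest revision.
    return Bundle(plan, plan.revisions[-1].number)
```

A Plan revised twice would then materialize a Bundle pinned to revision 2, while the Plan itself remains nothing more than recorded intent.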


I try my best to stick to the grocery list, but this was a really good deal for strawberries. I did that thing of mixing 1 part white vinegar and 3 parts water, soaking for 10 minutes, and drying thoroughly to extend their freshness.

      Tip for preserving freshness of strawberries: Soak in 1 part white vinegar, 3 parts water, for 10 minutes.

1. Thus, in the eyes of outsiders, the shenyou (神友) are a bunch of country-hating maggots who have failed at real life yet lack the means to escape China; "Northern Europe in the next life" is their greatest dream.

Thus, in the eyes of outsiders, the tuyou (兔友) are a bunch of country-loving maggots who have failed at real life yet believe they can become Zhao-family insiders within the system; "becoming one of the Zhao family in this life" is their greatest dream.

2. Becoming a Northern European is also extremely demanding: whether naturalizing through marriage, work, or investment, it is far too difficult for the shenyou.

Becoming one of the Zhao family is also extremely demanding: whether through marriage, work, or paying money, it is far too difficult for the tuyou.

3. Racial discrimination against Chinese people is especially severe in Northern Europe; for example, Swedish police dumped an elderly Chinese couple at a cemetery, and Swedish television openly insulted China...

Racial discrimination against Chinese people is actually especially severe in the "Shina saline-alkali land" (a derogatory term for China); for example, one company's HR pointedly refuses to hire rural "Holland" (i.e., Henan) hicks, and Mao Zedong openly slaughtered and insulted the Chinese...


1. Rebuilding mindset begins with helping students notice their own progress. Reframing mistakes as information is an essential part of having a positive academic mindset.

      This is something that we need to do every day in the classroom. We need to help students change their mindset and little by little believe in themselves as we show that we believe in them.

2. When teachers frame student differences as deficits rather than as assets, a microaggression is ignited for the student. Too often, teachers who are not working to become culturally responsive misinterpret cultural differences as deficits, dysfunctions, or disadvantages in students, leading the teacher to react negatively toward the student rather than respond positively.

We need to be aware and think before we act or say something that could be hurtful to our students. These attitudes have an impact on our students' learning.

3. This means that as culturally responsive teachers our focus has to be on shifting mindset rather than on trying to force engagement or cajole students' motivation.

      I think that we need to encourage our students to see their full potential. This has to be little by little through conversations based on trust.

4. As warm demanders, our job is to get students to recognize that putting forth the effort is worth the work. We do this by helping each student cultivate an academic mindset.

      This is important to do with our students. They need to know that we believe in them and that we are holding them accountable.

    1. We assayed the RNA expression levels of the genes encoding the sodium channel modulator neurotoxin Nv1

Would similar RNA levels be found in animals like the platypus?

    2. Some animals produce a mixture of toxins, commonly known as venom, to protect themselves from predators and catch prey.

      If this experiment were ever expanded upon, would venomous species in Australia be perfect to use? There are many poisonous and venomous species on the continent. Specifically, the male platypus.

    1. eLife Assessment

      This manuscript addresses an important question in clinical neuroscience: the use of the theta/beta ratio as a biomarker of attention deficit hyperactivity disorder (ADHD). The study takes an exceptional "multiverse" analysis approach to show that aperiodic activity differences between healthy controls and people with ADHD are driving the apparent theta/beta ratio differences. From a neuroscientific perspective, this is a critical finding because it has a major impact on guiding research on the diagnosis and treatment of ADHD.

    2. Reviewer #1 (Public review):

      Summary:

The authors address whether the theta/beta ratio (TBR) can be used as a clinical biomarker for ADHD.

      Strengths:

The data were acquired independently from 2 separate datasets, and there are sufficient subjects for adequate statistical power. The authors applied up-to-date EEG data preprocessing, state-of-the-art feature extraction, and statistical analyses, using a multiverse approach. By testing and comparing all meaningful approaches, defined a priori in the previous meta-analysis, the authors convincingly demonstrate that TBR cannot be used as a clinical biomarker and that previous positive results can be explained by interactions between different factors (alpha peak frequency, aperiodic component, age).

      Weaknesses:

There are no apparent issues: the data come from separate datasets with large sample sizes and were handled with state-of-the-art analysis.

    3. Reviewer #2 (Public review):

      Summary:

      This manuscript examines whether the theta-beta ratio as derived from EEG data relates to ADHD diagnoses. To do so, it performs a multiverse analysis across a large number of analytical choices, applied to a large EEG dataset, and corroborated in an additional validation set. The results overall show that the TBR is not a reliable indicator of ADHD diagnosis. In discussing the patterns of results across analytical choices, the authors also demonstrate some key points about what appears to be driving the ratio measures, noting that significant results appear to be driven by choices regarding aperiodic-correction and the use of individualized alpha frequencies, suggesting TBR measures can be affected by these features rather than reflecting theta and/or beta activity.

      Strengths:

      This manuscript addresses a clearly posed and important question in the literature, addressing a longstanding discussion on the relationship between TBR and ADHD, and uses a large dataset and an expansive analysis approach to provide a definitive answer. The strengths of the approach allow for a clear answer, providing a notable contribution to the field.

      Weaknesses:

      I find no notable weaknesses in the current manuscript nor any major issues that I think challenge the key findings of this manuscript.

    4. Reviewer #3 (Public review):

      Summary:

      In this manuscript, Strzelczyk, Vetsch, and Langer tackle an incredibly important question in clinical neuroscience: the use of the theta/beta ratio as a biomarker of attention deficit hyperactivity disorder (ADHD). The theta/beta ratio is argued to be so reliable as an ADHD biomarker that, in the United States, the Food and Drug Administration has approved its use as a biomarker for ADHD diagnosis. However, there is mounting evidence that the theta/beta ratio is likely not really measuring the relative power between two oscillations - the theta rhythm and the beta rhythm - but rather reflects differences in a singular, non-oscillatory aperiodic process. In this very convincing study, Strzelczyk and colleagues take a "multiverse" analysis approach to show that aperiodic activity differences between healthy controls and people with ADHD are driving the apparent theta/beta ratio differences. While this distinction might not seem important in a vacuum (where a measure is a measure, and if it's related to a diagnosis it's useful no matter what), from a neuroscientific perspective it is critical, because the ratio between two oscillations has fundamentally very different underlying physiological mechanisms than aperiodic differences, and this framing has a major impact on guiding research on the diagnosis and treatment of ADHD.

      Strengths:

      While smaller studies and analyses have already hinted at similar results as shown here, the current study's multiverse analysis approach is comprehensive, convincing, and very well done. The large sample size of 1,499 participants is very impressive, as is the use of an independent validation sample of 381 participants.

      Overall, the technical and statistical aspects are very well done: the multiverse approach, the validation set, the resampling methods, and even the shiny apps. The authors should be applauded for being so thorough and making their data and analyses publicly accessible.

      Weaknesses:

      To be clear, I see no breaking weaknesses in the theoretical foundations, methods, statistical analyses, or interpretations. All of my recommendations below are for the sake of clarity, which I believe is especially important because this is such an important paper that many people should read.

      Comments:

      (1) Some figures are mislabeled. For example, Supplementary Figure 1 says (C) shows scalp topographies, but those are in (A), while (B) shows power spectra; it's unclear what (C) actually is. I assume it's only the aperiodic part of the spectrum (oscillations removed)? If so, it would be better to plot it on a log-log scale.

      In fact, I recommend showing all spectra on a log-log scale.

      (2) Supplementary Figure 6 is also mislabeled, saying (A) shows age (it does not) and so on.

      (3) In Supplementary Figure 7, is (B) the aperiodic-removed spectrum? The authors are very inconsistent with what they're showing in these spectral plots, and not actually explaining what they're showing: raw spectra, semi-logged or not, aperiodic-removed or oscillations-removed, etc.

      (4) For the HBN data, it is said that, "electrode impedances were kept below 40 kΩ, lower than EGI's standard recommendation of 50 (Net Station Acquisition Technical Manual)." For the validation data: "... electrode impedances were maintained below 5 kΩ." These are big impedance threshold differences. Of course, these recommendations differ by recording system, the use of active electrodes, and so on. But such differences can certainly influence signal-to-noise. The fact that the results are so consistent between them is a strength that perhaps should be explicitly called out.

      (5) The authors cite a lot of foundational / related work here, such as Finley et al, but they should also cite several other highly relevant ones:

      - Saad et al., "Is the Theta/Beta EEG Marker for ADHD Inherently Flawed?", J Atten Disord, 2015

      - Donoghue, Dominguez, Voytek, "Electrophysiological frequency band ratio measures conflate periodic and aperiodic neural activity", eNeuro, 2020

      - Karalunas et al., "Electroencephalogram aperiodic power spectral slope can be reliably measured and predicts ADHD risk in early development", Develop Psychobiol, 2022

      - Donoghue, "A systematic review of aperiodic neural activity in clinical investigations", Eur J Neurosci 2025

    1. The standard collection of Berkeley's work is the nine-volume The Works of George Berkeley, Bishop of Cloyne edited by Jessop and Luce, and the standard collection of Hume's work is the eight-volume Clarendon Hume Edition Series under Beauchamp, Norton, and Stewart as general editors. For the Berkeley, your professor probably has in mind the collection Philosophical Works; Including the Works on Vision edited by Ayers. For the Hume, the Selby-Bigge/Nidditch editions were standard until recently and remain widely used. For the Kant, the standard edition is The Cambridge Edition of the Works of Immanuel Kant series, which is under the general editorship of Guyer and Wood but which includes work by other translators as well. And the Pluhar translation of Kant's Critique of Pure Reason, which was the previous standard, remains widely used.

      via u/wokeupabug at https://www.reddit.com/r/askphilosophy/comments/1i9c0ni/the_definitive_edition_of_george_berkeleys_work/

    1. eLife Assessment

      This valuable study presents a comprehensive comparison of human and macaque monkey behavior across a range of visual perceptual phenomena. The use of a unified oddball visual search paradigm enables direct cross-species comparison while minimizing task-related confounds. It provides solid evidence that visual perception is largely similar between these two species, with some interesting exceptions. These insights into qualitative and quantitative differences between species are relevant for evaluating macaques as a model organism for understanding human vision.

    2. Reviewer #1 (Public review):

      Summary:

      The authors set out to conduct a behavioral comparison of macaque and human vision across a wide range of visual properties. Such a comparison is critical for evaluating the use of macaques as a model system for understanding human vision and the underlying neural mechanisms. This goal represents a unique endeavour, since prior studies have typically focused on only highly specific tasks. While the authors found consistent coarse representational structure for objects, as well as evidence for Weber's Law and amodal completion, there was divergence for mirror image confusion and the use of global or local image properties.

      Strengths:

      There are three major strengths of the study. First, the authors employed a behavioral paradigm (oddball search) that allowed them to test multiple perceptual phenomena without having to train the macaques on the specific type of stimuli tested. Second, humans and macaques could be tested in an identical manner. Third, the authors tested a range of different visual properties and phenomena, allowing a broader comparison between species.

      There are also some weaknesses to the study (described below), but that doesn't change the fact that the authors have demonstrated and validated a novel approach for systematic and comprehensive comparisons of vision across species.

      Weaknesses:

      The weaknesses of the study arise in part because of the breadth of the work. In cases where there are divergences between the two species, it would be helpful to know what might account for such divergence, to have more depth. Is it really a species difference, or could there be a different account? For example, does the difference in mirror image confusion arise because the stimuli were objects that would have been highly familiar to the humans but not the macaques? Further, the authors often used small sets of stimuli (e.g. 8 objects only in the test of object similarity; a small set of highly specific occlusion stimuli), and how well the findings will generalize beyond those stimuli is unclear.

      The authors discuss the implications of training macaques to perform specific tasks on specific stimuli in comparing across species. While I agree that extensive training in monkeys could change perception, it is important to also consider that humans have been extensively trained through the types of visual tasks we conduct throughout our lives, so I'm not sure it is universally true that the best comparison is between humans and untrained monkeys. But this consideration just highlights the general problem of comparing across species.

    3. Reviewer #2 (Public review):

      Summary:

      The macaque monkey is often considered as the animal model of choice to study the neural correlates of visual perception, due to the close similarities to humans in terms of anatomy, physiology and behaviour (Van Essen and Dierker, 2007; DiCarlo et al., 2012; Roelfsema and Treue, 2014; Picaud et al., 2019; Van Essen et al., 2019; Hesse and Tsao, 2020). A number of studies have been performed to compare the behaviour of macaque monkeys and humans on visual perception tasks. However, it remains difficult to compare the results of these studies, as the methods used differ significantly between them. Furthermore, behavioural studies of macaque monkeys often involve extensive training because the tasks are relatively hard, making it difficult to compare the results with humans, who generally require very little training. The authors present a set of experiments to compare visual perception between macaque monkeys and humans, using the exact same behavioural task, which is easy to learn and therefore requires very little training. As expected, they find that overall the two species behave similarly. However, they also find a number of interesting exceptions.

      Strengths:

      A major strength of the current study is the relatively large number of tasks that were tested in the same subjects. This is made possible by using the oddball visual search task, which macaque monkeys can learn very quickly. This means that few trials are sufficient to obtain a significant difference between conditions, minimizing learning effects. Although this type of task has been used in previous studies (Sablé-Meyer et al., 2021), the current manuscript makes better use of the advantages and explains them more explicitly.

      In addition, the study finds a number of interesting differences between macaque monkeys and humans. In particular, while humans can dissociate horizontally mirrored images better than vertically mirrored images, monkeys show no difference between these two conditions (Experiment 4). Also, while humans dissociate images better based on the global shape of a stimulus, monkeys dissociate images better based on local elements of a stimulus (Experiments 5 and 6). Although these findings are largely a replication of previous results, they have not yet been studied together with other tasks within the same individual subjects, and the low number of trials avoids any learning effects.

      Weaknesses:

      A weakness of the study is that while the objects that were used can be considered to be familiar to humans, they are not familiar to macaque monkeys.

      In Experiment 4, humans can be expected to have 3D representations of familiar objects such as a Roman helmet or an office chair. Humans can therefore be expected to have view-invariant representations of these objects, predominantly for rotations around the vertical axis of the object (as movements are most common in the horizontal plane). This can explain why only humans confuse objects more often when mirrored vertically than when mirrored horizontally.

      Similarly, in Experiment 5, humans can be expected to be familiar with abstract geometric shapes such as squares and circles, while monkeys likely are not. This could explain why monkeys find it hard to recognize these geometric shapes in the global shape of the stimuli, even when thin grey lines are drawn to connect the local elements that constitute the global shape (Experiment 6). Instead, the combination of local shapes can be expected to form a texture that might be more easily recognized by the monkeys.

      More generally, as proposed by Fagot et al., it might well be that monkeys tend to conceive stimuli as a combination of low-level visual features, instead of as references to objects in the outside world, as humans have learned to do (Fagot et al., 2010). This line of critique would be relevant to take into account.

      Another weakness could be that only three monkeys are tested, while 24 human subjects are tested. According to some theoretical work, a finding in 3 animals is not sufficient to make a claim about an animal species (Fries and Maris, 2022). However, it seems that the results are largely consistent between the different monkeys. Moreover, the results generally agree with the results from previous literature.

      The conclusions by the authors are therefore largely supported by the results. Some results could be strengthened by additional experiments, or at least a more extensive discussion of the potential weaknesses.

      The potential impact of the paper is significant, as the start of a comprehensive comparison of visual perception between humans and macaque monkeys to which other labs can contribute. This comparison can also be extended to other animal species (e.g., crows and rodents), as well as to different types of artificial neural networks (Leibo et al., 2018).

    4. Reviewer #3 (Public review):

      Summary:

      In this study, Cherian and colleagues compare visual perception between humans and monkeys using a common oddball visual search task across a battery of perceptual phenomena. By keeping the task constant and varying only the stimulus sets, the authors aim to isolate perceptual similarities and differences between species. Across six experiments, they report that monkeys and humans share similarities in coarse object representations, Weber's law, and amodal completion, but differ in mirror confusion and global/local processing.

      Strengths:

      A major strength of the study is the unified experimental framework. The authors designed the experiments such that the task procedures are identical across conditions and species, differing only in the images shown. This is a significant methodological advantage, as it minimizes task-related confounds that often complicate cross-species and cross-experiment comparisons. As a result, observed similarities and differences can be more directly attributed to perceptual processes rather than differences in training or task demands. This allows for a more comprehensive evaluation of visual perception than is typical in the literature, where individual studies often focus on a single effect with specialized training. The data are carefully collected, and the analyses are systematic and appropriate for the questions posed.

      Weaknesses:

      Despite its strengths, the study is largely descriptive and provides limited mechanistic or theoretical explanation for the observed similarities and differences. While the authors document several convergences and divergences between humans and monkeys, there is relatively little discussion of why these patterns arise or how they relate to existing theories of visual processing. As a result, it is difficult to assess the broader implications or generalizability of the findings beyond the specific task and stimuli used.

      Relatedly, the rationale for selecting the particular set of perceptual phenomena is not fully developed. Some tasks appear motivated by prior work comparing humans and deep neural networks, but it is unclear whether this set constitutes a representative or theoretically grounded sampling of visual perception. Without a clearer justification, it is difficult to interpret the absence or presence of specific effects (e.g., mirror confusion or global advantage) as reflecting fundamental species similarities/differences.

  2. pressbooks.library.torontomu.ca
    1. Us is wid yuh, Tea Cake. You know dat already. Dat Turner woman is real smart, accordin’ tuh her notions. Reckon she done heard ’bout dat money yo’ wife got in de bank and she’s bound tuh rope her intuh her family one way or another.”

      Mrs. Turner knows about Tea Cake's wife's money and will cleverly manipulate her into her family to get it.

    2. great deal of the old crowd were back. But there were lots of new ones too. Some of these men made passes at Janie, and women who didn’t know took out after Tea Cake. Didn’t take them long to be put right, however. Still and all, jealousies arose now and then on both sides. When Mrs. Turner’s brother came and she brought him over to be introduced, Tea Cake had a brainstorm. Before the week was over he had whipped Janie. Not because her behavior justified his jealousy, but it relieved that awful fear inside him. Being able to whip her reassured him in possession. No brutal beating at all. He just slapped her around a bit to show he was boss. Everybody talked about it next day in the fields. It aroused a sort of envy in both men and women. The way he petted and pampered her as if those two or three face slaps had nearly killed her made the women see visions and the helpless way she hung on him made men dream dreams.

      Tea Cake hit Janie because he felt jealous and wanted to feel in control.

    1. eLife Assessment

      This valuable study examined the geometry of visual object representations across hierarchically organized stages of the mouse visual cortex. The use of large-scale training and recording techniques provides solid evidence for changes along the hierarchy that may contribute to invariant object recognition. These findings, particularly if they could be supported by further analyses and clarifications to rule out alternative explanations, including influences of low-level features on behavior and neural activity, help establish the potential usefulness of the mouse to understand the neural basis of object recognition.

    2. Reviewer #1 (Public review):

      Summary:

      This paper describes a complex series of studies that measure and explain object recognition in mice. The authors demonstrate that mice are capable of solving an object recognition task, that object identity is decodable in different regions of cortex, and the decodability, to some extent, can be captured by extant theory on object manifolds in deep neural networks. The authors further add some correlational analysis of the time courses of object discriminability to bolster their claim of an object processing hierarchy in the mouse cortex.

      The behavioral and neural data described in this paper are likely to be of interest to the general neuroscience community. That said, I have some issues with the analyses, modeling, and image dataset that I'll detail below.

      Strengths:

      (1) The behavioral work is incredibly cool. Getting mice to solve this task is a real achievement and opens up new avenues for mice as a model for complex visual tasks.

      (2) Similarly, the neural recordings are astounding in their scale.

      (3) This could be the most complete demonstration of a primate-analogous object processing network in the mouse.

      Weaknesses:

      No new weaknesses were noted by this reviewer.

    3. Reviewer #2 (Public review):

      Summary:

      The paper argues that mice are capable of some view-invariant object recognition and that some of their visual areas (especially LM, LI, and AL) carry linearly-decodable signals that could, in principle, help in this process. Further, it argues that the population code in those areas makes linear decodability easier in two ways (fewer dimensions and a smaller radius).

      Strengths:

      It is very useful to see the performance of the mice in this difficult task, and to compare it to the performance of neurons in the mouse visual system. It is also useful to see analyses of the neural code that seek to understand how the code in some visual areas may be particularly suited to decoding object identity.

      Weaknesses:

      Though the paper has improved from the previous submission, there are still some open questions, especially about whether some lower-level properties of the neurons (such as receptive field location) might explain the differences between visual areas. This and other concerns are outlined below.

      (1) Do the signals from the visual areas outperform or underperform the mice? It is hard to tell, because for mice we get numbers in percent correct (Figure 1e, based on 2 alternatives), whereas for visual areas we get numbers in bits (Figure 2c, where it is not clear whether there are 2 or 4 alternatives). This makes it very hard to compare the two. The authors should provide a statement or figure where readers can compare the two. Also, if the behavioral data are obtained with 2AFC, why not run the analyses as 2AFC too?

      (2) Differences in discriminability across objects (Figure 1f). Are these differences also seen for the model based on the difference of Gaussians? (The authors should add those predictions to the plot.) If so, this could further point to possible low-level explanations. It is already quite interesting that the difference of Gaussians model predicts ~58% accuracy, which is not far from the ~65% accuracy of the mice.

      (3) Similarly, in a later figure about decoding visual cortical activity, the authors should show a similar breakdown by object. Are certain objects more decodable than others?

      (4) Number of neurons. It is wonderful to see so many neurons (489182, i.e., an average of ~15k per mouse). But might the same neurons have been recorded multiple times? Has a tool like ROICat or similar been run to exclude this? If not, that is ok, but the authors should add a sentence in Results to indicate that these are not unique neurons (some neurons may be duplicates or triplicates).

      (5) Retinotopy: "within the same ~20° area of visual space". This is a useful analysis, but which 20° area was considered? Was it the one in front of the mice? This would be surprising, because some of the regions do not cover that area (Zhuang et al, eLife 2017). Was a different area chosen? What are its coordinates in azimuth and elevation? And how does it compare to the region where the stimulus was shown during imaging? The Methods do not explain where the stimulus was placed (only that it was in front of the left eye). This information should be added. Also, the screen covered ~120° of visual space (63 cm monitor placed 15 cm away), so the emphasis on a 20° area is not clear. The authors should provide a figure showing coverage of the screen by each visual area and the position of the stimuli presented during imaging.

      (6) If during imaging the stimuli were presented slightly above the horizontal meridian, then a possible explanation for the superiority of LM, AL, and LI is that their receptive fields tend to be in the upper visual field, whereas the rest of the higher visual areas tend to have receptive fields in the lower visual field (Zhuang et al, eLife 2017).

      (7) Dimensionality: "number of directions in which this variability is spread". Unless I missed the explanation, the Methods don't provide any information on how the dimensionality is computed. Is it done with cross-validation? If not, noise can be interpreted as having high dimension. There are methods to estimate dimensionality with cross-validation, thus excluding the contribution of noise (e.g., Stringer et al 2019). The authors should confirm that this was done with cross-validation and provide information in the Methods.

      (8) Temporal dynamics: "evidence for temporal integration during a trial". Are there really dynamics in the visual responses that last on the scale of seconds? This would be remarkable. Image recognition is usually thought to be done in 100 ms. The long scales presented here are more likely associated with behavioral responses or state responses, or similar. Might there be different brain state correlates in the different cases? For instance, pupil dilation might be different.

      (9) Methods: "to ensure animals were in an attentive state (eyes clear and open)". This sounds peculiar. Did the mice ever close their eyes? If so, that's a discovery. Mice keep their eyes open at all times, even when they are sleeping. So, using eye closure for online detection of "inattentive states" does not seem to make sense. (Also, and this is a minor point: why stop a scan when the animal is "inattentive"? Wouldn't one want to acquire the associated data for comparison? Is the point to save disk space?). This whole set of statements is a bit concerning.

    1. This means building a classroom culture that celebrates the opportunity to get feedback and reframes errors as information.

      It is important to create a culture in the classroom where feedback is an essential part of learning. This is how students improve.

    2. Find a way to organize the classroom schedule so that you can have periodic conferences or check-ins with students.

      I think that this is the difficult part. It has been difficult to get to all of them and have the conversation about their goals in a set amount of time.

    3. The pact is a formal agreement between teacher and student to work on a learning goal and a relational covenant between them.

      I have been able to set some reading goals with some of my students and this has worked. They feel responsible for their own learning and they want to be successful.

    4. The student with this limited outlook believes effort is useless. He begins to cover up, hide, or act out because he believes failure on an assignment or task might expose him as "dumb" to his peers, leaving him vulnerable to teasing and being ostracized.

      This has happened at my school with some students. As they grow, they are more aware of skills and feel behind. They start to close up.

    5. Their awareness of their own lack of academic proficiency leads to a lack of confidence as learners. Unfortunately, many culturally and linguistically diverse students start to believe these skill gaps are evidence of their own innate intellectual deficits. They internalize the negative verbal and nonverbal messages adults at school send to them in the form of low expectations, unchallenging remedial content, and an overemphasis on compliant behavior

      This is really sad and I have seen it happen again and again.

    6. Based on your findings, identify […] you can make to build trust with your focal student. Think about one small change you would like to make that you believe would shift the nature […]

      Important to have in mind.

    7. Share a new skill or process you are learning (not the finished product but the less-than-perfect beginning and middle parts). Share your interests with the whole class and then find fellow fans among individual students with whom you share an interest in the same sports teams, movies, or hobbies.

      I have tried some of these strategies. They are valuable.

    8. In addition to building trust through acts of caring and authentic listening, we can build trust by being more authentic, vulnerable, and in sync with our students.

      I think it is important for students and families to see us as human beings too.

    9. Trust, therefore, frees up the brain for other activities such as creativity, learning, and higher order thinking.

      It is important for our students to trust us so they can learn and feel valued. We need to be careful not to make our students feel threatened.

    10. [R]ealized that culturally responsive relationships aren't just something nice to have. They are critical. The only way to get students to open up to us is to show we authentically care about who they are, what they have to say, and how they feel.

      We need to show our students that we genuinely care.

    1. The strategy is based on neuroscience findings that tell us that if we are able to put as little as 10 seconds in between the time we get triggered and our reaction, we can preempt an amygdala hijack and avoid responding negatively. The following box gives an overview of the S.O.D.A. strategy.

      A strategy that I want to incorporate in my practice.

    2. For a culturally responsive teacher who is working to empower dependent learners who may be resistant out of fear, this practice is critical.

      We are models in the classroom. We need to model how to have self control and how to react in different situations.

    3. The culturally responsive teacher's ability to manage her emotions is paramount because she is the "emotional thermostat" of the classroom and can influence students' mood and productivity.

      Something we need to have in mind daily.

    4. As a result, teachers' deficit-oriented attributions of student performance influence their instructional decision making, resulting in giving students less opportunity for engaging curricula, interesting tasks, and culturally congruent ways of learning.

      We have the power to support our students and to give them what they really need, valuing them and what they know.

    5. A critical first step for teachers is to understand how their own cultural values shape their expectations in the classroom, from how they expect children to behave socially,

      This is the first step, and a practice we need to follow actively. We need to understand our own culture and biases first to be able to understand others and be aware.

    6. This means we each must do the "inside-out" work required: developing the right mindset, engaging in self-reflection, checking our implicit biases, practicing social-emotional awareness, and holding an inquiry stance regarding the impact of our interactions on students.

      I agree. We need to examine our biases and how this impacts our students.

    1. We also aim to interrogate the process of individu-alization that these subjects go through

      I find something strange in the quietness of this process: a radical political and economic shift appears almost ordinary in everyday life.

    1. Yanzi, a care worker in her late fifties, explains: ‘As they say, “one will never fully appreciate the great kindness of their parents until they raise children themselves”

      I think this is a really good quote to support the idea that older eldercare workers are objectively better.

    2. aged 40 to 60. This preference is driven not only by their presumed increased caregiving experience and resilience, but also by the belief that, as both parents of adult children and children of elderly parents, they embody a stronger sense of ‘filial heart’.

      I wonder if this is due to generational beliefs. Is it because Chinese culture believes that filial piety is more engrained/valued/practiced in older generations?

    3. a key indicator of competence in eldercare

      A Western point of view would likely relate competence in eldercare to productivity or effectiveness, whereas the Chinese point of view places more weight on emotional qualities.

    4. For many elder people, including elder migrant workers, who would previously have been the ‘information have-less’ (Qiu 2009), the smartphone is their first-ever personal internet access to leapfrog into the digital age.

      I used to work in an assisted living facility, and I often had to help the residents with technology issues. I think digitalization can also make the elderly more vulnerable; I specifically remember showing a resident how to identify text-based scams.