10,000 Matching Annotations
    1. On 2020-04-16 23:50:15, user Brian T wrote:

      MISLEADING UV DOSE! DO NOT FOLLOW THIS 5 uW/cm2 dosing!

      The UVA/B meter used in this study, “General UVAB 137 digital light meter (General Tools and Instruments, New York, NY)” (look up the General Tools UV513AB Digital UVA/UVB Meter on Amazon), advertises measurement from 280–400 nm, barely overlapping the wavelengths used in this study, 260–285 nm. Furthermore, this meter does not report the energy at a given wavelength. It's possible that this study is grossly underreporting the dose of UV needed because the meter doesn't read many of the wavelengths used.

      Previous research on SARS and MERS used 254 nm and noted much, much higher energy needed to kill these coronaviruses vs. this study's reported 5 uW/cm2:

      • MERS is inactivated at 90 uW/cm2 × 60 min [1], a dose of 0.324 J/cm2
      • SARS is inactivated at 4016 uW/cm2 × 6 min [2], a dose of 1.446 J/cm2

      [1] http://www.diniesturkiye.co...

      [2] https://www.sciencedirect.c...
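
      The dose arithmetic cited above can be sanity-checked with a short sketch (Python; the helper function is illustrative, not from any of the studies):

      ```python
      # Convert a UV irradiance sustained for some minutes into a dose.
      # Illustrative helper, not from the study: dose [J/cm^2] = W/cm^2 x seconds.
      def uv_dose_j_per_cm2(irradiance_uw_per_cm2, minutes):
          watts_per_cm2 = irradiance_uw_per_cm2 * 1e-6   # uW -> W
          return watts_per_cm2 * minutes * 60.0          # J = W x s

      mers_dose = uv_dose_j_per_cm2(90, 60)    # 0.324 J/cm^2, matching [1]
      sars_dose = uv_dose_j_per_cm2(4016, 6)   # ~1.446 J/cm^2, matching [2]
      ```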

    1. On 2020-04-17 20:25:02, user Mortal Wombat wrote:

      Hold on, they made no adjustment for self-selection of symptomatic people in their study?

      Researchers, at least please tell us the number of people shown the Facebook ad so we can have some sense of the potential for self-selection -- i.e. how many saw it but chose not to participate.

      This seems problematic given the population demographic adjustments that were necessary. The researchers say that white women were heavily oversampled while Hispanics and Asians were heavily undersampled, and that population adjustments led them to adjust the observed prevalence of 1.5% up to a population-weighted prevalence of 2.81%.

      It would appear highly likely that the relatively affluent population (white women) would have the interest and capacity to participate as a roughly random sample -- people with no prior symptoms, just interested in knowing. Whereas for lower-income populations, there may be less ability to participate simply out of interest, and there may have been a stronger self-selection drive among the previously symptomatic to get themselves tested.

      Thus I find the upward adjustment in the numbers quite suspect. To come up with an overall result that's _higher_ than the raw outcome of the study when you know that people who've been sick will be the most motivated to get themselves tested just seems perverse.

      Did the study not even ask people whether they had been sick over the past couple months? Why not? That at least could've given some sense of whether self-selection was biasing results in the samples.

    2. On 2020-04-18 02:01:37, user mendel wrote:

      First, he picked the county that had the earliest cases in California and had the outbreak first, ensuring that the population would be undertested. This means that it's likely that every other county in California has fewer unregistered infections than Santa Clara.

      Second, study participants were people who responded to a facebook ad. This is a self-selected sample, and this property completely kills the usefulness of the study all by itself. This is a beginner's error! People who think they had Covid-19 and didn't get tested or know someone who did are much more likely to respond to such an ad than people who did not. (By comparison, the Gangelt study contacted 600 carefully chosen households per mail, and 400 responded. Still somewhat self-selected, but not as badly.)

      Third, age is the single most important predictor of mortality. He did not weight the results by age, and old people are underrepresented in the study. Anything he says about mortality is completely useless if we don't know how prevalent the infection was in the older population. (In Germany, cases show that the prevalence among tested older people was low initially and took a few weeks to rise.)

      Fourth, instead he weights prevalence by zip code -- why? This exacerbates statistical variations, since there were only 50 positive results and Santa Clara has ~60 zip codes. If a positive result falls by chance in a populous zip code from which only a few people participated, the numbers are skewed up. They must have seen this happen, because their estimated prevalence is almost twice as high as the raw prevalence.

      Fifth, the specificity of the test is "99.5% (95% CI 98.3-99.9%)". This means that, theoretically, if the specificity were 98.5%, all 50 positive results could be false positives, and nobody in the sample would have had Covid-19 at all. This means the result is not statistically significant even if the sample had been well chosen (which it wasn't). (It's not even significant at the 90% level.)
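
      To make the arithmetic concrete (a sketch using the figures quoted above; 3,330 is the number of tested participants):

      ```python
      n_samples = 3330   # participants tested
      n_positive = 50    # raw positive results

      # Expected false positives for each candidate specificity
      for specificity in (0.995, 0.985):
          expected_fp = n_samples * (1 - specificity)
          print(f"specificity {specificity:.1%}: ~{expected_fp:.0f} expected "
                f"false positives out of {n_positive} observed positives")
      ```

      At 99.5% specificity roughly a third of the 50 positives would be expected to be false; at 98.5%, essentially all of them.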

      Sixth, they used a notoriously inaccurate "lateral flow assay" instead of an ELISA test and did not validate their positive samples (only 50) with a more sensitive test -- why not?

      Seventh, the Covid-19 antibody test can create false positives if it cross-reacts with other human coronavirus antibodies, i.e. if you test the samples of people who had a cold, your specificity will suffer. Therefore, a manufacturer could a) test blood donor samples, since people are not allowed to give blood if they have been sick shortly before; b) test samples taken in the summer, when people are less likely to have colds than in March.

      To state the previous three points another way: a large number of the positive results (a third if the specificity is actually 99.5%, but probably more than that) are false, and depending on which zip codes they randomly fall in, they could considerably skew the results.

    3. On 2020-04-18 06:35:20, user DomesticEnemy wrote:

      About 5 weeks ago based on a study of the Diamond Princess cruise ship I estimated the death rate to be around 0.23% or so. Welcome.

    4. On 2020-04-18 18:39:18, user jj wrote:

      Where is the discussion of selection bias? You invite folks to get tested by advertising on Facebook... I think there will be an over-representation of folks who fear they have COVID-19 based on their recent interactions in places with or around COVID-19 cases.

      Without randomization to eliminate self-selection bias, the authors should not be making any far-reaching conclusions that are now being picked up and reported by the media without providing proper interpretation.

      I think this publication should be rejected for not doing this study properly and then seeking publicity!

    5. On 2020-04-23 05:57:15, user David Feist wrote:

      It is always good to compare data within nations. But in fact preliminary, linear regression analysis, from a fellow maths major, now seems to indicate that the lockdowns had no statistically significant effect within the USA: https://www.spiked-online.c....

      This Santa Clara study indicates why Sweden, Japan, South Korea and Australia have not had public health apocalypses, with no lockdowns; the mortality rate was miscalculated.

    6. On 2020-04-17 21:38:11, user Michael Stein wrote:

      There could be a very large upward bias due to the participants in the study being people who responded to the Facebook ad. It stands to reason that people who suspected they might have been exposed to the virus would be more likely to respond to such an ad. The fact that randomization was used to select who got the ads and that corrections for demographics were made does not address this potentially serious source of bias. There is little doubt that many more people have been infected than the official numbers, but I find the factor of 50-85 rather hard to believe in a place like Santa Clara County that has not been overrun by cases.

    7. On 2020-04-18 04:34:06, user Zev Waldman MD wrote:

      I agree with prior commenters that people who suspected they had, or were exposed to, Covid would be more likely to seek antibody testing. I see that participants were asked about prior symptoms, but it would also have been nice to ask about prior possible exposure concerns. If both numbers are very low, that would provide some reassurance about this possible bias.

      I really wanted to address another issue: the calculation of the infection fatality rate, i.e., estimated deaths/cases. It seems that a lot more thought went into trying to get an accurate case count than an accurate death count. They seem to take it as a given that 50 people died of Covid in the county as of April 10; however, like case counts, there are multiple reasons to suspect the true number of deaths might be higher:

      1. Reporting of deaths is well known to be delayed -- i.e., date of reporting does not equal date of death.
      2. People who actually died of Covid may never have been tested, and thus may not be included as cases or deaths.
      3. The doubling time of deaths that was used to project to April 22 is also based on reported deaths; if reporting of deaths is delayed, the doubling time may appear slower.

      I did appreciate the authors' efforts to validate the antibody testing. That's useful information.

      I worry that, because these results support their prior beliefs, some readers may take these results at face value and push them for policymakers to use before they have been more widely vetted by the scientific community.

    8. On 2020-04-18 05:16:02, user rodger bodoia wrote:

      Deeply flawed methodology. Others have noted (as did the authors) the obvious inherent bias towards those seeking antibody testing (maybe they had symptoms, maybe they knew someone who had symptoms). Also note the bias that is inherent in the method of using Facebook as the messenger, with a brief period between posting on FB and the actual testing. We would need significant information on the other behaviors of people who use FB this frequently and whether they are more or less likely to have engaged in practices that would have put them at risk of acquiring the virus.

      Back-of-the-envelope "smell test": 48,000 infections and only 69 deaths (as of April 17) is an infection fatality rate of 0.14%. This is inconsistent with Diamond Princess data, even if we adjust for age differences. Also compare with https://www.nejm.org/doi/fu... in which they did UNIVERSAL screening of obstetric patients from March 22 to April 4 in NYC and found 15% positivity for SARS-CoV-2. Without lots of population-weighted adjustments, we can interpret this as pretty good evidence of roughly 15% prevalence in NYC (say roughly 1.2 million infections) and roughly 9,000 deaths, for an infection fatality rate of 0.75%.
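
      For reference, the back-of-the-envelope arithmetic above works out as follows (a sketch; all inputs are the comment's own figures, with NYC's population assumed at roughly 8 million):

      ```python
      santa_clara_ifr = 69 / 48000            # ~0.0014, i.e. ~0.14%

      nyc_population = 8_000_000              # assumed, to turn 15% prevalence into a count
      nyc_infections = 0.15 * nyc_population  # ~1.2 million infections
      nyc_ifr = 9000 / nyc_infections         # 0.0075, i.e. ~0.75%
      ```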

    9. On 2020-04-18 18:45:15, user S. MonDragon wrote:

      Dear Dr. Jay et al.,

      I am curious about a couple of other scenarios related to your study. Do those that have SARS-CoV-2 antibodies, also show any other antibodies that might be of particular research interest? And further, how many of these people actually had any symptoms? For example, how many of those who had COVID-19 antibodies also had antibodies for other types of coronaviruses, including SARS-CoV-1. Did the presence or absence of these other antibodies seem to have an effect on symptom severity? I guess what I am asking is, why do some people have such severe symptoms while others can walk around without even knowing that they may have the virus? And, can your samples help us to answer some of these questions?

    10. On 2020-04-19 00:34:15, user SonoranSeeker wrote:

      Considering that this virus is extremely contagious, two or three times that of the flu, and considering that this extremely contagious virus was circulating unabated for a relatively long time, this study is probably pretty accurate. It is also in line with the study in Germany and with modelling of virus spread based on previous coronavirus characteristics.

      This could be why there were so many deaths in such a short time. Let's hope it burned hot, but will flare out just as fast.

    11. On 2020-04-19 02:02:27, user defragmentingthecode wrote:

      The CDC's guideline for reporting Covid-19 deaths is "where the disease caused or is assumed to have caused or contributed to death".

      I really don't know how accurate any studies are when we don't know how many Covid deaths were assumed, and how many were due to the patient's co-morbidities rather than the presence of the virus. Surely we could have got this bit right?

      Here is the CDC link. https://www.cdc.gov/nchs/da...

    12. On 2020-04-19 05:26:28, user chalkful wrote:

      You can talk about selection bias, and that’s valid. But nit-picking every part of this study down to the assumptions made about manufacturer specifications seems ridiculous and very hypocritical when similar assumptions were made with RT-PCR tests that were rushed to market with questionable, if any, validation, and which are relied upon to make public policy decisions which dictate the lives of millions, even billions.

      Did you apply precisely the same level of detailed, critical analysis and nitpick every minute inconsistency in all the other COVID-related “peer reviewed” studies which were rushed to print, and which were gloom-and-doom?

      Why choose to write off all the conclusions drawn by a study with such a substantial effect size because of minor statistical errors? To a non-academic like me, the study appears largely methodologically sound, despite a few flaws, and it is not logical to throw the baby out with the bath water. Even assuming the infection rate is half of what is concluded, that is still significant, and this is the first study, and conclusion, of its kind.

      I suspect that, as with most things COVID-related, much of this is due more to politics than pure intellectual rigor, and that the conclusions drawn by the study shake the foundation of what many believe, which scares them.

    13. On 2020-04-20 02:47:08, user Comfrey's Gone wrote:

      Probably related to Dean Karlen's observations below - but in working through the statistical appendix, it seems like the calculation of the standard error is independent of the number of samples (371 or 401) used by the manufacturer/Stanford team to evaluate the number of false positives.

      To determine the standard error, the authors first compute the cumulative variance by combining variances from each source of uncertainty (finite sample of respondents of 3,330, finite sample for false positives in the serology test and finite sample for false negatives). These separate variances are the variances of the binomial distribution (p(1-p)), not rescaled by the inverse of the sample size. The authors then take this cumulative variance and divide by the number of respondents (3,330), and apply the square root to arrive at the standard error. (.0039 = sqrt(.034/3330)).

      Instead, when the cumulative variance is computed in the equation for Var(Pi) above, I believe that each of the contributing terms should be multiplied by its appropriate 1/N (where N is the relevant sample size, e.g. 3,330 for the Var(q) term, and 371 or 401 for the Var(s) term.)

      One way to assess that the 'N' rescaling doesn't seem right is to think about the limit in which the number of respondents being tested is infinite, the sample size for determining the number of false negatives is also infinite, but there is a finite sample (e.g. 401) used to determine the number of false positives. If you trace through the appendix calculation, you'll then find (if I've done it correctly) that the standard error for 'Pi' (the infection rate) would then be zero, although some error certainly should exist, due to the uncertainty in false positives.

      Other commenters have also raised concerns about the normality assumption in computing the standard error, but the way in which scaling by sqrt(N) has been applied here has a large impact on the calculation of the standard error and resulting confidence intervals.
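
      The limiting-case argument above can be illustrated numerically (a toy sketch, not the paper's actual computation; the rates used here are only placeholders):

      ```python
      import math

      q = 0.015        # raw positive rate among respondents (illustrative)
      s = 0.995        # estimated specificity (illustrative)
      n_resp = 3330    # survey respondents
      n_spec = 401     # specificity validation sample

      var_q = q * (1 - q)   # binomial variance of the raw positive rate
      var_s = s * (1 - s)   # binomial variance of the specificity estimate

      # Appendix-style scaling as the comment describes it: everything / n_resp
      se_appendix = math.sqrt((var_q + var_s) / n_resp)

      # Per-term scaling: each binomial variance divided by its own sample size
      se_per_term = math.sqrt(var_q / n_resp + var_s / n_spec)

      # As n_resp grows without bound, se_appendix goes to 0, while se_per_term
      # stays bounded below by sqrt(var_s / n_spec): the uncertainty from the
      # finite specificity sample does not vanish.
      print(se_appendix, se_per_term)
      ```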

    14. On 2020-04-20 03:50:27, user Tomas Hull wrote:

      Those who insist on the selection bias of this study: would you rather see the ads targeting people working in hospitals and COVID-19 assessment centres, or those providing essential services to those institutions, like mailmen, delivery men, garbage men, cleaning and maintenance, and so on?

      How about people in self-isolation, COVID-19 observation and ICU wards?

      Would this kind of selection bias satisfy anybody?

    1. On 2020-04-22 23:37:10, user Glenn Korbel wrote:

      In the absence of tests for antibodies, which they don't have, they are simply guessing. There is NO way to predict deaths without knowing how many people have already been infected.

      None.

    1. On 2020-04-21 19:40:16, user Brandon B wrote:

      Risk of ventilation was 6.9% in the HQ + AZ group and 14.1% in the no-treatment group. That is double. The article states that these numbers are similar. Not significant?

    1. On 2020-06-23 17:39:31, user Liam Golding wrote:

      Hi great work by your team.

      I'm curious whether you standardize the log inactivation to untreated masks or to the viral stock added. You note that, for bacterial contaminants, untreated coupons are compared to treated ones to obtain log reduction values. But, for example, you note that "For each decontamination method, each sample used for treatment had a corresponding no-treatment control. No-virus blank masks were also included to identify possible contamination." Was the control viral load extracted and then compared to treatments to obtain log reduction values, or was a known quantity of viral load added to controls and used to determine the log reduction?

      **Edit: you draw attention to this in the Materials and Methods.**

      However, compared to other studies (Mills et al., 2018; Lore et al., 2011), your method of extracting viral load is minimal, to say the least. Generally, coupons are cut, placed in a 15/50 mL tube with ~15 mL extraction solution, then vortexed/mixed for 20 minutes. Can you comment on why you chose a 1-minute vortex with 1.3 mL of solution over the common procedure?
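
      For clarity, the two standardizations being asked about can be sketched as follows (hypothetical numbers; `log_reduction` is an illustrative helper, not from the paper):

      ```python
      import math

      def log_reduction(reference_titer, treated_titer):
          """Log10 reduction value (LRV) of a treated load vs. a reference load."""
          return math.log10(reference_titer / treated_titer)

      spiked_load  = 1e6   # known quantity of virus added to each coupon (hypothetical)
      control_load = 1e5   # load recovered from the no-treatment control (hypothetical)
      treated_load = 1e2   # load recovered after decontamination (hypothetical)

      # The two standardizations the comment distinguishes:
      lrv_vs_control = log_reduction(control_load, treated_load)  # 3.0 logs
      lrv_vs_spiked  = log_reduction(spiked_load, treated_load)   # 4.0 logs (includes extraction loss)
      ```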

    1. On 2020-04-07 21:56:51, user VesnaV wrote:

      The idea of the article is very interesting. But I am afraid that the trends are changing. It would be very useful to update the data on covid-19 and replicate the analysis. Could you please do it? Thanks a lot!

    1. On 2020-06-24 18:16:30, user Gerard Cangelosi wrote:

      Nice study, and a very valuable addition. I collaborated on one of the previous studies you cited (Tu YP et al., 2020). May I suggest an alternative explanation for the difference between your findings and ours? You used all-purpose flock swabs, and we used foam swabs. These differences aren't trivial (e.g. see https://www.medrxiv.org/content/10.1101/2020.04.28.20083055v1). I would urge you to note this possibility in your manuscript. Thank you!

      Jerry

    1. On 2020-06-24 18:56:17, user André GILLIBERT wrote:

      Title: Proposal for improved reporting of the Recovery trial
      André GILLIBERT (M.D.)1, Florian NAUDET (M.D., Ph.D.)2
      1 Department of Biostatistics, CHU Rouen, F-76000, Rouen, France
      2 Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d’Investigation Clinique de Rennes), F-35000 Rennes, France

      **Introduction**

      Dear authors,

      We read with interest the pre-print of the article entitled “Effect of Dexamethasone in Hospitalized Patients with COVID-19: Preliminary Report”. This reports the preliminary results of a large-scale randomized clinical trial (RCT) conducted in 176 hospitals in the United Kingdom. To our knowledge, it is the largest-scale pragmatic RCT comparing treatments of COVID-19 with curative intent. The 28-day survival endpoint is objective, clinically relevant and should not be influenced by the measurement bias that may be caused by the open-label design. While 2,315 study protocols related to COVID-19 had been registered on ClinicalTrials.gov as of June 24th 2020, Recovery is, to our knowledge, the only randomized clinical trial on COVID-19 that succeeded in including more than ten thousand patients. The open-label design and simple electronic case report form (e-CRF) may have helped to include a non-negligible proportion of all COVID-19 patients hospitalized in the United Kingdom (UK). Indeed, as of June 24th 2020, approximately 43,000 patients died of COVID-19 in hospital in the UK, of whom approximately 0.24 × 11,500 = 2,760, that is, more than 6% of all hospital deaths from COVID-19, were included in the Recovery study.

      Having read with interest version 6.0 of the publicly available study protocol (https://www.recoverytrial.net/files/recovery-protocol-v6-0-2020-05-14.pdf), we had hoped for more details in the reporting of the methods and results of this trial, and we take advantage of the open peer review process offered by pre-print servers to suggest improving some aspects of the reporting before the final peer-reviewed publication. Please find below some easy-to-answer comments that may help to improve the article overall.
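
      The arithmetic behind the "more than 6%" figure above can be checked directly (a sketch using the letter's own numbers):

      ```python
      uk_hospital_deaths = 43_000  # approximate UK in-hospital COVID-19 deaths, as of June 24th 2020
      trial_mortality = 0.24       # approximate 28-day mortality among Recovery patients
      trial_patients = 11_500      # approximate number of patients randomized

      deaths_in_trial = trial_mortality * trial_patients          # ~2,760
      share_of_uk_deaths = deaths_in_trial / uk_hospital_deaths   # ~6.4%, i.e. more than 6%
      ```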

      **Interim analyses and multiple treatment arms**

      The first question is about interim analyses. The protocol (version 6.0) specifies that the trial is adaptive and that randomization arms may be added, removed or paused according to decisions of the Trial Steering Committee (TSC), basing its decisions on interim analyses performed by the Data Monitoring Committee (DMC) and communicated when “the randomised comparisons in the study have provided evidence on mortality that is strong enough […] to affect national and global treatment strategies” (protocol, page 16, section 4.4, 2nd paragraph). The Supplementary Materials of the manuscript specify that “the independent Data Monitoring Committee reviews unblinded analyses of the study data and any other information considered relevant at intervals of around 2 weeks”. This suggests that many interim analyses may have been performed from the start (March 9th) to the end (June 8th) of the study.

      Statistically, interim analyses not properly taken into account generate an inflation of the type I error rate, which may be increased further by the multiple treatment arms. Methods such as triangular tests make it possible to control the type I error rate. Most methods of controlling the type I error rate under interim analyses require that the maximal sample size be defined a priori and that the timing and number of interim analyses be pre-planned. This protocol being adaptive, new arms were added, implying new statistical tests in interim analyses, and no pre-defined sample size, as seen on page 2 of the protocol: “[...] it may be possible to randomise several thousand with mild disease [...], but realistic, appropriate sample sizes could not be estimated at the start of the trial.” This makes control of the type I error rate difficult. The fact that the study was stopped at the final analysis, as we understand from the current draft, rather than at an interim analysis, does not remove the type I error rate inflation. The multiple treatment arms lead to a further inflation of the type I error rate.

      The current manuscript does not specify any procedure to address these problems. The Statistical Analysis Plans (SAP) V1.0 (in section 5.5) and V1.1 (in section 5.6) specify that “Evaluation of the primary trial (main randomisation) and secondary randomisation will be conducted independently and no adjustment be made for these. Formal adjustment will not be made for multiple treatment comparisons, the testing of secondary and subsidiary outcomes, or subgroup analyses.”, and nothing is specified about interim analyses. Therefore, we conclude that no P-value adjustment for multiple testing has been performed, neither for multiple treatment arms nor for interim analyses. If an interim analysis assessing 4 to 6 treatment arms at the 5% significance level was performed every 2 weeks from March to June, up to 50 tests may have been performed, leading to major inflation of the type I error rate. In our opinion, the best way to assess and perhaps fix the type I error rate inflation is to report with maximal transparency every interim analysis that has been performed, with the following information:

      1. Date of the interim analysis and number of patients included at that stage
      2. Was the interim analysis planned (e.g. every 2 weeks, as per the supplementary material) or unplanned (e.g. due to an external event, for instance the article by Mehra et al. about hydroxychloroquine published in The Lancet, doi:10.1016/S0140-6736(20)31180-6), and if exceptional, why?
      3. Which statistical analyses, on which randomization arms, were performed at each stage
      4. If predefined, what criteria (statistical or not) would have led to early stopping of a randomization arm for inefficacy, and what criteria would have led to stopping for proven efficacy?
      5. If statistical criteria were not predefined, did the DMC provide a rationale for its choice to communicate or not the results to the TSC? If yes, could the rationale be provided?
      6. The results of the statistical analyses performed at each step
      7. The decision of the DMC to communicate or not the results to the TSC, and which results were reported, as the case may be

      The information about interim analyses and multiple randomization arms will help to assess whether the inflation of the type I error rate is severe or not. A post hoc multiple-testing adjustment, taking into account the many randomized treatments and interim analyses, should be attempted and discussed, even though there may be technical issues due to the adaptive nature of the protocol.
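
      The scale of the potential inflation can be illustrated with a short sketch (Python; the worst case of 50 independent tests at the 5% level is the scenario sketched above, and it is a rough upper bound, since real interim analyses are correlated and the true inflation would be smaller):

      ```python
      # Familywise error rate if n_tests independent tests are each run at alpha = 5%
      alpha = 0.05

      for n_tests in (1, 10, 50):
          fwer = 1 - (1 - alpha) ** n_tests
          print(f"{n_tests:>2} tests: familywise error rate ~ {fwer:.1%}")
      ```

      With 50 tests, the chance of at least one false-positive "significant" result approaches 92%.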

      **Adjustment for age**

      An adjustment for age (in three categories: <70 years, 70-79, >=80 years; see legend of table S2) in a Cox model was performed for the comparison of dexamethasone to standard of care in the article. This adjustment was not specified in version 6.0 of the protocol but was, according to the manuscript, “added once the imbalance in age (a key prognostic factor) became apparent”. This is confirmed by the addition of the words “However, in the event that there are any important imbalances between the randomised groups in key baseline subgroups (see section 5.4), emphasis will be placed on analyses that are adjusted for the relevant baseline characteristic(s).” in section 5.5, page 16, of the SAP V1.1 of June 20th, compared to the SAP V1.0 of June 9th, which specified a log-rank test. The SAP V1.0 of June 9th may have been written before the database was analyzed (data cut June 10th), but the SAP of June 20th was probably written after preliminary analyses had been performed. This is consistent with the words “became apparent” in the manuscript. Therefore, in our opinion, this adjustment must be considered a post hoc analysis rather than the main analysis. Moreover, even though the SAP V1.1 specifies that an “important imbalance” will lead to an “emphasis” on adjusted analyses, it does not change the primary analysis (see section 5.1.1, page 14). It is not clear what “important imbalance” means. To interpret that, we performed statistical tests to assess the balance of the key baseline subgroups specified in SAP V1.1 (see section 5.4):

      1. Risk group (three risk groups with approximately equal numbers of deaths, based on factors recorded at randomisation). Its distribution is shown in figure S2. A chi-square test on the distribution of risk groups in the Dexamethasone (1255/500/349) and Usual care (2680/926/715) groups leads to a P-value of 0.092. A chi-square test for trend yields a P-value of 0.23.
      2. Requirement for respiratory support at randomisation (None; Oxygen only; Ventilation or ECMO). P-value = 0.89 for the chi-square test and P-value = 0.86 for the chi-square test for trend.
      3. Time since illness onset (<=7 days; >7 days). P-value = 0.17.
      4. Age (<70; 70-79; 80+ years). P-value = 0.016 for the chi-square test, P = 0.019 for the chi-square test for trend.
      5. Sex (Male; Female). P-value = 0.97 for the chi-square test.
      6. Ethnicity (White; Black, Asian or Minority Ethnic). No data found.

      The criterion defining “important imbalance” seems to be statistical significance at the 0.05 threshold; however, that should have been stated, and tests for all the other variables should have been provided too.

      First, this adjustment, from a theoretical point of view, was not necessary, since the study was randomized; if the exact condition of imbalance triggering the adjustment had been pre-specified in the protocol or SAP before the imbalance was known, it could induce a very slight reduction of the type I error rate and power. However, as it was performed when the imbalance was known, there is a risk that the sign of the imbalance (i.e. higher age in the dexamethasone group) influenced the choice of adjustment. Indeed, an adjustment conditional on a higher age in the dexamethasone group will increase the estimated effect of dexamethasone under these conditions, and so inflate the type I error rate. If the same conditional adjustment were further considered for other prognostic variables, the inflation could be even higher.

      Unless there is strong evidence that the amendment to the SAP was made without knowledge of the sign of the imbalance (higher age in the dexamethasone group), we suggest that the primary analysis be kept as originally planned, without adjustment, and that the age adjustment be performed in a sensitivity analysis only. Whether the sign of the imbalance was known is unclear from the last version of the SAP (V1.1, June 20th) and from the manuscript. In addition, in an open-label trial, it is always better to stick to the protocol.
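
      As an aside, the first chi-square test above can be reproduced with a short stdlib-only sketch (observed counts as given in the letter; the p-value formula is exact for 2 degrees of freedom):

      ```python
      import math

      # Risk-group counts, Dexamethasone vs. Usual care
      dex   = [1255, 500, 349]
      usual = [2680, 926, 715]

      n_dex, n_usual = sum(dex), sum(usual)
      total = n_dex + n_usual

      chi2 = 0.0
      for o_d, o_u in zip(dex, usual):
          col = o_d + o_u
          e_d = col * n_dex / total     # expected count, dexamethasone row
          e_u = col * n_usual / total   # expected count, usual-care row
          chi2 += (o_d - e_d) ** 2 / e_d + (o_u - e_u) ** 2 / e_u

      p_value = math.exp(-chi2 / 2)     # chi-square survival function, df = 2
      print(round(p_value, 3))          # ~0.092, matching the letter
      ```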

      **Results in other treatment arms**

      The manuscript specifies that “the Steering Committee closed recruitment to the dexamethasone arm since enrolment exceeded 2000 patients.” It is not stated whether any other treatment arm has exceeded 2000 patients, or whether the study is still ongoing. Results of treatment arms that have been stopped should be provided (all arms having enrolled more than 2000 patients?). If not, the number of patients randomized in the other treatment arms should, at least, be reported. If the study has been completely stopped, all treatments should be analyzed and reported, unless there is a specific reason not to do so; that reason should be stated as the case may be. These data would be useful to provide evidence on other molecules. They would also clarify the number of statistical tests that have or have not been performed, providing more information about the overall inflation of the alpha risk.

      **Sample size**

      The paragraph about the sample size suggests that inclusions were planned, at some point, to stop when 2000 patients had been included in the dexamethasone arm. The amended protocol (May 14th), the SAP V1.0 (June 9th) and the SAP V1.1 (June 20th, 4 days after the results were officially announced) all have a paragraph about the sample size, but all specify that the sample size is not fixed, and none specifies any stopping criterion based on sample size. There are 2,104 patients included in this arm, which is substantially larger than the target of 2,000 patients. The exact chronology and methodology should be clarified: when was the sample size computed, and what was the exact criterion for stopping the research? Could the document (internal report?) related to this sample size calculation and to the statistical or non-statistical decision to stop the research be published as supplementary material?

      Indeed, assessment of the type I error rate requires knowing exactly when and why the research was stopped: stopping for a low inclusion rate of new patients or for reaching the target sample size cannot be interpreted in the same way as stopping for high efficacy observed at an interim analysis.

      **Future of the protocol**

      With the new evidence about dexamethasone, the protocol will probably be stopped or will evolve. Future recruitment may slow, as the peak of the epidemic curve in the United Kingdom has passed. The past, present and future of the protocol also need to be known to assess the actual type I error rate. Indeed, future analyses that have not yet been performed also influence the overall type I error rate. That is why we suggest that the authors provide the daily or weekly inclusion rate from March to June and discuss the future of the study.

      **Loss to follow-up**

      Table S1 shows that follow-up forms have been received for 1940/2104 (92.2%) patients of the dexamethasone group and 3973/4321 patients of the usual care group (91.9%). The patients without follow-up forms (8.5% overall) may either be lost to follow-up or have been included in the last 28 days before June 10th 2020 (data cut). The manuscript mentions that 4.8% of patients “had not been followed for 28 days by the time of the data cut”, suggesting that 8.5% − 4.8% = 3.7% of patients are lost to follow-up, but that is our own interpretation. We suggest that the authors report the actual number of patients lost to follow-up and how their data have been imputed or analyzed. The number lost to follow-up may differ across outcomes. For instance, if the Office for National Statistics (ONS) data have been used for vital status assessment, there should be no loss to follow-up on that outcome.

      **Vital status**

      The current manuscript only specifies the web-based case report form (e-CRF), filled in by hospital staff, as a source of information, suggesting that it is the only source of information about vital status. The document entitled “Definition and Derivation of Baseline Characteristics and Outcomes” provided at https://www.recoverytrial.n... specifies many other sources. For instance, vital status had to be assessed from the Office for National Statistics (ONS). Other sources, including Secondary Use Service Admitted Patient Care (SUSAPC) and the e-CRF, could be used for interim analyses. The ONS was considered the defining (most reliable) source. Whether the ONS data have been used should be clarified. If the ONS data have been used, statistics on the agreement of the two data sources (e-CRF and ONS) could be provided to help assess the quality of the data. If the ONS data have not been used, this deviation from the planned protocol should be documented.<br /> The manuscript, as well as the recovery-outcomes-definitions-v1-0.pdf file, specifies that the follow-up form of the e-CRF is completed at “the earliest of (i) discharge from acute care (ii) death, or (iii) 28 days after the main randomisation”. If the follow-up form is not updated further, patients discharged alive before day 28 (e.g. at day 14) may have incomplete vital status information at day 28. The following information should be specified:<br /> 1. Whether the follow-up form of the e-CRF had to be updated by hospital staff at day 28 for these patients<br /> 2. If the response to (1) is yes, whether there was a means to distinguish between a patient lost to follow-up at day 28 (form not updated) and a patient discharged and alive at day 28 (form updated to “alive at day 28”)<br /> 3. If the response to (2) is yes, how many patients discharged before day 28 were lost to follow-up at day 28<br /> 4. If the response to (2) is yes, how their vital status at day 28 has been imputed or managed in models with censoring (log-rank, Kaplan-Meier, Cox)<br /> Of course, this information is mainly needed if the ONS and SUSAPC data have not been used.<br /> The quality of the vital status information is critical in such a large-scale, open-label, multicentric trial, because there is a risk that one or more centers selectively report deaths, biasing the primary analysis.

      **Inclusion distribution by center**

      A multicentric study provides stronger evidence than a single-center study, but sometimes a few centers include most of the patients, with a risk of low-quality data or selection bias. The very high number of patients included in the Recovery trial suggests that many centers included many patients, but the distribution of inclusions per center should nonetheless be reported.

      **Randomization**

      The protocol specifies that “in some hospitals, not all treatment arms will be available (e.g. due to manufacturing and supply shortages); and at some times, not all treatment arms will be active (e.g. due to lack of relevant approvals and contractual agreements).” This is further clarified in the SAP V1 (section 2.4.2 Exclusion criteria, page 8) by the sentence “If one or more of the active drug treatments is not available at the hospital or is believed, by the attending clinician, to be contraindicated (or definitely indicated) for the specific patient, then this fact will be recorded via the web-based form prior to randomisation; random allocation will then be between the remaining (or indicated) arms.” This shows that randomization arms could be closed on an individual basis, at the time a patient was included, on grounds of contraindication or definite indication. It seems that the “standard of care” group could not be removed and that at least one other randomization arm had to be kept, as suggested by the words “random allocation will then be between the remaining arms (in a 2:1:1:1, 2:1:1 or 2:1 ratio)” in section 2.9.1, page 11 of the SAP V1.0. Even exclusion of a single randomization arm can lead to imbalance between groups. For instance, if physicians believed that a treatment was contraindicated for the most severe patients, only non-severe patients could be randomized to that treatment’s arm, while the most severe patients would be randomized to the other arms. Several things can be done to assess and address this bias. First, report how many times this feature has been used and which randomization arms were most often excluded. If it has been used many times, provide the pattern of use, which helps to assess whether it was a collective measure (e.g. a 2-week shortage of a treatment in a center, with no major selection bias) or an individual one. If its use has been rare, a sensitivity analysis could simply exclude these patients.
If it has been frequent, we suggest a statistical method to analyze these data without bias, based on the following principles: patients randomized between three arms A, B and C (population X) are comparable for the comparison of A to B. Patients randomized between A, B and D (population Y) are comparable for the comparison of A to B. Population X and population Y may differ but, within each population, A can be compared to B. Therefore, the within-X comparison of A to B and the within-Y comparison of A to B are both valid and can be meta-analyzed to assess a global difference between A and B. This can simply be done with an adjustment on the population (X or Y) in a fixed-effects multivariate model. Pooling of the X and Y populations should not be performed without adjustment.<br /> A second problem with randomization exists, although the dexamethasone arm is the least affected. Randomization arms have been added in this adaptive trial. When a new randomization arm is added, new patients may be randomized to this arm and fewer patients are randomized to the other arms. Consequently, the distribution of dates of inclusion may differ between groups. This may affect mortality at two levels: (1) the medical indication for hospitalization may have evolved as the epidemic evolved, with hospitalization reserved for the most severe patients at the peak of the epidemic and perhaps wider hospitalization criteria at its start, and (2) the case mix of patients included in the Recovery trial may have evolved. Indeed, even if centers should have included as many patients as possible as soon as the inclusion criteria were met, it is possible that they included only part of the eligible patients and that this fraction evolved over time.
This bias can be easily assessed and fixed: the curves of inclusions in the different arms and mortality rate in the Recovery trial can be drawn as a function of date (from March to June) and an adjustment on date of inclusion may be performed in a sensitivity analysis.
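The stratum-respecting comparison described above (compare A with B only within each randomization population, then pool) can be illustrated with a Mantel-Haenszel estimator. All counts below are hypothetical and purely for illustration:

```python
# Stratified A-vs-B comparison: population X was randomized among A/B/C,
# population Y among A/B/D. A and B are compared within each stratum and
# the strata are pooled with the Mantel-Haenszel odds-ratio estimator.

def mantel_haenszel_or(strata):
    """strata: list of 2x2 tables (a, b, c, d) per stratum, where
    a = arm-A deaths, b = arm-A survivors, c = arm-B deaths, d = arm-B survivors."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

strata = [
    (30, 170, 40, 160),  # population X (hypothetical counts)
    (25, 75, 35, 65),    # population Y (hypothetical counts)
]
print(f"pooled A-vs-B odds ratio: {mantel_haenszel_or(strata):.2f}")
```

A fixed-effects regression with a population indicator, as suggested above, gives essentially the same stratum-respecting comparison; unadjusted pooling of X and Y would not.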

      **Conclusion**

      Recovery is the study with the best methodology that we have seen on COVID-19 treatments with curative intent, and we salute the initiative of transparently publishing the protocol, its amendments, the statistical analysis plan and the first draft of the report. We hope that our reporting suggestions will be taken into account in the final version of the paper. We think that discussing these points will qualify the interpretation of the results, further improve the transparent approach adopted by the designers of the study and improve the reliability of the conclusions. We expect high-quality reporting of the final results, with full transparency on interim analyses, statistical analysis plans and statistical analysis reports. We hope that these comments are helpful, and we again acknowledge that this study is not only outstanding in terms of the importance of its results but is also a stellar example for the whole field of therapeutic research. We invite other researchers to comment on this article and engage in Open Science.

    2. On 2020-06-24 05:32:27, user Gavin Donaldson wrote:

      Were patients ineligible for the dexamethasone arm excluded completely from the Recovery trial, or allocated to the other arms, including the standard care arm?<br /> Could the reasons for the exclusions be included in a CONSORT diagram of participant flow?<br /> There was an imbalance in age between the two arms. Are there any data on obesity or hypertension, since these are important risk factors for COVID-19 mortality?

    3. On 2020-06-29 17:14:09, user Aiman Tulaimat wrote:

      The study reports ~40% mortality among ventilated patients in the control arm. This is much lower than what is reported by the critical care audit from the UK, which reports mortality >60% in such patients. The study reports ~25% mortality and ~60% discharged alive. Are we missing 15% of patients? If the analysis is a Cox model, why does the report use relative risks? How did 13% of patients on no oxygen therapy die? This seems very high. Did their COVID deteriorate, or did they die from other causes? Why were these patients hospitalized if they were not hypoxic? Did they decline life support when it became needed? Were they not on oxygen because they were in a hospice-like setting? How did the other patients die? Were patients on the ventilator made DNR early? Was prone positioning used? Was it used more in the dexamethasone arm? Was there an imbalance in the randomization by center?

    4. On 2020-07-10 18:25:07, user Joanna Spencer-Segal wrote:

      The authors speculate in the discussion that "It is also possible there is an effect via mineralocorticoid receptor binding in the context of SARS-CoV-2 induced dysregulation of the renin-angiotensin system." It is not clear what this means, but dexamethasone has minimal activity at the mineralocorticoid receptor, which distinguishes it from the other corticosteroids often used in critically ill patients (methylprednisolone, hydrocortisone). More clarification of what they mean by "mineralocorticoid effect", and the rationale for why dexamethasone was chosen for this study, would be welcome.

    1. On 2020-06-25 04:04:22, user Greg WHITTEN wrote:

      Thank you for your work. I am curious, however, about some parts of your article.

      First, I read your paper and could not see where you tried to control for the introduction of other virus-containment measures such as school closures, lock-downs, and physical distancing. Did I miss something in your paper?

      Second, I have a question about your model #4 on page 9. You wrote "All regression coefficients were statistically significant in this model." The coefficient for the non-mask-wearing rate in late April and early May is significant but negative, i.e., not wearing a mask in late April and early May reduces deaths on May 13th. Do you have any thoughts about this?

      Third, did you consider performing a panel regression using deaths on all days, say, starting from March 31st (about 2 weeks after the March mask non-wearing rate) instead of relying just on deaths from May 13? Although you did explain why you chose May 13th, it may be better to use all death dates after, say, the incubation period for the virus.

      Fourth, your section "Prediction of mask non-wearing rates" suggests that your regression analysis suffers from multicollinearity. Do you have any concerns about this?

    1. On 2020-06-25 11:29:20, user MAGB wrote:

      Your basic reproductive number of 2.68 based on early Chinese data is at odds with the effective reproduction number of less than one in all Australian states by Easter, as tweeted by James McCaw. His data indicate that voluntary controls and border closures had the epidemic well under control before lock-downs had any effect.

    1. On 2020-04-08 22:31:44, user Mansour Tobaiqy wrote:

      I am glad to say that our manuscript, Therapeutic Management of COVID-19 Patients: A Systematic Review, has now been accepted for publication in Infection Prevention in Practice @IPIP_Open, the official journal of the Healthcare Infection Society @HIS_infection.

      The final version will be available soon on their site. Thank you very much, medRxiv, for sharing our SR with a large audience.

    1. On 2020-04-10 22:02:12, user Todd Johnson wrote:

      Have any of the causal inference researchers at Harvard taken a look at this? Do we know enough to create a few candidate causal DAGs to know what to adjust for?

    1. On 2020-07-02 15:42:32, user Kamran Kadkhoda wrote:

      The entirety of covid serology remains questionable with lack of clinical usefulness; the specimen type therefore is irrelevant...

    1. On 2020-07-05 20:02:42, user Rich Nunziante wrote:

      There’s a word missing in the first paragraph of the abstract: “Of the 9 locations, 3 had one or employees infected with SARS-CoV-2,...” Should that be “one or two” since later you mention “both”?

    1. On 2020-07-14 15:00:51, user Chyke Doubeni wrote:

      The title should reflect the multicomponent nature of the intervention so that it is clear to readers that it used CHWs to help people navigate the engagement.

    1. On 2020-04-17 18:17:20, user LASD wrote:

      So...uh...what about Sweden? Have yet to see any reasonable explanation for why the lack of lockdown there didn't lead to catastrophic consequences and bodies piled up in the streets?

      Significantly lower number of confirmed cases/deaths than Switzerland and all the other major western European countries, Belgium, etc.

    2. On 2020-04-19 15:53:35, user JGaltbna wrote:

      Nothing happens “right now”. I suggest actually reading the WH plan to reopen and what has to happen before anything is “relaxed” per policy. 3 phases, each lasting at least 14 days? Ring a bell? The only restrictions being eased now are things that should never have been restricted like walking on a beach. The danger isn’t the policy but that people ignore the policy.

    1. On 2020-07-18 18:45:51, user James Truscott wrote:

      Hi, I think there is an error in the model as laid out in Supplementary Text File 1. The variable z represents all non-susceptibles, which includes the infectious infected, y. The rate-of-loss-of-immunity term in equation 1 is gamma*z, but infectious individuals presumably don't have immunity to lose. They first recover (at rate sigma) and can then lose immunity. The term in equation 1 should therefore be gamma*(z-y). This change will probably affect the algebraic result, and may change the dynamics significantly at some time points and/or parameter values.

    2. On 2020-07-21 19:08:09, user Jeremy Rolls wrote:

      Fascinating paper. Looking at the antibody data (such as there is any published here in the UK) about 18% of people in London have antibodies compared to about 8% nationally. On that basis alone 82% of Londoners may still get infected compared to 92% nationally - i.e. you would expect the mortality rate in London still to be pretty close to the national rate. Yet the hospital death stats for covid-19 in recent weeks shows London's rate consistently to be less than 40% of the national rate. Something else must, therefore, be going on - a) London is locking down better (unlikely), b) antibody immunity does not give the complete picture (possible given the data coming out of Sweden showing that for every person having antibodies two others have T-cell immunity) or c) there is a % of the population who have pre-existing resistance (from exposure to other corona-viruses) or are biologically incapable of getting infected. Ruling out a), a quick bit of maths shows about 75% of the population must fall into b) or c). So, on that basis, in London well over 90% have either been exposed to the virus or have pre-existing immunity and maybe 80-85% nationally. I suggest herd immunity has probably been achieved in London and is close in many other parts of the UK.

    1. On 2020-07-17 01:54:28, user Born in Akron wrote:

      Is LD-RT a widely known specific therapy? This paper does not indicate the type of radiation. X-rays, Gamma rays, proton accelerator, sun lamp? The dose is 1.5 Grays = 1.5 Joules/kg = 150 rad. But the biological effect in rem or Sieverts depends on the type of radiation and duration of the exposure. Even if LD-RT is always, say, X-rays, shouldn't the effects depend on the energy of the X-rays? Unless LD-RT has a unique definition this preprint is deliberately irreproducible, perhaps to gain advantage for patent protection during a worldwide pandemic.
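As a unit-bookkeeping aside (a sketch of the dosimetry point above, not a statement about the preprint's methods): absorbed dose in gray is energy per mass, 1 Gy = 1 J/kg = 100 rad, and equivalent dose in sieverts scales it by the ICRP radiation weighting factor w_R, which equals 1 for X-rays and gamma rays:

```python
# Absorbed dose vs equivalent dose for the 1.5 Gy figure quoted above.
RAD_PER_GY = 100.0   # 1 Gy = 1 J/kg = 100 rad

dose_gy = 1.5                      # dose quoted in the preprint
dose_rad = dose_gy * RAD_PER_GY    # 150 rad, as the comment notes

w_r_photons = 1.0                  # ICRP weighting factor for X-rays / gamma rays
dose_sv = dose_gy * w_r_photons    # 1.5 Sv if delivered as photons

print(dose_rad, dose_sv)
```

For protons or neutrons w_R is larger than 1, which is exactly why the comment's question about the radiation type matters for the biological effect.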

    1. On 2020-07-20 21:52:28, user Deborah Barr wrote:

      It might be useful to correlate by medications taken. Depletion of magnesium and zinc affects clotting.

      "drug-induced nutrient depletions are well known by pharmacists, many are underdiscussed and subsequently underdiagnosed and undertreated."<br /> 33 citations.<br /> https://www.uspharmacist.co...

      Uwe Gröber's Magnesium and Drugs, https://www.ncbi.nlm.nih.go..., with an excellent image of the ways that drugs interfere with nutrient levels in the body, and a table specific to magnesium.

    1. On 2020-07-21 16:14:03, user Kamran Kadkhoda wrote:

      Based on the current estimates, the sero-prevalence in Idaho is around 4% at most; such high percentages are most likely false positives. I refer the authors to the study just posted here on medRxiv from China showing a sero-prevalence of 2% or less in Wuhan! They used PRNT to confirm the results. That's the right way.<br /> Abbott is clear in their IFU by saying they did NOT use samples from cases with confirmed infection with common CoVs…<br /> Despite publications using "convenience samples", specificity shows its shortcomings when used at large scale in the field... here's one example!

    1. On 2020-07-22 17:00:56, user Robin Whittle wrote:

      As Karl Pfleger suggested, I hope there will be more detailed information on 25OHD levels, symptoms at admission and as treatment progresses.

      In light of a recent review (Charoenngam & Holick, https://doi.org/10.3390/nu12072097), which states that 40 to 60 ng/ml 25OHD is required for proper immune system function, the 25OHD thresholds and D3 doses seem inadequate. This article also recommends an initial 1.25 mg D3 (50,000 IU) for all COVID-19 patients.

      According to the present article, patients with 30 ng/ml or more are given no D3 at all. Daily doses for those with lower levels are only 0.02 mg (800 IU) per day, which is 20% or less of what most people would require to maintain 40 ng/ml - assuming the supplement was taken with a fatty meal and well absorbed. https://journals.plos.org/p... indicates that average-weight people need about 0.125 mg (5000 IU) a day to reach the middle of the 40 to 60 ng/ml target range.
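The IU-to-milligram figures above follow from the standard definition 1 IU of vitamin D3 = 0.025 µg; a quick check of the doses quoted:

```python
# Vitamin D3 unit conversion: 1 IU = 0.025 micrograms.
MCG_PER_IU = 0.025

def iu_to_mg(iu):
    """Convert an IU dose of vitamin D3 to milligrams."""
    return iu * MCG_PER_IU / 1000.0

print(iu_to_mg(800))     # the 800 IU daily dose -> 0.02 mg
print(iu_to_mg(5000))    # the 5000 IU maintenance figure -> 0.125 mg
print(iu_to_mg(50000))   # the 50,000 IU loading dose -> 1.25 mg
```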

      Surely all these low 25OHD levels (the researchers report 21.6% of patients with initial levels below 6 ng/ml, some below the 3.2 ng/ml detection limit) warrant urgent action. What objection would there be to bringing all patients up to at least 40 ng/ml with oral or IV calcifediol (25OHD, e.g. Rayaldee)? This would go into circulation immediately, without relying on hepatic conversion of D3 to 25OHD, which takes days or a week or so - even if the liver is functioning properly.<br /> The present article cites, as prior observations of low vitamin D levels correlating with COVID-19 symptom severity, an Indonesian article (26), an Indian article (27) and one from the Philippines (28). The first two have been withdrawn. Please see my page https://researchveracity.in... for the reasons which lead me to believe that none of these three articles report actual research.

      I think that the present article, along with a recent one (An autocrine Vitamin D-driven Th1 shutdown program can be exploited for COVID-19, Reuben McGregor et al., 2020-07-19, https://www.biorxiv.org/con...), is an important step in elucidating the role of vitamin D deficiency in severe COVID-19 symptoms. I have cited both articles on my page on vitamin D and COVID-19: http://aminotheory.com/cv19/ .

      More research is urgently needed, but since vitamin D is a safe, inexpensive nutrient in which most people are deficient (by the 40+ ng/ml standards we now know are important for immune system health), robust supplementation programs for all in need (most humans) need not await further research or clinical trials.

    1. On 2020-12-19 17:11:21, user Gary Bayer wrote:

      As an actuary whose required training includes the construction of mortality tables, life tables and life expectancies, I attempted to verify the results. Unfortunately, the details of the methods are too vague to be easily followed, so instead I attempted a standard approach to creating life expectancies. Starting with the 2017 US life tables, I explored modifying the "qx's" (probabilities of death within the next year for an individual aged x), but, assuming a one-time nature of Covid-19, only the specific current age (and perhaps the following age) should be adjusted for any age cohort. Therefore, for an individual aged 10, only the qx for age 10, and perhaps age 11, should be adjusted to reflect the impact of Covid-19 on life expectancy. The age adjustment should reflect mortality risk at that age. At this point in time, based on the CDC's reporting of excess mortality, there is no evidence of increased mortality for individuals under the age of 15. In other words, Covid-19 has not changed this cohort of individuals at all.<br /> The best guess that I can make as to what the authors were trying to express is that Covid-19 has reduced, or is expected to reduce, the average age at death this year by a year. I do not know whether this is true, but I can see some merit in estimating that result.<br /> One final note: I visit the IHME Covid-19 website almost daily. It is a great tool for seeing the current state of Covid-19 in the United States, and a great tool for policy makers to get insights into what they may need to plan for in the next couple of weeks. However, a simple look at its various projections for daily deaths clearly shows the naivety of the estimates of what might happen beyond a couple of weeks. An adage that I always rely on as an actuary is that results can only be as good as the assumptions - even if the model being used is good.
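The standard life-table calculation described above can be sketched in a few lines. The qx curve below is a made-up Gompertz-style toy, not the 2017 US table; the point is only to show how a one-time shock to a single age's qx feeds through to life expectancy at birth:

```python
# Curtate life expectancy at birth from one-year death probabilities qx,
# with a one-time mortality shock applied to a single age.

def life_expectancy(qx):
    """e0 = sum over ages of the probability of surviving to that age."""
    e0, surv = 0.0, 1.0
    for q in qx:
        surv *= 1.0 - q
        e0 += surv
    return e0

# Toy Gompertz-like mortality curve (illustrative only, not US 2017 data).
qx = [min(0.0005 * 1.09 ** age, 1.0) for age in range(111)]

shocked = list(qx)
shocked[70] *= 1.3   # hypothetical one-time 30% excess mortality at age 70

print(f"baseline e0 = {life_expectancy(qx):.2f}")
print(f"shocked  e0 = {life_expectancy(shocked):.2f}")
```

Shocking only one age's qx, as the comment argues is appropriate for a one-time event, moves e0 far less than rescaling the whole curve would.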

    2. On 2020-07-15 13:15:39, user E Y wrote:

      Something is wrong here: the IHME-projected 2020 total US deaths are about 250,000, which is 0.08% of the US population. How can that cause a 1% reduction in the life expectancy of the US population?

    1. On 2020-07-25 12:17:53, user John H Abeles wrote:

      Hydroxychloroquine ( HCQ ) and Covid19

      The negative observational and controlled clinical studies to date refer mainly to using hydroxychloroquine (HCQ) in serious, later stage, hospitalised Covid19 patients

      In both the Solidarity/WHO study and the Recovery/UK study, extremely high, even massive doses (up to 6 times those recommended for early Covid19 patients!) were used for unknown reasons. Since the half-life of HCQ is around 21-30 days, these massive daily doses could have caused very high blood levels and were likely fatal in some instances - so HCQ-group deaths could have been caused by such high-dose regimens, probably skewing the results...
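On the half-life point: under simple one-compartment kinetics (a back-of-the-envelope sketch only; real HCQ pharmacokinetics are multi-compartment with heavy tissue binding), a drug dosed every τ days accumulates at steady state by a factor 1/(1 − e^(−kτ)), with k = ln 2 / t½:

```python
import math

def accumulation_factor(t_half_days, tau_days=1.0):
    """Steady-state accumulation over a single dose, one-compartment model."""
    k = math.log(2) / t_half_days
    return 1.0 / (1.0 - math.exp(-k * tau_days))

for t_half in (21, 30):
    print(f"t1/2 = {t_half} d -> ~{accumulation_factor(t_half):.0f}x accumulation")
```

With the 21-30 day half-lives cited, daily dosing accumulates roughly 30-45-fold over a single dose, which is the mechanism behind the concern about very high blood levels.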

      Also this is likely the wrong group of patients to treat with maximum effect, in the first place — early Covid19 is the best arena for HCQ treatment in combination with zinc and either azithromycin or doxycycline...

      It must be stated that no known oral antiviral for outpatients works maximally unless given quite early in the disease, eg oseltamivir (Tamiflu) in influenza or valacyclovir (Valtrex) in herpes.

      Even IV remdesivir - a potent SARS-CoV-2 antiviral - didn't achieve the hoped-for results in hospitalised patients.

      Later-stage Covid19 patients are mostly suffering from the effects of hyperinflammation (cytokine storm), when viral titres are well past their peaks. Hyperinflammation can cause myocarditis, which can certainly predispose to further cardiac toxicity.

      [There are interesting thoughts that the hospitalised patients with cytokine storm/hyperinflammation in reality have a form of ADE (antibody-dependent enhancement of disease), ie a hyperimmune reaction to a second SARS-CoV-2 infection, or the result of a SARS-CoV-2 infection after a previous infection with a closely related virus]

      HCQ was also used in the negative studies without added zinc, which could be a design for failure, as one of the main (but certainly not the only) antiviral actions of HCQ is as a zinc ionophore, ie it helps zinc enter cells much more easily, where it can exert its established antiviral actions.

      HCQ is a known antiinflammatory, and this action may be of some use in the hyperinflammation stage in hospitalised patients, but other more potent immunosuppressives (and a few candidate non-immunosuppressive immunotherapies) could be more effective in this regard.

      Despite this, there are some data to suggest benefit of HCQ even in hospitalised patients.

      For early Covid19 the usually prescribed course is 5 to 7 days of around 400 mg daily HCQ with 100-200 mg zinc, which would not invoke the long-term side effects mentioned so often - and very few toxicities are reported even in long-term therapy for autoimmune disorders. Any short-term arrhythmia concerns can be allayed by ensuring normal blood potassium levels.

      In the several thousands of outpatient Covid19 case reports published to date, when used in early disease, there have been few if any major side effects noted.

      (But in later-stage, seriously ill hospitalised patients, many other drugs are also used, raising the possibility of toxic interactions with HCQ. Organ damage, including myocarditis - heart inflammation - could also particularly predispose hospitalised patients to HCQ toxicity.)

      HCQ is a cheap, easily made, generically available drug - and major manufacturers, like Novartis and Teva, have donated billions of doses worldwide since the advent of Covid19, so shortages, as some fear, for those taking it for malaria (prevention or treatment) or for autoimmune diseases like lupus or rheumatoid arthritis are highly unlikely.

      Below are some pertinent positive references for further reading on the question of HCQ plus zinc plus either doxycycline (my preferred choice because it isn't associated with further small cardiac risk) or azithromycin.

      Note : Most of the successful reports of the use of HCQ plus zinc etc are in early stage, outpatients and not in late stage, hospitalised patients

      The first link is a large database (more than 50 studies) on HCQ in Covid19 treatment.

      The second reference is an important review from a Yale University professor ...

      The third and fourth are on a recent, large, well conducted observational study from Henry Ford Hospital ...

      The fifth is an important outpatient study ...

      https://c19study.com/

      https://academic.oup.com/aj...

      https://www.ijidonline.com/...

      https://www.henryford.com/n...

      https://www.preprints.org/m...

      https://www.ijidonline.com/...

      https://www.preprints.org/m...

      https://aapsonline.org/hcq-...

      https://www.medrxiv.org/con...

      https://www.preprints.org/m...

      https://www.evms.edu/media/...

      https://link.springer.com/a...

      https://pjmedia.com/news-an...

      https://www.medrxiv.org/con...

      https://www.medrxiv.org/con...

      https://www.middleeasteye.n...

      http://www.ijmr.org.in/prep...

      https://aapsonline.org/hydr...

      https://www.indiatoday.in/i...

      https://www.medrxiv.org/con...

    1. On 2020-06-25 11:12:33, user Dude Dujmovic wrote:

      Faulty study. The BCG cohort is older than non-BCG cohort, likely by 2-3 years on average. That is not a small difference when the samples are so big. Amazing how they did not notice that. There is always a good reason why randomized samples are used. Your samples are biased based on age. You need to have samples with about the same average age and about the same standard deviation. And more.

    1. On 2020-07-25 19:31:23, user ???? ??? wrote:

      It reflects the PK/PD pharmacological prediction of efficacy.<br /> The problem with LPV is its complicated PK: strong protein binding (98%), extensive metabolism, and a long list of drug interactions. Therapeutic drug monitoring is mandatory to adjust the dose in the clinical setting. Moreover, extrapolation of the in vitro EC50 to the current dose is not perfect. It was suggested to use the PBA EC90, the baseline protein-binding-adjusted 90% effective concentration. There is debate about the ability of the current regimen to achieve Cmax > PBA EC90 in lung tissue in severe cases.

    1. On 2020-07-26 14:06:30, user Gordon Erlebacher wrote:

      I started to read the paper, but all the equations are missing.<br /> Here is an additional question. The contact matrix Mij measures the average number of contacts between one person in group i and all members of group j. But are these distinct contacts, or contacts with repetition? The different possible choices affect the spread of the virus. Any insight is appreciated.

    1. On 2020-07-28 17:02:07, user Liam Golding wrote:

      Nice research on an important matter. I really appreciate your work.

      It would be nice to know the sample sizes you experimented on to obtain the statistical differences. Can you provide these on request?

      Cheers,

      Liam G

    1. On 2020-06-30 21:10:32, user Stephen Cherniske wrote:

      This is rather paradoxical, in that IL-13 is generally considered to be an anti-inflammatory cytokine, as in IBD. Even more surprising: the observed beneficial effect of DHEA treatment of murine colitis appears to result in part from increased IL-13 expression in colonic epithelial cells. REF: Immunobiology. 2016 Sep;221(9):934-43. doi: 10.1016/j.imbio.2016.05.013.<br /> Dehydroepiandrosterone (DHEA) Restrains Intestinal Inflammation by Rendering Leukocytes Hyporesponsive and Balancing Colitogenic Inflammatory Responses.<br /> Vanessa Beatriz Freitas Alves, Paulo José Basso, et al.

    1. On 2020-08-03 13:54:21, user Charles R. Twardy wrote:

      Forgive me if this is covered in the paper - today I am just skimming abstracts. But another preprint out today shows a mortality risk reduction of 0.7 per 100 kJ/m^2 of ultraviolet (UVA) exposure, in three countries measured at the county level. Is US altitude a proxy for UVA? Vice versa? Could you two combine models to look for residual effects?

    1. On 2020-07-02 15:39:58, user Kamran Kadkhoda wrote:

      The reported prevalence is very high, suggesting a high false-positivity rate. The actual sero-prevalence for that county is estimated to be around 6% as of today (if we assume only 20% of cases are tested for RNA). It would have definitely been much lower back in April. Another reason serology should not be used, given its high rate of false positivity, mostly due to common CoVs like OC43 and HKU1.

    1. On 2020-07-02 18:46:51, user Julio C. Spinelli wrote:

      Having personally architected several clinical trials, the phase II results in young volunteers force me to provide a word of caution to our collective desire to quickly develop a vaccine for COVID-19. <br /> The frequency and severity of many of the AEs described in this preprint for the young (18-55) and healthy population don't bode well for the results of a phase III clinical trial, at least not until the phase II results for the older cohort are known. Furthermore, extrapolating these data to the Latin and Black populations would be pure hubris on our part. Further phase II data are required before we move into phase III trials.<br /> Dr. Julio C. Spinelli

    1. On 2020-07-02 20:46:36, user C'est la même wrote:

      The claim of 99.3% specificity seems very high compared to other antibody tests when tested with large population samples.

      But that aside, some readers seem to be inappropriately concluding that undersampling in specific regions during that period can be generalised to conclude that the true cumulative incidence is ten times the total number of confirmed cases.

      This is unfounded for two reasons. The first is that regions with very high case numbers (such as NYC) were temporarily overwhelmed in terms of testing capacity and had correspondingly very high test-positivity rates. However, over time the testing caught up with demand, and with expanded testing the test-positivity rates dropped by the expected order of magnitude and likely "caught up" for at least some of the participants who were missed.<br /> The second reason is that the sample in the study is not a true random population-based sample, but a convenience sample, which is also biased towards higher test-positivity rates.

      Thus while I don't disagree with the conclusion of the authors, I urge strong caution among readers who are tempted to conclude that true case numbers are an order of magnitude higher than officially reported.

    1. On 2020-07-03 20:21:43, user Marm Kilpatrick wrote:

      Interesting paper. <br /> Could you clarify if the incidence values in Fig 1,2 and throughout are:<br /> Incidence = Cases in age group X/total population<br /> OR<br /> Incidence = Cases in age group X/population of age group X<br /> Since the age groups represent different fractions of the total populations this would change the intercept of the different incidence values/curves.<br /> Thanks!<br /> marm

    1. On 2020-07-06 18:41:52, user Fatnot wrote:

      It is unlikely that zinc supplementation alone would work... a zinc ionophore is also required. We also have the report from Dr Zelensky, in Rockland County, NY, who treated hundreds with<br /> a combo including zinc and HCQ, with the result that few required hospitalization. The report is anecdotal... but another term for a set of anecdotes is, of course, DATA. And with data and analysis, one can draw conclusions and confidence intervals.

    1. On 2020-08-12 12:37:37, user Marc Imbert wrote:

      It is worth noting that this study has more comorbidity and more symptoms in the group treated with HCQ and AZT. All untreated patients had mild disease, compared with only about 63% in the treated group. Furthermore, one should apply healthy scientific scepticism regarding hasty conclusions based on studies at the late stage of the disease, particularly given what is now known about the evolution of the disease.

    1. On 2020-07-13 11:08:47, user Andrew D'Silva wrote:

      In any infection, IgM responses convert to IgG responses, which fall over time and rise when there is a secondary antigen exposure. Why do these findings suggest that there is loss of immunity with declining neutralising antibody levels? Surely the questions are: what happens after secondary antigen exposure? Do the neutralising antibody levels rise again? Do they protect from developing the same clinical disease again? Do they affect severity of disease after second exposure?

    1. On 2020-08-25 09:01:02, user Tjabbe wrote:

      Evidence for what, that it doesn't work for late-stage covid in hospitalised patients? Is that even news? How come, at this stage in the pandemic, we are still publishing reports that claim medication to be ineffective "for treating covid19" when in fact it was only tested in patients with severe covid19 who were already in the hospital? We all know patients will not be sent to a hospital in the Netherlands for covid unless they have progressed pretty far. <br /> The report describes hcq being used on patients when they were deteriorating in several of the hospitals, affecting mortality, and media outlets conveniently leave out this part of the puzzle.

      If you want to curb covid, or if you want to write off medication as being useless "for covid" , start doing trials on early outpatient treatment.

    1. On 2020-08-30 11:33:10, user Martijn Weterings wrote:

      One problem with those S(E)IR compartmental models is that they always assume/pretend that a virus spreads homogeneously among a well-mixed population. According to such models, the chance that someone in a small village in the South infects a given person is the same for everyone: the same for somebody in the North as for somebody in their immediate family or close neighbourhood.

      Such compartments are obviously not realistic for modeling an entire country. More suitable are networked S(E)IR or spatial S(E)IR models. In such models, the virus spreads more like an ink blot.

      Due to the local saturation, growth rates are already decreasing early on. Models that do not incorporate local saturation will 'compensate' (in order to get the same early deflection) by either reducing R0, or the (effective) population, or the reporting factor (upscaling the number of infected). If you try to fit a simple compartment SIR model to real data, then you will get unrealistic epidemiological parameters.
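
      A minimal homogeneous-mixing SIR simulation can illustrate this point (my own sketch, not the paper's model; R0, recovery time, and population size are arbitrary assumed values):

```python
# Sketch: a single well-mixed SIR compartment, forward-Euler integration.
def simulate_sir(r0=2.5, recovery_days=7.0, n=1_000_000, i0=10, days=365, dt=0.1):
    gamma = 1.0 / recovery_days            # recovery rate
    beta = r0 * gamma                      # transmission rate implied by R0
    s, i, r = float(n - i0), float(i0), 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt    # homogeneous mixing: anyone can infect anyone
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s / n, i / n, r / n             # final susceptible/infected/recovered fractions

s_frac, i_frac, r_frac = simulate_sir()
herd_threshold = 1 - 1 / 2.5               # 60% for R0 = 2.5
print(f"final attack rate {r_frac:.2f} vs herd-immunity threshold {herd_threshold:.2f}")
```

      With everyone mixing globally, the epidemic only decelerates once the herd-immunity fraction of the whole population is infected (and then overshoots it), so fitting such a model to data that deflects earlier forces R0, the effective population, or the reporting factor to unrealistic values.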

      What they are doing in this article, dividing the population into layers with different rates of infection, is effectively shrinking the population that is 'reached' by the virus.

      So this effectively makes the population smaller, but the question is whether it is the right way to shrink the population? Instead of a parameter in a mechanistic model, it might better be regarded as a parameter in an empirical model. It is an extra variable to ensure that the unsuitable simple SEIR model corresponds somewhat better with the measurements.

      In reality, there are several effects that cause the observed epidemiological curves to deviate from the simple models (Besides heterogeneity, the use of local distribution in spatial or networked S(E)IR models, instead of global homogeneous compartments, is another important one).

      By including only a single effect in the fitting, you get that all other effects are absorbed by that one effect. The result is an unrealistic estimate of the epidemiological parameters, which will not be suitable for extrapolation (for example, calculating the 'herd immunity' percentage).

      It is to be expected that this model, with only the heterogeneity incorporated, will likely underestimate the percentage to reach herd immunity. This is because it is overestimating the effect to compensate for the lack of other non-incorporated effects (and spatial models will be able to model the same deflection of the curves, but with less reduction of the herd immunity).


      The above is a severe systematic problem, which will result in a bias towards smaller herd-immunity percentages.

      In addition: The fit with the curve is strongly determined by an interaction of the population size and the factor between the reported infections and actual infections (in a simple S(E)IR model, the two have the same effect). Such correlation between the two parameters will cause great inaccuracy.

      And these are considerations that do not yet mention the problems with measurements of the epidemiological curve. For instance, the inaccuracies in reporting are not easily solved with a single (constant) reporting fraction. In order to estimate epidemiological parameters we need more direct experimental data (e.g. detailed information about contact tracing). From those we can deduce more directly the variations in infection rates and estimate the potential impact on herd immunity. Just fitting a model to the curve is a bad idea.

    1. On 2020-09-23 07:52:50, user Subhajit Biswas wrote:

      Pleased to see other scientists supporting, with further evidence, the trend we had observed and reported as early as April 2020.

      Based on the non-overlap of dengue and COVID-19 global severity maps and evidence of SARS-CoV-2 serological cross-reactions with dengue, we proposed that immunization of susceptible populations in Europe, North America and Asia (China, Iran) with available live-attenuated dengue vaccines may cue the anti-viral immune response to thwart COVID-19.

      https://www.preprints.org/m...

      Our publications in this area to support our proposition:<br /> 1) COVID-19 Virus Infection and Transmission are Observably Less in Highly Dengue-Endemic Countries: Is Pre-Exposure to Dengue Virus Protective Against COVID-19 Severity and Mortality? Will the Reverse Scenario Be True? Clinical and Experimental Investigations, Volume 1(2): 2-5.<br /> https://www.sciencereposito...

      2) Nath, H., Mallick, A., Roy, S., Sukla, S., & Biswas, S. (2020, June 19). Computational modelling predicts that Dengue virus antibodies can bind to SARS-CoV-2 receptor binding sites: Is pre-exposure to dengue virus protective against COVID-19 severity? https://doi.org/10.31219/os...

      3) This one in medRxiv!

      Now, other scientists are observing the same trend in Brazil! Exciting!

      See the recent publication and news coverage below:

      1) https://www.medrxiv.org/content/10....

      2) Study suggests dengue may provide some immunity against COVID-19.<br /> https://timesofindia.indiat...

      Amazing! Nature has its own ways of controlling parasite aggression! Antigenic correlation between a flavivirus and a coronavirus was unprecedented.

      Existing and licensed dengue vaccines could be tested in SARS CoV2 animal models and tried in dengue non-endemic countries.

      Use in dengue-endemic countries may be problematic as such vaccination can elicit antibody-dependent enhancement of subsequent dengue infections.

    1. On 2020-08-09 21:12:25, user Cynac wrote:

      The results appear to show a significant relationship between menopause and diagnosis of Covid-19 by your algorithm. There is no significant association with a positive Covid test ("proven" Covid) or severe disease.<br /> The significant symptom associations do include fever, but not cough or even anosmia, whereas "skipping meals" is a highly significant association.<br /> This raises the major possibility that it is your algorithm for diagnosing the disease that best relates to menopause, perhaps through some quirky inclusions.<br /> There must also be some difficulties in allowing for age etc., when the influences of these factors themselves are not precisely defined.<br /> This study is clearly worthwhile, and of interest. But the way the abstract will be viewed in the media might be an over-simplification.

    1. On 2020-08-13 00:49:59, user Jesse Baker wrote:

      Regarding a passage in this MedRxiv post (July 21, paragraph 3 with citation to reference #15), “Additionally, recent clusters of COVID-19 cases linked to a…restaurant in Wuhan are suggestive of airborne transmission.”

      Although the index case having lunch on Jan. 24 was from Wuhan, the restaurant was in Guangzhou. Indeed, its location far from Wuhan so early in the spread of Covid increased Guangzhou CDC’s confidence that the other patrons were infected by the index case and no one else.

    1. On 2020-08-13 07:57:05, user Zeit wrote:

      Very interesting manuscript. I think it may be wise to remove isotopes from your data as it seems clear that you have associations of monoisotopic peaks and their isotopic peaks with phenotypes. If you correlate the retention times of ions most correlated with each other by area count/signal, it should reveal that they are non-independent ions.

    1. On 2020-08-13 20:06:24, user Rhyothemis wrote:

      Could the low number of deaths in Kenya be at least partly attributable to low per capita protein consumption? It seems as though many countries with low per capita protein consumption rates are reporting relatively low per capita COVID death rates. Mechanistically, such an association (if it exists) could be related to lower baseline mTOR activation.

    1. On 2020-08-15 14:01:48, user Dom_Pedulla wrote:

      Joao, not only has Dude made some very good points, but in observational trials like this, everything depends on the nitty-gritty data. I notice the huge qualifier "recent" in the results sentence, noting it carefully since in many studies these kinds of adjectives disclose or hint at certain erroneous tendencies or conclusions, even in "meticulous peer-reviewed studies". I am requesting the paper to analyze for myself, and suspect strongly that what it may show is a "benefit" for only the very recently vaccinated, and that long after vaccination it either ends up being a net liability as regards COVID death risk, or that the timing isn't possible to discern because the investigators avoided studying all but the recently vaccinated.

      We'll see.

    1. On 2020-08-15 23:30:43, user Nan wrote:

      To those who tweeted and regarded this as evidence that masks don't work,

      This article does NOT imply masks don't work. If one wishes to draw such a conclusion, a direct comparison is required of the disease risk when wearing masks versus not. From both the fifth and sixth comparisons in the figure and a related article (https://www.bmj.com/content... "https://www.bmj.com/content/369/bmj.m1442)"), masks are better than not wearing one at all! This article only says physical distancing is very important for cloth and surgical masks. It means that besides wearing normal masks, I should maintain strict physical distancing. This agrees with the common sense that the more protections (e.g., masks, distancing, etc.) we have, the safer we are.

      Also, is physical distancing always easy to follow? The answer is no. You cannot guarantee that you are always at a safe distance from other people in the street. In contrast, masks are a lot more perceptible. They reduce exposure to contaminated air. Masks are also a sign of caution, a sign that everyone should protect their community by reducing transmission.

    1. On 2020-08-16 19:15:38, user Skadu SkaduWee wrote:

      One of the fundamental assumptions of the paper is the use of a previously positive saliva sample to prepare the serial dilutions used for the limit-of-detection studies. However, the authors do not state how this initial copies/ul value was arrived at, or by whom.

    2. On 2020-08-17 09:34:02, user buddinggenetics wrote:

      The principal author has stated in media that the cost per test is $10, however in the text the cost is listed as $1.29-4.37/sample. Pricing should be consistently stated to avoid misleading the public and/or scientific community. Also, the text states that the price per sample is low, which is a relative term, and gives no price estimates of other established tests for comparison.

      Multiplexing the samples is a fundamental improvement of testing, however there is insufficient evidence to show eliminating the N2 primer set is justified. There needs to be an analysis of how many inconclusive test results (N1 positive and N2 negative/ N1 negative and N2 positive) would now become positive or negative tests as a result of eliminating the N2 primer set. Also, in Supp Fig 2, the data appear irregular with a bimodal distribution when a Gaussian distribution would be expected. The authors do not discuss the reason for this in the text. Furthermore, the failure of the N2, E, and ORF1 sets may be due to the HEX fluorophore. Would they have worked using a different fluorophore? Would the authors have eliminated N1 if they had by chance used HEX on N1?

      The Source Data files are not posted.

    1. On 2020-08-18 18:07:44, user Eric Vallabh Minikel wrote:

      Excellent, important study, with carefully considered conclusions from the authors. Some readers may assume that if plasma NfL can become elevated 2y before onset, then NfL could be used as a prevention trial entry criterion, a primary endpoint, or a basis for deciding which patients are eligible for drug access/reimbursement. Importantly, the authors of this paper do not assert that their data support those applications. I believe there are three key considerations here that should be factored into any clinical application of plasma NfL quantification in pre-symptomatic genetic prion disease: genotype (rapid vs. slow PRNP mutations), age (affects reference range for NfL), and cross-sectional (as opposed to longitudinal) number of people in a prodromal state at any given time. I have written a detailed blog post here: http://www.cureffi.org/2020...

    1. On 2020-08-18 20:34:55, user Lauren Call wrote:

      I found this study through a link in a CNN article, along with the quote: “Gommerman said since scientists have not seen a record of re-infection, even with as widespread as the pandemic is, that strongly suggests the body's immune system is working well against this threat, and re-infection is less likely.” I am surprised they haven’t “seen” a re-infection, because I’ve had 2 positive COVID-19 tests, separated by 3 months, with a negative antibody test in between. Both times I had classic coronavirus symptoms, but they were distinctly different cases.

    1. On 2020-08-19 11:00:05, user AbsurdIdea wrote:

      Have I understood this right: " Vitamin D dose was not significantly associated with testing positive for COVID-19."? So taking vitamin D does NOT reduce the probability of testing positive for CoViD-19...Then, why take it against CoViD-19? For the rest - correlation or causation? Healthier people are likely to have a higher probability of sufficient vitamin D, conversely, people in poor health for any reason are likelier to have low vitamin D. Also there is a difference between becoming infected i.e. the virus actually entering into a person and propagating and the degree of illness and complication once being infected. This study does not appear to address these factors. Finally the phrase "499 had a vitamin D level in the year before testing" does not make sense. All people have some level of vitamin D.

    1. On 2020-08-19 17:59:56, user petsRawesome1 . wrote:

      "Of the 43 patients randomized to ConvP 6 (14%) had died while 11 of the 43 (26%) <br /> control patients had died."

      That sounds like the study showed promise on the key metric, mortality, it just did not have enough data when it was stopped. It would be good to be very clear about the reasons for discontinuing the study, as the New York Times of Aug 19, 2020 is quoting this paper as "Last month, one such trial in the Netherlands was stopped when researchers realized that patients given plasma showed no difference in mortality"
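
      As a back-of-envelope check on that point, a two-sided Fisher exact test on the reported counts (6/43 vs 11/43 deaths) shows the difference is far from conventional significance at this sample size. This is my own stdlib-Python sketch, not the trial's pre-specified analysis:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p: sum of probabilities of all 2x2 tables with
    the same margins that are no more likely than the observed table."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def p_table(x):  # hypergeometric probability of x events in row 1
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    return sum(p_table(x)
               for x in range(max(0, col1 - row2), min(col1, row1) + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Deaths/survivors: plasma arm 6/37, control arm 11/32
p = fisher_exact_p(6, 37, 11, 32)
print(f"two-sided p = {p:.3f}")  # not significant at the 0.05 level
```

      So the observed mortality difference is compatible with chance at n = 86, which supports reading the early stop as "insufficient data" rather than "no effect".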

    1. On 2020-08-20 01:45:02, user giorgio capitani wrote:

      How can it be proved without any doubt that the virus present in the aerosol actually infects a person? It can be present but harmless. Where is the evidence of the actual transmission of the infection? The presence in the aerosol is not evidence of transmission of the virus; that is another matter entirely. Or somebody can be infected and others not. How can you tell one thing from the other? They are two different things: the presence of the virus in the aerosol, and the actual transmission of the virus.

    2. On 2020-08-21 13:38:24, user Susan Levenstein wrote:

      If I understand the paper correctly, its most striking result is the isolation of Patient 1's virus from the VIVAS air sampler located 4.8 m away. But according to the Figure, Patient 1 had to walk right past that air sampler, closer than 1 m, every time he went to the bathroom. Couldn't that be a simpler explanation for how it picked up his virus?

    3. On 2020-08-23 16:11:32, user Ang wrote:

      Hello there,<br /> below is a question for someone with the right competence.

      True, the approach of this work is great; they may turn out to be right or wrong, we'll see, starting from the review outcomes. However, a common person would ask: "Why can't we do a direct and conclusive experiment about transmissibility through aerosol?" A direct experiment would be to put a never-infected person in the same room with a SARS-CoV-2-ill person, without the physical possibility of exchanging any particle between them except air/aerosol. 100 people would give good statistics in terms of age, gender, time of exposure and other characteristics of the volunteers. Is this possible? How can it be that in the entire world we cannot find 100 volunteers for such an experiment? Why has this not been done yet?

    1. On 2020-08-24 04:45:43, user Bill Pilacinski wrote:

      Now it will be important to identify those in the population who are immune so that the early limited supply of vaccine can be used for those susceptible individuals of high priority as we attempt to reach herd immunity.

    1. On 2020-08-24 06:26:34, user Stan Himes wrote:

      For COVID-19 you should include co-morbidity data; without this key information (which may be contained in the full article) the data presented are worthless.

    1. On 2020-08-24 15:34:57, user Eva Lendaro wrote:

      Hello,<br /> I have a question regarding what the vector beta accounts for. It supposedly includes policy dummies for businesses, restaurants, movie theaters, and gyms being allowed to reopen, but in practice it is not very clear how these are accounted for. Is the capacity at which they were allowed to reopen considered? Are the categories considered separately?

      I would also like to point out this systematic review on this exact topic, published on May 26th, 2020 in the BMJ, that is nowhere mentioned in this article but would be rather important to include for completeness.

      https://www.bmj.com/content...

      Best Regards,<br /> Eva

    1. On 2020-08-25 21:29:56, user Chris Raberts wrote:

      I am not sure how the authors can use a study that speaks of N95 and 12-16 layered cloth masks and come to a conclusion like this. (reference 31).

      In a recent comment (https://www.thelancet.com/j... "https://www.thelancet.com/journals/lanres/article/PIIS2213-2600(20)30352-0/fulltext)") the same authors speak of a range of 6% to 80% of mask benefits regarding reduction in transmission. I wonder what percentage was used in this paper, but given the results I'd assume it is on the higher end. Also that paper does not speak to schools, mostly to health care settings.

      More transparency would be great but overall this paper looks like agenda and not science :/.

    1. On 2020-08-27 11:22:55, user pto wrote:

      I thought the index cases of that conference were all local residents of the Boston area. If so, that certainly wouldn't rule out a previous introduction a few weeks earlier. Say when international university students returned to Boston 3 to 4 weeks earlier.

    1. On 2020-08-27 13:31:15, user Joe Psotka wrote:

      Using data from Florida creates misleading expectations because Florida's decrease in March and April was largely from Snowbirds' and part time residents' departure from the State. Some people estimate that one-third of Florida's winter population leaves in the Spring to avoid the summer heat.

    1. On 2020-08-27 23:26:12, user Vinci P, MD wrote:

      There might be other explanations for the better prognosis in post-menopausal women taking oestradiol: they were probably healthier than women not taking oestradiol, because HRT improves health. <br /> In addition, I cannot understand why all post-menopausal women have a better prognosis than men, since their estrogen levels are similar to those of men. Maybe it is the absence of testosterone, and not the presence of oestradiol, which makes the difference.<br /> Could you comment on this, please?

    1. On 2020-08-30 15:05:05, user Henry Johnson wrote:

      Does anyone know whether similar experiments have been done with woodwind instruments? I'm particularly interested in the clarinet. The instruments work differently. The sound comes out of a variety of places...

    1. On 2020-09-04 19:44:32, user Art Framer wrote:

      Excuse my ignorance but it seems that the tests for covid 19 are looking for the virus itself. Wouldn't the tests have a higher rate of success if they looked for signs of the body's reaction to the virus?

    1. On 2020-09-07 16:03:04, user Joe B wrote:

      We don't know how long ago the vitamin D levels were obtained in these patients. This is especially true in the COVID patients, because we have no idea if they truly were "deficient" at the time of their infection. Additionally, you never tell us in the methods that you were going to examine supplementation, and how you were going to do that (and assure adherence). Can vitamin supplements not be purchased over the counter in the countries involved in this study? Finally, I assume you categorized people by "sex" and not "gender", as sex is the term used for male/female DNA-based differences.

    1. On 2020-09-09 19:12:11, user Michael Bishop wrote:

      I don't believe the authors' data, which would imply that SARSCOV2 was circulating with little increase or decrease in Dec 2019 - Feb 2020 until suddenly taking off in late Feb early March.

    1. On 2020-09-10 16:51:51, user Thomas Waterfield wrote:

      Thanks Sunil. It was great to chat the other day.

      We have produced a protocol that is currently with BMJ Open. The data presented here relates to the first clinic appointments (16th April to 3rd of July) for all participants. The symptom data was reported using RedCap data capture with retrospective reporting of illness episodes prior to the attendance from the beginning of the pandemic in February. In all instances the symptoms were reported without the participant knowing their antibody status. Data were entered by trained members of the research team.

    1. On 2020-09-13 01:19:25, user mzbaz wrote:

      There is an unfortunate typo in the horizontal axis unit label of Fig 3b, which should be "minutes" not "hours", consistent with the "15 Min Rule" vertical line, as well as the discussion in the text.

    1. On 2020-09-18 16:27:11, user kdrl nakle wrote:

      These types of papers that masquerade as science are nothing more than speculations. Even IHME forecasts from this spring are laughable now. This is in the same vein.

    1. On 2020-09-24 10:16:48, user Camila Hobi wrote:

      I would like to congratulate the authors on this paper! Wonderful idea! The hypothesis that children can be protective rather than harmful is very plausible! Unfortunately, since the beginning of the pandemic people have been saying the opposite, based on misbeliefs and not on science. It's very important to test this hypothesis in other countries. Reading this paper, I asked myself, "why keep schools closed?"

    1. On 2020-09-27 03:43:34, user LB wrote:

      It is well known that magnesium absorption is an issue with elevated gastric pH from PPIs. <br /> Please evaluate the possibility that the individuals who had a history of taking PPIs might have had magnesium deficiency, which altered their immune response to SARS-CoV-2.<br /> - Linda Benskin, PhD, RN

    1. On 2020-10-15 22:40:24, user Marm Kilpatrick wrote:

      Dear Dr. van Beek and co-authors,<br /> Thank for your this important work!<br /> In your Table 1 you appear to be grouping results for multiple assays together:<br /> Panbio™ COVID-19 Ag rapid test (Abbott), and Standard Q COVID-19 Ag (SD Biosensor);<br /> and COVID-19 Ag Respi-Strip (Coris BioConcept), and GenBody COVID-19 Ag (GenBody Inc)<br /> I *think* you did this because they had similar LODs but it'd be more informative if you could show results for each assay independently. <br /> It would also help to know the sample sizes for each of the assays in each group of patients.<br /> Finally, specificity is a potential issue with these rapid antigen assays. Did you test samples that were negative by PCR to determine this (acknowledging that PCR could miss viral RNA, especially if not done at the same time)?<br /> thank you,<br /> marm

    1. On 2020-10-22 11:33:50, user Paul Peerbooms wrote:

      It would be interesting to see the protective effect of the flu vaccination when only staff with patient contact are considered.

    1. On 2022-10-24 17:42:56, user CDSL JHSPH wrote:

      Dear Mekkes et. al.,

      Thank you for sharing your work with us! Creating models to predict neuropathological diagnosis based on temporal signs and symptoms is very significant research, and I'm looking forward to seeing where this heads in the future! I enjoyed reading this paper, especially since it introduced me to techniques and concepts I was previously unfamiliar with. That being said, while reading I did notice some parts that I think could be given further clarity in order to make this paper more accessible to those not within the immediate field. There are a lot of abbreviated disorders mentioned, and I noticed that some were explained in the introduction; however, I could not find the matching terms for disorders abbreviated in later sections. I think it would be a great benefit if there were a word key with the disorders and their corresponding abbreviations, especially since different disorders may be represented by the same acronyms, so googling them may not provide the reader with the correct one. Also, I was wondering if you plan to do another study focusing on optimizing these models to diagnose mental illnesses and psychiatric conditions in a separate paper? I understand the main focus on brain disorders and neurodegenerative diseases, since those can be linked to prominent neuropathological changes, but when reading the abstract I was given the impression that mental illnesses would be focused on to a larger degree than I noticed in the paper. I would love to see any future steps you take with this, especially since the alterations in cognition and behavior associated with mental illnesses can be observed in live patients, and don't necessarily have to be inspected retroactively, as with brain donors.

    2. On 2022-10-24 18:20:28, user Jordan Ross wrote:

      This study aimed to understand the complexity of diagnosing brain disorders in society today. The author utilizes data in statistical analyses to identify this crossover and give rise to further research in the field. This journal article emphasizes the use of a computational pipeline to establish a series of clinical synopses regarding a series of brain disorders from donors that were previously diagnosed. Donors from the NBB underwent a series of cross-disorder analyses to identify signs and symptoms associated with psychiatric diagnoses. Overall, the article had great analyses and synthesis of ideas pertaining to the mechanisms of symptomatology and how it differentiates across brain disorder profiles. One thing to note is the organization of this article: maybe incorporate the main figures within the results section, with titles and keys for each, to limit confusion for the reader. Additionally, in the methods section (2.7.3 Observational profiles of the signs and symptoms), the author referred to a figure but did not explicitly state which one; it is instead written as "(Figure number?)". This should be corrected. Lastly, explicitly stating the need for further research is a great way to highlight the need for experimental study to the reader. With this, it would be helpful to identify your targeted audience for this article (i.e. professional, student, etc.). If you have a targeted audience with a scientific background, you will not need to go into further depth to define terminology and specific neurological pathways in this study. Overall, job well done and a very interesting study!

    1. On 2022-11-08 15:26:46, user W_R_1 wrote:

      Hi,

      Thanks for the very interesting and useful paper!

      I wondered if the stated z-value calculation (p26, next-to-last line) has a typo of a "-" sign rather than a "+" in the denominator?

      I was also curious as to the properties of this test statistic in general, given that the two beta estimates aren't necessarily independent (i.e. when calculating the variance of the difference between the two beta estimates, their covariance can't necessarily be assumed to be 0?)?
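      To make the covariance question concrete, here is a minimal sketch (with made-up numbers, not values from the paper) of the z-statistic for the difference of two coefficient estimates; with independent estimates the denominator is the familiar sqrt(se1² + se2²), and a nonzero covariance adds a −2·cov term:

```python
# Hedged sketch: z-test for the difference of two regression coefficients.
# The estimates and standard errors below are made-up illustrative numbers.
import math

def z_diff(b1, b2, se1, se2, cov12=0.0):
    """z = (b1 - b2) / sqrt(se1^2 + se2^2 - 2*cov12).

    With independent estimates (cov12 = 0) the denominator reduces to
    sqrt(se1^2 + se2^2); a positive covariance shrinks the variance of
    the difference and enlarges |z|.
    """
    var = se1**2 + se2**2 - 2.0 * cov12
    return (b1 - b2) / math.sqrt(var)

# Independent case vs. positively correlated case
z_indep = z_diff(0.8, 0.5, 0.1, 0.1)              # cov assumed 0
z_corr = z_diff(0.8, 0.5, 0.1, 0.1, cov12=0.004)  # cov > 0
print(round(z_indep, 3), round(z_corr, 3))  # → 2.121 2.739
```

As the example shows, ignoring a positive covariance between the two estimates makes the test conservative, which is exactly the concern raised above.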

    1. On 2022-11-26 06:41:30, user Soichiro Obara wrote:

      Dear Prof. Schindler,

      First, I (Soichiro Obara, the PI of this project) apologize for being late in replying to your great comments.

      Second, our project has been modelled on the European APRICOT, as you commented. We would sincerely appreciate your suggestions from the viewpoint of an author of the APRICOT.

      On this matter, before launching this project, we got in contact with the principal investigator of the APRICOT, Prof. Walid Habre, who has been giving many suggestions and comments to us.

      As you might have concerns regarding data acquisition, he also gave comments as to how tough the data acquisition and cleaning were in the APRICOT.<br /> Hence, this time, we assume that we may need to change the case report form (actually modelled on the CRF of the APRICOT) after a pilot study, which hopefully will be conducted in February or March 2023.<br /> (I have heard from Prof. Habre that a pilot study was conducted in three centers to examine the feasibility of the protocol and the CRF in the APRICOT.)

      If possible, I would appreciate your "specific and direct" suggestions as to which data items in the APRICOT might be difficult or be unnecessary to collect.

      Again, we would sincerely appreciate any great suggestions and comments you may kindly give in the future.

      I would be very pleased if you kindly got in touch with me on behalf of the Asian Society of Paediatric Anaesthesiologists research committee.

      Soichiro Obara (Japan)<br /> e-mail address: soichoba1975@gmail.com

    1. On 2022-12-09 13:17:21, user Maja wrote:

      I applaud the authors for their meaningful work on reporting COVID-19 trials. The results are fascinating. Only 19% of COVID-19 clinical trials were published within three months of completion, which shows how much work has to be done regarding research transparency, publishing trial results, and responsibility while conducting clinical trials. Timely publication of results and avoiding research waste in clinical trials should be a priority, especially during global public health emergencies, such as the COVID-19 pandemic.

    1. On 2022-12-27 15:27:14, user nobiggie wrote:

      The most interesting piece of information here seems to be that the more shots you get, the more likely you are to get sick, but somehow it goes unmentioned.

    2. On 2023-01-06 12:54:34, user Mike Verosole wrote:

      A more recent, peer-reviewed publication involving a much larger population set shows that vaccinated people were indeed more likely to be infected than previously infected individuals; however, the vaccinated group was about 25-35% less likely to be hospitalized or experience death. Maybe this paper will be reviewed in light of this new publication.

    3. On 2023-02-19 17:11:56, user citrate reiterator wrote:

      The simplest explanation for the vaccine dose trend is that it’s not broken out by previous infection status and date. Other data in the paper confirms that recent past infection is very protective against the omicron subvariants. The more shots you have, the more likely you probably were to be omicron-naive at the start of the study period. As Luis Cruz points out below, even past research from this same group has not previously found a dose-response relationship of this type. Also, when they actually fit a model that takes these confounders into account (the proportional hazards model), they find that there is a modest preventative effect of the bivalent booster — which is not consistent with a dose-dependent increase of risk following vaccination.

    1. On 2023-01-16 01:47:52, user Brian Piper wrote:

      This is a timely review on an important topic!

      In evaluating the safety of the relaxation of take-homes, it might be helpful to consider that “The number of clients receiving methadone increased from 306,440 in 2011 to 408,550 in 2019 and then decreased to 311,531 in 2020” [1]. This reduction from 2019 to 2020 of 97,019 patients is a 23.7% decline!<br /> The increase in the number of overdoses involving methadone may only appear equivalent to the increase in all opioid overdoses when one does not take this factor (i.e. using the population size and not the number of methadone patients as the more appropriate denominator) into account.
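      As a toy illustration of the denominator point (only the patient totals are the N-SSATS figures quoted above; the overdose counts are made up purely for illustration):

```python
# Illustrative-only sketch of why the denominator matters. The patient
# counts are the N-SSATS figures cited in the comment; the overdose
# counts are hypothetical.
patients_2019 = 408_550
patients_2020 = 311_531

overdoses_2019 = 3_000   # hypothetical
overdoses_2020 = 3_600   # hypothetical: a 20% rise in raw counts

rate_2019 = overdoses_2019 / patients_2019
rate_2020 = overdoses_2020 / patients_2020

# Raw counts rose 20%, but per-patient risk rose far more, because the
# methadone patient population shrank 23.7% over the same period.
increase_counts = overdoses_2020 / overdoses_2019 - 1
increase_rate = rate_2020 / rate_2019 - 1
print(f"counts: +{increase_counts:.0%}, per-patient rate: +{increase_rate:.0%}")
# → counts: +20%, per-patient rate: +57%
```

The same absolute increase in overdoses therefore looks much larger once the shrinking number of methadone patients, rather than the general population, is used as the denominator.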

      The authors are also encouraged to revisit the important policy research in England and Scotland which identified sizable declines in methadone overdoses following implementation of supervised administration [2].

      Citations

      1. National Survey of Substance Abuse Treatment Services (N-SSATS): 2020 Data on Substance Abuse Treatment Facilities. Table 3.2.

      2. Strang et al. Impact of supervision of methadone consumption on deaths related to methadone overdose (1993-2008): analyses using OD4 index in England and Scotland. BMJ 2010;341:c4851.

    1. On 2023-02-20 07:41:20, user Martin Schecklmann wrote:

      I miss in the discussion the meaning of the relevant dose of TMS with respect to the e-field. We could demonstrate that the F3-based method is similar to the neuronavigation-based method with respect to clinical efficacy even though the induced e-field was lower for the F3 method (doi: 10.1016/j.brs.2021.01.013). In addition, your sample size is very small, and the correlation between clinical efficacy and e-field may be biased by outliers (figure 3).

    1. On 2023-02-22 16:16:10, user Robert Clark wrote:

      Major flaw in your analysis. <br /> You noted the report by Peterson et al included cardiac arrest cases where they survived:<br /> “This study identified 173 confirmed SCD cases (and 158 SCA cases with survival), so on average 43 SCDs per year.”

      Since there has been a great increase in awareness of cardiac arrests and arrhythmias in athletes, there has been a corresponding increase in AEDs (automatic defibrillators) available and in those trained in CPR.

      As a result, many of those cases, even in just the last couple of years, survived who would have died previously. So you should have also counted the number of cases who had cardiac issues but survived in the current, pandemic era.

      Additionally, Maron et al described causes other than cardiac arrest as the cause of the sudden deaths. Since the Goodsciencing report considers several kinds of serious life-threatening illnesses arising in athletes, you should also have done a separate count of those cases in the Goodsciencing report, again counting both those who died and those who survived.

      Robert Clark

    1. On 2023-03-02 18:33:37, user Daniel Park wrote:

      Fascinating. Lines up well with other evidence including the Lewnard et al. study showing interactions b/w pneumococcal carriage and SARS-CoV-2. We also saw similar patterns with viral load (+ severity) and pneumococcal carriage with human endemic CoVs: https://journals.lww.com/pi...

    1. On 2023-05-16 21:45:14, user Maria Log wrote:

      It is really very strange and disappointing that you consider the use of a private e-mail address as an indicator of fake articles. Since my years as a PhD student I have always used, and I use now and will continue to use, my personal email in all my articles. There are many reasons for that, but the most crucial one is that I (as well as any other researcher) can change my affiliation, and then my institutional e-mail will be inactivated, and those who are interested in my publication will not be able to get an answer to their questions. So requiring the use of an email linked to the organization is unfair and irresponsible. Here is a link to my google scholar profile: https://scholar.google.com/... - good luck with finding any fake articles!

    1. On 2023-06-07 16:45:42, user Nathan Pearson wrote:

      In this study (of American patients to whom bivalents were available only as booster), all bivalent recipients by definition got 3+ total mRNA vaccine doses, while the current preprint text's control group got 2+.

      As such, to compare equal dose counts (if not timing relative to prior waves etc.), can the authors please add a sub-analysis of peer bivalent (original or BA.4/5) vs. 3+ dose (not 2+) monovalent recipients?

      Not doing so inherently confounds any additional benefit of bivalent vs. monovalent formulation with a group difference in total doses per participant.

      Thanks

    2. On 2023-06-07 17:10:10, user Nathan Pearson wrote:

      Also: in adding such needed apples-to-apples comparison of bivalent vs. monovalent peer boostees (i.e., with 3+ doses in each otherwise background-similar group), can the authors take care to control for time since last dose?

      I.e., if bivalent boostees got boosted at most k months before data freeze (so were tracked for severe COVID for <=k months after last dose), likewise tally cases of severe COVID in monovalent boostees <=k months (but not longer) after their last dose.

      Thanks.

    1. On 2023-06-22 16:56:53, user Benjamin Shuman wrote:

      This is interesting work. The descriptions of methodological variation (task, muscles measured, amplitude normalization method) certainly impact how the studies compare to one another. However, there is no discussion of filter parameters and their impact on the interpretation of synergy metrics. With more aggressive filtering, the resultant processed EMG signal is less variable, which also has a direct impact on the number of synergies extracted or tVAF1/DMC. The Collimore article is noted for having the fewest synergies identified but also has the most aggressive LP filtering (4 Hz). There may be additional trends in the article findings when looked at through a filtering lens, and I would encourage the authors to consider this. Finally, please note that DMC is a linear transformation of tVAF1 (Steele 2015). As such, the trends in DMC are directly comparable to tVAF1.<br /> Thanks,

    1. On 2023-07-10 13:53:43, user Carlos Menck wrote:

      This work was published in Carcinogenesis and the correct citation is: Corradi C, Vilar JB, Buzatto VC, de Souza TA, Castro LP, Munford V, De Vecchi R, Galante PAF, Orpinelli F, Miller TLA, Buzzo JL, Sotto MN, Saldiva P, de Oliveira JW, Chaibub SCW, Sarasin A, Menck CFM. Mutational signatures and increased retrotransposon insertions in xeroderma pigmentosum variant skin tumors. Carcinogenesis. 2023 May 17:bgad030. doi: 10.1093/carcin/bgad030. Epub ahead of print. PMID: 37195263.

    1. On 2023-10-24 02:18:23, user CDSL JHSPH wrote:

      Dear Dr. Bi et al,

      Thank you for your work on influenza, which provides new evidence that the residual repeat-vaccination effect might be explained by different rates of subclinical infection between repeat and non-repeat vaccinees via two proposed mechanisms: the infection-block hypothesis and enhanced vaccine immunogenicity and protection post-infection.

      As a reader who doesn't know much about the field, I can give you some reading feedback for your reference.<br /> First, I think your article provides four important pieces of evidence:<br /> 1. Repeat vaccinees were vaccinated earlier in a season by one week.<br /> 2. Clinical infection influences individuals’ decision to vaccinate in the following season while protecting against clinical infection of the same (sub)type.<br /> 3. Adjusting for recent clinical infections did not strongly influence the estimated effect of prior-season vaccination.<br /> 4. Adjusting for subclinical infection could theoretically attenuate this effect.

      On the basis of your good work, I would like to offer a bit of advice for readers who are not experts in this area, starting with the article's structure; I hope this provides some perspective to help you publish. First of all, the title may be too long for non-specialized readers and may lose some of their attention. Furthermore, the explanations of within-season waning, recent clinical infection, and subclinical infection could have come in the introduction rather than first appearing in the methods. Another thing is that I think you could put a conclusion with some summarizing words underneath each figure so that readers might find the findings more easily. Also, some of the theoretical modeling you put in the appendix is necessary for understanding the model; if you could summarize the necessary parts and move them into the methodology in the body, it would help understanding.

      The next aspect is about the research itself: your work is very relevant, and on this basis you are perfectly placed to capitalize on it. You could try to find some way (such as sampling) to compare your theoretical model with the results from the data you obtained, so that you can speculate about the effect of the vaccine in reality and the possible number of subclinical infections.

      Comparing your model with real data would also be an interesting aspect, as it could emphasize the correctness of your model and increase the credibility of your article, although overall this article is already highly relevant with enough realistic data.

      However, on this basis, one can consider whether the data from the different regions (the five regions sampled) are very different. For example, are the probabilities of turning to a vaccination strategy similar in different regions, and do repeat vaccinees in each region tend to get vaccinated earlier in the flu season than non-repeat vaccinees? These comparisons of data from different regions could be added to the article, as they relate to the reproducibility and generalizability of your model and conclusions.

      Finally, thank you for this article, which provides very good evidence for the causes of the reduced effectiveness of repeat influenza vaccination. This article has the advantage of incorporating many details that were not considered in previous studies and provides a good interpretation of the errors, offering new ideas and theoretical models for the field. I personally learned a lot of research ideas from you through this article; thank you for your work!

    1. On 2023-11-03 16:41:46, user Dr. Hans-Joachim Kremer wrote:

      It is not well described what exactly the control is. The estimates in Figure 1 are not comprehensible given the few details provided.<br /> Fig 1: It is written that Pfizer bivalent vs. Moderna bivalent is shown; however, the text suggests that Pfizer bivalent was compared with its own control (before vaccination). Which is true? If in fact you compared them to their own control: where are the respective details?<br /> It appears that in that figure the "all" in the first row has a different meaning from the "all" in the third row. Note that the "16" of the first row, denoted as "all", is repeated in the fifth row, but denoted as "new".<br /> Anyhow, the "new" is not explained in the text, and the "all" might have a meaning different from common understanding, as it appears that it stands for "after matching", so not really "all".

    1. On 2023-12-07 15:20:52, user Alex Liber wrote:

      This article has now been published: Liber AC, Faraji M, Ranganathan R, Friedman AS. How Complete Are Tobacco Sales Data? Assessing The Comprehensiveness Of US Tobacco Product Retail Sales Data Through Comparisons To Excise Tax Collections. Nicotine Tob Res. 2023 Nov 2:ntad214. doi: 10.1093/ntr/ntad214. Epub ahead of print. PMID: 37933997.

    1. On 2023-12-14 17:58:32, user Michelle Carras wrote:

      Very curious about this. It's worth considering, but the authors had some big challenges with their sample, which is a relatively well-off, very White population, a bit older than the US average. They have a very high rate of premorbid asthma (about 2-4x the usual), but the thing I find most curious is the use of naltrexone (20% of the sample) and ivermectin (18% of the sample) to treat these new symptoms. Since these uses are very off-label, it makes me wonder about the types of patients who get them. Same with intermittent fasting as a non-pharmacological treatment. I'd be interested in seeing other responses to this and whether it ultimately gets published.

    1. On 2024-01-04 18:43:48, user John Beach wrote:

      As a non-doctor with no opinion on ivermectin, I read<br /> "Effect of Ivermectin 600 ug/kg for 6 days vs Placebo"<br /> to learn.

      My questions:

      1.a.) What is being compared to what?<br /> Placebo isn't identified in the paper or the Supplemental material. Where can I look up the substance used as placebo?

      1.b.) Does the placebo contain folic acid or folate? Or a placebo containing Magnesium, Sodium, or Potassium in some form?

      Reason for question 1: is it possible the study assumed the placebo was inert, but fortuitously discovered instead another treatment that works against COVID with efficacy equal to ivermectin?

      2.a.) Is there a supplement-to-the-study that shows the outcomes sorted by Cytochrome polymorphism?

      Reason for question 2:<br /> CYP polymorphisms can affect how our bodies respond to virus.<br /> Some CYP polymorphisms can prevent a patient from even metabolizing the treatment.<br /> Furthermore, CYP3A4 mediates metabolism of ivermectin, but COVID-19 reportedly decreases metabolism of drugs by CYP3A4

      e.g. via GoogleScholar, I found<br /> https://tandfonline.com/doi... <br /> "The impact of COVID-19 infection on cytochrome P450 3A4-mediated drug metabolism and drug interactions"

      Back to my questions...

      2.b.) Is there a way to determine if ivermectin was metabolized by a study participant, and if the molecule was circulating in their system? If so, was there any variability in study outcomes when those outcomes were sorted by Cytochrome polymorphism?

      3.a.) Has any past research shown that the placebo ingredients interfere with any of the following?

      • Phospholipase PLA2
      • Lipid pathways
      • Glycolysis, e.g. <br /> PKM2 (pyruvate kinase muscle isoform 2)
      • lactate dehydrogenase
      • pyruvate kinase
      • glyceraldehyde phosphate dehydrogenase

      If so, evidence suggests that these, and therefore the intended placebo that affects these, change human biology relevant to the COVID disease process.

      It looks like a lot of good work was put into the Duke study, and so I am trying to learn from it. Any answers you can provide are much appreciated.

    1. On 2024-04-27 22:04:26, user Linette Roungchun wrote:

      I am a TSW sufferer and this work is important. I have suffered with TSW on and off since the young age of 3, so to see the science backing up the fact that TSW is in fact, a separate entity from Eczema is so validating and healing. Huge thank you to Kelly, Kathy, and Dr. Ian Myles!

    2. On 2024-04-28 00:05:10, user pfwindsor wrote:

      Thank you for the opportunity to comment on this excellent TSW study. I am not a scientist, but have more than 10 years of personal experience with TSW and have studied and published material through the American Academy of Dermatology. I also moderated ITSAN’s online forum for several years where I reviewed thousands of posts and responded to thousands of questions about TSW. I am currently involved with ITSAN research advocacy and would be happy to offer additional input.

      Line 33 and Line 66 refer to “prolonged exposure” and “longterm use” of TCS. While my own experience with TSW involves long-term use of TCS, the anecdotal data suggest that some TSW cases follow a short-term course of TCS treatment, such as for poison oak. Others report TSW symptoms emerging after decades of TCS use. Our theories are centered around multiple variables for susceptibility to TSW including, but not limited to, length of use. We also considered potency, quantity and length of exposure, and areas of application. Your study doesn’t seem to address these other potentially significant variables.

      Line 67 further states that TSW is related to “abrupt cessation” of TCS use. This may be the case for your small study cohort, but it has not been studied or established within the TSW community.

      Line 73 -74 states that “improvement is seen through avoidance of TCS therapy”. This is misleading. Overwhelming anecdotal evidence suggests that TSW symptoms that emerge after cessation of TCS become more acute before they resolve.

      Line 306-307 suggests a threshold of 4 months of TCS exposure. Is this an arbitrary threshold identified for purposes of your study or is there a source that proposes or establishes this threshold? I am not familiar with any particular exposure timeframe associated with a risk of developing TSW. In fact, given the anecdotal evidence of TSW with short term exposure, this may not be a valid metric.

    3. On 2024-04-28 21:04:46, user Marly Peel wrote:

      PLEASE! PLEASE! PLEASE do more research & acknowledgement of this TERRIBLE problem! I just recently discovered info on this & it all matches my symptoms exactly. Imagine a horrific itch that you cannot scratch enough to make it STOP! I don't think I've ever been this close to panic as I have during those times. There's no way to sleep or rest AT ALL when this is happening. That doesn't even include the flaking, shedding, red & raw areas, irritated & antsy feelings one can have. WE NEED HELP! Plus, there needs to be a HUGE WARNING included with every top.steroid...that doctors know about & take to heart. Top.steroids might be a quick "fix"...but they DO NOTHING to address the underlying reason for the problem in the first place. It's time for common sense to COME HOME!

    4. On 2024-04-27 18:39:09, user Haley DelPlato wrote:

      As a young adult whose life has been put on hold for the past 3 years due to Topical Steroid Withdrawal, I can't thank you enough for this work!

      Seeing studies about TSW not only helps validate my pain that so many medical professionals have dismissed, but also contributes greatly to advancements in dermatopathology looking forward. The current stigma that makes TSW such a controversial concept NEEDS to be eradicated, a complex task that ultimately relies on substantiated clinical proof to combat misinformation. Unfortunately, the current scope of dermatopathology has kept so many folks unaware of TSW and trapped in harmful cycles of topical corticosteroid addiction. I hope this will be the first of many legitimate works seeking to uncover the truths about this tragic condition so future generations will be believed, treated, and cared for with dignity, in ways the current dermatological standards simply haven't allowed for.

      Appreciate the strides this study has taken toward a more compassionate reality for TSW sufferers!

    5. On 2024-04-28 19:43:09, user Laura Mihalidesz wrote:

      I am a TSW sufferer; my condition took a significant change for the worse when TS were introduced to my life 30 years ago. I stopped the usage at age 26 and went into TSW, and the symptoms are incomparable to eczema. This study is important for understanding what harm TS can cause in the long run; after 9 years in TSW I still suffer from symptoms. The importance of this research is not in question, but rather an answer to many patients' questions and a path to finding and developing treatment methods.

    6. On 2024-05-07 17:04:04, user Katy Ross wrote:

      There is a current flawed premise that ‘adherence’ to steroid treatment is a good idea and anything querying whether that’s a good idea is deemed to be a phobia, as opposed to a clear, rational and well reasoned concern.

      I am out the other side of topical steroid withdrawal syndrome, and have lost a lot to the trauma it brings (in all matter of ways). Sufferers like me are desperate for science-backed research, and it’s great to see progress here. It will be fantastic when a diagnostic code has been approved and further research is available to differentiate TSW from other skin conditions so that patients can be acknowledged and treated appropriately.

      The former Chair of the British Dermatology Council said that as many as 1 in 10 patients may have TSW and it may be seriously under-diagnosed, and the hashtag #TSW has racked up more than one billion views on TikTok - I applaud anyone who’s working towards ending this huge amount of unnecessary suffering.

    7. On 2024-05-10 04:50:19, user Kaitlin Lee wrote:

      Thank you for this long overdue research. It gives me hope that in the near future TSW will be universally diagnosable, treatable, and most importantly, prevented. I have had eczema for ten years, just a few patches here and there, which never interfered with my livelihood. Now 5 months into TSW, so many things have been put on hold for me. People suffering from TSW need treatment and support and medical professionals need to start acknowledging this condition.

    1. On 2024-04-30 16:10:19, user Cameron Daniels wrote:

      Hello! My name is Cameron Daniels and I am an undergraduate student at Sam Houston State University. As part of a class project, we were told to pick a preprint and provide the authors with relevant comments that could help their study. I really enjoyed reading your study and learning about new data collection techniques. Here are my comments that could help you improve the paper:

      * List the number of cycles for the first round of PCR in the patient samples. You did it for the in vitro experiment, but you must've forgotten about it in the patient one. The number of PCR cycles helps others replicate your experiment and also shows whether too many cycles were run, because more cycles lead to more mistakes, chimeras, and faulty sample reads. Consider looking at this article to validate my response: Sze MA, Schloss PD. 2019. The Impact of DNA Polymerase and Number of Rounds of Amplification in PCR on 16S rRNA Gene Sequence Data. mSphere 4:10.1128/msphere.00163-19. https://doi.org/10.1128/msphere.001...

      * Specify filtering for chimeras and PCR errors, which is a must to ensure accurate data.

      * List the number of sample reads for the ASV units. The sample reads will help understand your data better.

      * Make sure to include coverage measures, computed before the filtering of singletons. Calculating Good's Coverage or Chao1 helps to ensure that representative sampling occurred.

      * Great introductory paragraphs but seems to have many long run-on sentences. Consider breaking up those sentences for readability.

      * A preserving agent is mentioned numerous times as important to capsule function. What is this agent and what concentration?

      * Fig4A visualizes beta diversity. You should also use an appropriate statistical test, such as ANOVA, to determine if the community compositions are significantly different with probiotic addition.

      * The probiotic contains two specific strains, while the microbiome data are from V3-V4 sequencing, which cannot identify at the species or strain level.

      * An increase in Lactobacillus or Bifidobacterium is expected for the probiotic-treated patients. What would be more interesting is to analyze if certain other taxa are significantly increased or decreased in abundance by the probiotic strains, as the question of how those two strains affect the rest of the community is scientifically important and would enhance the literature.

      * Figs. 5 and 6 show that stool sampling and the capsule are different. The authors also need to include an analysis showing that capsule and endoscopy are not different, since capsules are proposed as a non-invasive alternative.
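      As a minimal sketch of the coverage suggestion above (Good's coverage is C = 1 − F1/N, where F1 is the number of singleton ASVs and N the total reads; the read counts here are made up for illustration):

```python
# Hedged sketch: Good's coverage from per-ASV read counts, computed
# before singleton filtering as the comment suggests. Counts are made-up.
def goods_coverage(read_counts):
    """Good's coverage C = 1 - F1/N, where F1 is the number of ASVs
    observed exactly once and N is the total number of reads."""
    n = sum(read_counts)
    f1 = sum(1 for c in read_counts if c == 1)
    return 1.0 - f1 / n

# Hypothetical per-ASV read counts for one sample: 3 singletons, 876 reads
counts = [500, 200, 120, 40, 10, 3, 1, 1, 1]
print(round(goods_coverage(counts), 4))  # → 0.9966
```

A value near 1 indicates that nearly all reads belong to ASVs seen more than once, i.e. the sample depth is likely representative; removing singletons first would inflate the estimate.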

    1. On 2024-06-08 00:47:24, user Renzo Huber wrote:

      The manuscript entitled “Laminar multi-contrast fMRI at 7T allows differentiation of neuronal excitation and inhibition underlying positive and negative BOLD responses” is a methods paper that estimated metabolic changes (CMRO2) across cortical layers.

      The subject matter is relevant for the field. (layer-)fMRI suffers from the interpretability challenge of ‘only’ capturing an indirect measure of neural activity. This study aims to estimate neural energy demand more directly with a newly re-implemented multi-contrast sequence of CBV, CBF, and BOLD.

      The method is benchmarked on previously established tasks (finger tapping) and applied to visual retinotopic stimuli.<br /> The study is clearly described, and the results on positive responses look robust and convincing.<br /> The results on negative responses are weaker, less clear, and less convincing, though.<br /> One advantage of this study compared to previous laminar CMRO2 studies is that it does not rely on a Grubb coefficient that relates CBV and CBF. Instead, the study at hand measures both parameters concomitantly.

      There are some model assumptions that are not really justified (detailed below).

      I recommend that the editors publish this manuscript provided the authors make a few small revisions.

      Detailed comments are below:

      1.) The Davis model on CMRO2 estimation is based on many assumptions that might not be valid for the spatial scale of laminar fMRI with GRASE. I believe the authors could spell out the assumptions that they are making and discuss if and how much they matter for the conclusions.

      1a) The Davis model is based on Fick’s principle. This assumes that delivered oxygen (via CBF) is either (i) sitting in the voxel (CBV), (ii) metabolized (CMRO2), or (iii) drained away (BOLD). It’s a mass-balance principle. This assumption is valid for conventional 3mm voxels that cover the entire vascular tree, but for laminar resolution it is not valid anymore. The exchange (CBF) happens in different layers than the draining (BOLD). So in superficial voxels, when there is a BOLD signal change without any CBV or CBF change, the Davis model produces unphysiological results.

      1b) The Davis model solely parametrizes the venous CBV that contributes to the BOLD signal; it does not include arterial CBV. In the study at hand, the authors take VASO and its estimation of total CBV and put it into the equation that is meant for venous CBV only. Given that venous CBV is weaker, slower, and has a different sensitivity to superficial layers [Huber 2014 10.1016/j.neuroimage.2014.04.022], this can result in skewed estimations of CMRO2. Previous studies on laminar CMRO2 have used a scaling factor to account for this [Guidi 2016 10.1016/j.neuroimage.2016.06.030]. The study at hand does not account for the mismatch between total CBV and venous CBV.

      1c) The power law that equates BOLD signal changes with oxygenation changes was originally estimated based on a supralinear effect: “a linear large vessel component is combined with small vessel contributions, which tend toward a quadratic effect on relaxivity according to the Luz-Meiboom model for diffusion-mediated exchange on the capillary scale” (Davis paper, 1998). In my understanding, this has always been applied with gradient-echo BOLD. In the study at hand, the authors apply the same relationship to GRASE BOLD. Based on modeling work in [Scheffler 2021, https://doi.org/10.1002/mrm...], the vessel sensitivity and the relationship between intra- and extravascular BOLD depend on vessel radius and flip angle. This is different from GE-BOLD, which does not have these dependencies. This makes me wonder if it's justified to use a universal beta value in the Davis model for GRASE BOLD. Maybe beta varies a lot across layers and areas?
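      For reference, the Davis model under discussion is commonly written (in its standard calibrated-BOLD form from Davis et al. 1998; symbols as conventionally defined, not taken from the manuscript under review) as:

```latex
\frac{\Delta S}{S_0} \;=\; M\left[\,1-\left(\frac{\mathrm{CBF}}{\mathrm{CBF}_0}\right)^{\alpha-\beta}\left(\frac{\mathrm{CMRO_2}}{\mathrm{CMRO_{2,0}}}\right)^{\beta}\right]
```

      where M is the calibration constant, α the flow-volume coupling exponent, and β the oxygenation-relaxivity exponent whose universality across layers is questioned in point 1c.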

      2.) The study by Bohraus et al. 2023 also used laminar CBV, CBF, and BOLD to estimate CMRO2, with a laminar peak (in monkeys) that seems much more superficial than the results shown here. Could the authors acknowledge that this study exists and include it in the reference list?

      Bohraus, Y., Merkle, H., Logothetis, N.K., Goense, J., 2023. Laminar differences in functional oxygen metabolism in monkey visual cortex measured with calibrated fMRI. Cell Reports 42, 113341. https://doi.org/10.1016/j.c...

      3.) It seems that the profiles in Figs. 3 and 4 are group results. It is not clear whether the corresponding maps are single-participant maps. Are the inflated brains in Fig. 4 averages in FS-average space?

      4.) It is not clear to me which experiment the results in Fig. 3 refer to. The heading suggests it's experiment 1; the figure caption seems to suggest it refers to experiment 2.

      5.) I think it would be helpful to add a zero line in Fig. 5d. It's not clear whether the authors hypothesize that the superficial layers see negative changes or that the deeper layers see positive changes.

      6.) I found Fig. 8 a bit misleading. The scatter plots are mixing many different sources of variance. The spread across points might contain true spatial patterns as well as intersubject variability, e.g., different fMRI gain due to different venous baseline oxygenation [Lu et al., 2008. https://doi.org/10.1002/mrm...]. So it's not clear what a higher correlation means. In the Davis model, CBF dominates the estimates of CMRO2; thus, any thermal noise in CBF is expected to translate into noise in the CMRO2 estimates, making them not independent parameters. I am therefore not sure that the higher CBF-CMRO2 correlation is an excellent measure for investigating which parameter is most closely related to CMRO2. But it also doesn't hurt to keep the figure in there.
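      The shared-noise point can be sketched in a quick simulation. This is a hypothetical illustration, not the paper's analysis: the Davis-model parameters (M = 0.08, alpha = 0.38, beta = 1.5) are typical literature values, and the CBF noise level is made up:

```python
import math
import random

random.seed(0)

# Hold the true CMRO2 change fixed, add measurement noise to CBF only,
# and push both through the Davis model.
M, ALPHA, BETA = 0.08, 0.38, 1.5

def davis_cmro2(dbold, f):
    """CMRO2 ratio r solved from dBOLD/BOLD = M*(1 - f**(ALPHA-BETA)*r**BETA)."""
    return f ** (1.0 - ALPHA / BETA) * (1.0 - dbold / M) ** (1.0 / BETA)

true_f, true_dbold = 1.5, 0.02
fs, cmro2_est = [], []
for _ in range(2000):
    f_noisy = true_f + random.gauss(0.0, 0.15)  # thermal noise on CBF only
    fs.append(f_noisy)
    cmro2_est.append(davis_cmro2(true_dbold, f_noisy))

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Although true CMRO2 never varied, the estimates correlate strongly with CBF
print(pearson(fs, cmro2_est))
```

      With noise only in CBF, the CBF-CMRO2 correlation comes out nearly perfect, so a strong correlation in a plot like Fig. 8 need not mean that CBF is the physiologically dominant driver.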

      7.) In the discussion, the authors compare their beta value with the literature. I think it would be helpful to mention that beta is not solely a tissue-property constant: it is expected to differ across field strengths, TEs, and BOLD contrasts (GE vs. SE).

      8.) Typo in the discussion: "rang from 0.9…" should presumably read "range from 0.9…".

    1. On 2024-10-19 20:32:11, user CDSL JHSPH wrote:

      I thoroughly enjoyed reading your paper and found it to be a significant contribution to the field of tuberculosis treatment. The exploration of model-based methods, particularly MCP-Mod, to enhance traditional qualitative approaches is both timely and necessary. Your findings highlight the effectiveness of these methods in detecting duration-response relationships, especially in small sample trials, which is a crucial aspect of optimizing treatment strategies.

      I am particularly intrigued by the potential for these model-based approaches to be applied beyond TB. It would be fascinating to investigate whether the insights gained from your research could be translated to other bacterial infections, such as Staphylococcus or Streptococcus species, as well as to the treatment of viral and fungal infections. This could open new avenues for improving antibiotic and antiviral therapies.

      Additionally, your suggestion to incorporate patient characteristics and risk factors into the analysis is commendable. I believe that further exploration of how different design parameters—such as sample size and time intervals—affect the accuracy of the model could provide valuable insights. Understanding how to integrate more patient-centric data into real-world clinical settings would enhance the applicability and scalability of your findings.

      Thank you for your valuable contribution to this important area of research. I look forward to seeing how your work evolves and the potential implications it may have for personalized treatment strategies in the future.

    2. On 2024-10-18 00:43:20, user CDSL JHSPH wrote:

      Thank you for sharing your research. I have read your paper and deeply appreciate the necessity and advantages of using model-based methods, such as MCP-Mod, to improve traditional qualitative approaches. The model-based methods you proposed have demonstrated excellent performance in detecting the duration-response relationship, especially in the context of small sample trials, and their potential application is very promising. Additionally, your suggestions for incorporating patient characteristics and risk factors pave the way for more personalized treatment strategies in the future. However, I wonder if it would be possible to further explore how different design parameters, such as sample size and time intervals, might affect the accuracy of the model, and how more patient-centric data could be integrated into the analysis in real-world clinical settings. I believe this could help enhance the understanding of the model’s applicability and scalability. Thank you once again for your valuable contribution to this field, and I look forward to seeing more of your research in the future.

    1. On 2024-12-05 12:13:20, user xPeer wrote:

      Courtesy review from xPeerd.com

      Summary

      The study "Pre-existing anti-polyethylene glycol antibodies in pregnant women and newborns" investigates the prevalence and levels of pre-existing anti-PEG antibodies in pregnant women and their newborns, revealing significant safety concerns over the use of PEGylated drugs in these populations. The study highlights maternal age and certain lifestyle factors, such as cosmetic use and consumption of take-out food, as influencing the prevalence and levels of these antibodies. The implications for public health lie in the potential reduced efficacy and increased adverse reactions to PEGylated drugs.

      Potential Major Revisions

      1. Study Design and Population Detail Improvement: The current study design section provides a basic overview of the population criteria (pg. 7) but lacks deeper context about the broader representativeness of the sample size and demographics. Additional detail on potential regional and healthcare-specific biases can help contextualize the findings better for international readers.

      2. Methodological Clarification: Some methodological aspects, such as the exact ELISA techniques used and their validation, are mentioned cursorily (pg. 11). A more comprehensive separate methodological section could provide greater clarity and benefit reproducibility.

      3. Detailed Analysis of Influencing Factors: The discussion of influencing factors like maternal age and cosmetic use (pg. 11) needs expansion to delve into how these were statistically analyzed and how robust these findings are. The differences in antibody prevalence and levels based on lifestyle factors should be discussed with more supporting data.

      4. Potential Confounders: Addressing potential confounding variables not examined in the study could enhance its robustness. Consider expanding the discussion around other environmental and genetic factors influencing anti-PEG antibodies not addressed in this study.

      5. Discussion on Clinical Implications: While the study raises concerns about the safety of PEGylated drugs, it stops short of providing specific clinical recommendations or guidelines. This section could be expanded to address more direct implications for clinical practice and public health policies (pg. 10).

      Recommendations

      1. Expand and Detail the Methodological Section: Expand where necessary, especially focusing on the validation and comparison of ELISA techniques used between this study and previous studies.

      2. In-depth Statistical Analysis: Include more detailed statistical tables and charts to back the discussions around influencing factors and antibody levels.

      3. Address Confounders: Identify and address other potential influencing factors and confounders that were not examined and discuss their potential impact on the findings.

      4. Clinical Guidelines Discussion: Provide a more detailed discussion with possible clinical guidelines or recommendations addressing the raised safety concerns about the use of PEGylated drugs in seropositive populations.

      Potential Minor Revisions

      1. Typographical Errors: Correct minor typographical errors, such as any found in the description and presentation of data in figures and tables (pg. 7).

      2. Formatting Consistency: Ensure that formatting is consistent throughout the document, particularly around headings and subheadings, for better readability.

      3. AI-Generated Content Analysis: No significant AI-generated content was detected in the document. The content is likely produced by human authors, given the nuanced arguments and specific scientific context presented.

      By addressing these points, the study could provide a more thorough and accessible analysis of its findings, enhancing its contribution to the understanding of pre-existing anti-PEG antibodies in pregnant women and newborns.

    1. On 2024-12-06 02:09:34, user xPeer wrote:

      Courtesy review from xPeerd.com

      Summary

      The preprint titled "RGnet: Recessive Genotype Network in a Large Mendelian Disease Cohort" introduces RGnet, a novel tool for analyzing recessive genotypes in large cohorts, focusing on compound heterozygotes and homozygotes. The study applied RGnet to the SLC26A4 gene within a cohort of individuals with hearing loss, identifying significant pathogenic variants and demonstrating the tool's potential for advancing the understanding of recessive genetic disorders. The paper highlights the novelty of RGnet, the methodology involving variant preprocessing, phasing, network construction, and permutation-based enrichment analysis, and presents the results from its application to the CDGC cohort.

      Potential Major Revisions

      1. Reproducibility and Data Availability: Ensure that the datasets and tools used in this study are accessible. Although the paper mentions the availability of RGnet on GitHub, details about accessing specific datasets (e.g., CDGC data) were not explicit. Example: "RGnet is available from GitHub at https://github.com/jiayiiiZeng/RGnet " (page 1), but direct links or instructions for data access are not provided.

      2. Robustness of the Methodological Framework: Explain the justification for the chosen phasing methods (trio-based, read-based, expectation-maximization algorithms) and their combination. Example: "This study employs a combination of trio-based phasing, read-based phasing, and an expectation-maximization phasing algorithm" (page 3). However, specific reasons for selecting these methods are not provided.

      3. Statistical Analysis: Provide a more detailed description of the permutation tests used for RG enrichment analysis and why 100,000 permutations were specifically chosen. Example: the paper states that "100,000 permutations were performed" without detailing the basis for this choice (page 5).

      4. Ethical Considerations: Include a section discussing ethical considerations, particularly concerning patient data privacy and consent, given the sensitive nature of genetic data. There is no mention of ethical reviews or consent processes, which is crucial for studies involving human genetic information.

      Potential Minor Revisions

      1. Typos and Grammar: Correct minor typos and ensure grammatical consistency. For example: line 18, page 1: "To address this 18 gap" should be "To address this gap"; line 58, page 2: "research3,4" should be "research" followed by proper citations.

      2. Formatting Issues: Ensure consistent citation formatting throughout the text. In the reference section, ensure that all references, such as URL links, are formatted and hyperlinked correctly. For example, repeat the formatting used for URL links like " https://doi.org/10.1101/2024.12.02.24318353 " for other references as well.

      3. AI Content Analysis: The paper does not provide any indications of AI-generated content. It appears authentically authored by humans, considering its depth and technical specialization.

      Recommendations

      1. Increase Transparency in Methodological Choices: Provide a more granular explanation of the methodological decisions, particularly around the choice of phasing methods and permutation tests.

      2. Enhance Data Accessibility: Ensure that all datasets and supporting materials are accessible, with clear instructions for researchers wishing to replicate the study or apply the RGnet tool.

      3. Incorporate an Ethical Review Section: Add an ethics section discussing how patient data was handled, the consent process, and any relevant ethical approvals obtained for this study.

      By addressing these major and minor revisions, the paper can be significantly strengthened, ensuring clarity, reproducibility, and ethical adherence, which are vital for advancing research in genetic studies.

    1. On 2025-02-24 23:45:15, user Stephen Goldstein wrote:

      I am now pasting in the review done by Gemini 2.0 Flash Thinking Experimental. I have not edited it.

      Prompt: Please conduct an extremely rigorous, journal-agnostic peer review of this manuscript.

      Response:

      I have now thoroughly read the manuscript: "Immunological and Antigenic Signatures Associated with Chronic Illnesses after COVID-19 Vaccination" by Bhattacharjee et al., available at https://www.medrxiv.org/content/10.1101/2025.02.18.25322379v1 .

      Here is an extremely rigorous, journal-agnostic peer review of this manuscript:

      Extremely Rigorous, Journal-Agnostic Peer Review of Manuscript: "Immunological and Antigenic Signatures Associated with Chronic Illnesses after COVID-19 Vaccination"

      Manuscript DOI: https://doi.org/10.1101/2025.02.18.25322379

      Manuscript URL: https://www.medrxiv.org/content/10.1101/2025.02.18.25322379v1.full-text

      Overall Assessment

      This manuscript by Bhattacharjee et al. investigates the complex and highly sensitive question of immunological and antigenic signatures associated with chronic illnesses reported following COVID-19 vaccination. The authors employ a multi-faceted approach, combining clinical data from a prospective cohort study with deep immunophenotyping and autoantigen profiling. The study's aim is to identify potential immunological correlates of these post-vaccination chronic conditions, which is a crucial area of investigation given ongoing public health discussions and concerns.

      While the study addresses a highly relevant and important question, and utilizes advanced methodologies, this review finds that the manuscript in its current form presents significant limitations in its methodology, analysis, interpretation, and overall conclusions. From a rigorous, journal-agnostic perspective, the manuscript requires substantial revisions and further validation before it could be considered to provide robust and reliable scientific evidence. The current findings are preliminary and require cautious interpretation, and the manuscript overreaches in its conclusions and implications.

      Major Strengths (Limited in Impact by Weaknesses)

      Addresses a Highly Relevant Question: Investigating potential immunological links to chronic illnesses post-COVID-19 vaccination is undeniably important for public health and vaccine confidence.

      Prospective Cohort Design: The prospective cohort study design is a strength, allowing for systematic data collection and comparison between vaccinated individuals with and without reported chronic illnesses.

      Multi-Omics Approach: The use of multi-parameter flow cytometry for immunophenotyping and autoantigen microarray profiling are advanced techniques that offer the potential to identify complex immunological signatures.

      Clinical Data Integration: The study attempts to integrate clinical data (symptomology, diagnoses) with immunological data, which is essential for translational relevance.

      Major Weaknesses and Critical Concerns (Requiring Substantial Revision)

      Case Definition of "Chronic Illnesses": Critically Vague and Subjective. The definition of "chronic illnesses" post-vaccination is alarmingly vague and relies heavily on self-reported symptoms and broad diagnostic categories (e.g., "dysautonomia," "POTS," "fibromyalgia"). There is no clear, objective, and validated case definition used. This introduces significant subjectivity and potential for bias in patient selection and outcome assessment. The lack of specific inclusion/exclusion criteria for these "chronic illnesses" undermines the scientific rigor of the entire study. It is unclear if these are new-onset conditions post-vaccination, or exacerbations of pre-existing conditions, or simply background rates of these conditions temporally associated with vaccination.

      Lack of Appropriate Control Groups: The study compares "Vaccinated individuals with chronic illness" to "Healthy vaccinated controls." This control group is insufficient. Crucially, there is no unvaccinated control group with similar chronic illnesses. Without this, it is impossible to determine if the identified immunological signatures are specific to post-vaccination illnesses, or are simply signatures associated with these illnesses in general, regardless of vaccination status. Furthermore, a "healthy unvaccinated" control group would be essential to assess baseline immunological differences related to vaccination itself.

      Causality vs. Association: Fails to Establish Causality. The study is fundamentally correlational. It fails to provide any robust evidence for a causal link between vaccination and the reported chronic illnesses. The temporal association alone is insufficient to establish causality. The manuscript over-interprets correlational findings as suggestive of causal relationships, which is scientifically unsound. Confounding factors (e.g., pre-existing conditions, other exposures, reporting bias) are not adequately addressed or controlled for.

      Small Sample Size and Statistical Power: The sample sizes for the "chronic illness" groups are relatively small (e.g., n=15-20 per group in some analyses). Given the complexity of immunological data and the heterogeneity of "chronic illnesses," the study likely lacks sufficient statistical power to detect subtle but potentially meaningful differences, and is highly susceptible to false positives. The statistical analyses presented are descriptive and lack rigorous correction for multiple comparisons, further increasing the risk of spurious findings.

      Over-Interpretation and Over-Generalization of Findings: The manuscript overstates the significance and generalizability of its findings. The conclusions are presented in a manner that suggests a stronger link between vaccination and chronic illnesses than is warranted by the limited and preliminary data. The discussion section speculates broadly on potential mechanisms without strong empirical support from the study itself. The manuscript risks contributing to vaccine hesitancy by presenting preliminary and correlational data as stronger evidence than it is.

      Limited Validation and Reproducibility: The findings are not validated in an independent cohort. Without external validation, the reproducibility and generalizability of the identified immunological signatures are highly questionable. Given the small sample sizes and potential for batch effects in omics data, validation is absolutely essential.

      Lack of Clinical Characterization and Phenotyping: Beyond broad symptom categories, the clinical characterization of the "chronic illness" groups is superficial. Detailed clinical phenotyping, including objective measures and validated diagnostic criteria, is lacking. This makes it difficult to interpret the clinical relevance of the identified immunological signatures. The manuscript does not adequately address the heterogeneity within these broad "chronic illness" categories.

      Potential for Reporting Bias and Selection Bias: Individuals who believe they have experienced adverse events post-vaccination may be more likely to participate in studies investigating vaccine-related illnesses. This introduces potential reporting bias and selection bias, which are not adequately addressed in the manuscript.

      Ethical Considerations and Responsible Communication: Given the sensitive nature of the topic and the potential for misinterpretation, the manuscript needs to demonstrate a heightened awareness of ethical considerations and responsible scientific communication. The current manuscript, in its over-interpretation and lack of caveats, falls short in this regard. The potential for public misinterpretation and the fueling of vaccine hesitancy needs to be carefully considered and mitigated.

      Recommendations for Substantial Revision (Minimum Requirements for Consideration in a Rigorous Journal)

      Develop and Implement Rigorous, Objective Case Definitions:

      This is the most critical revision. The authors must develop and apply clear, objective, and validated case definitions for each "chronic illness" category. This requires moving beyond self-reported symptoms and broad diagnoses. Consult with relevant clinical specialists to establish specific inclusion and exclusion criteria based on objective clinical findings, validated diagnostic tools, and established disease classifications. Vague terms like "dysautonomia" and "POTS" are insufficient without precise operationalization.

      Include Essential Control Groups: Absolutely essential revision. The study must include:

      Unvaccinated individuals with similar chronic illnesses: To determine if the immunological signatures are specific to post-vaccination illnesses or are general features of these conditions.

      Healthy unvaccinated controls: To assess baseline immunological differences related to vaccination itself.

      Focus on Establishing Causality (or Acknowledge Purely Correlational Nature): The authors must either:

      Significantly temper their interpretations and conclusions to reflect the purely correlational nature of the study. Avoid any language that implies or suggests causality. Clearly state the limitations of correlational findings.

      OR

      Implement methodological approaches to strengthen causal inference. This would be extremely challenging in this observational study design, but might involve more sophisticated statistical methods to address confounding, mediation analysis, or a focus on subgroups with stronger temporal associations. However, even with these, establishing robust causality will be very difficult.

      Increase Sample Size and Statistical Power: Larger sample sizes are needed, particularly for the "chronic illness" groups, to improve statistical power and reduce the risk of false positives. This may require multi-center collaboration and a significant expansion of the study.

      Rigorous Statistical Analysis and Correction for Multiple Comparisons: Implement more rigorous statistical analyses, including appropriate correction for multiple comparisons in omics data. Focus on robust statistical methods and validation of findings.

      External Validation in Independent Cohort: Validation of the identified immunological signatures in an independent cohort is essential to assess reproducibility and generalizability. Without validation, the current findings are highly preliminary and should be interpreted with extreme caution.

      Enhance Clinical Characterization and Phenotyping: Provide much more detailed and objective clinical characterization of the "chronic illness" groups. Include validated clinical assessments, objective measures, and explore heterogeneity within these groups.

      Address Potential Biases: Acknowledge and discuss the potential for reporting bias and selection bias. Implement strategies to mitigate these biases where possible, or at least transparently discuss their potential influence on the findings.

      Responsible and Cautious Communication: The manuscript must be rewritten with a much more cautious and balanced tone. Avoid over-claiming, over-generalization, and any language that could be misconstrued as definitive proof of vaccine-induced chronic illnesses. Emphasize the preliminary and correlational nature of the findings, and the need for further rigorous research. Clearly discuss the limitations of the study and the potential for misinterpretation.

      Recommendation for Publication

      In its current form, this manuscript is not suitable for publication in any rigorous peer-reviewed journal. It suffers from critical methodological flaws, significant limitations in its analysis and interpretation, and overreaches in its conclusions. Substantial revisions are absolutely necessary to address the major weaknesses outlined above.

      Without these extensive revisions, the manuscript risks contributing to misinformation and vaccine hesitancy without providing robust and reliable scientific evidence.

      If the authors are able to undertake the very significant revisions recommended, particularly regarding case definitions, control groups, causality, and validation, the manuscript might then be reconsidered for publication in a specialized immunology or clinical immunology journal, but even then it would require extremely careful and balanced framing of the findings. Publication in a very high-impact journal like Science or Nature remains highly unlikely given the fundamental limitations in the study design and the sensitive nature of the topic.

      In conclusion, while the research question is important, the current manuscript requires a fundamental re-evaluation of its methodology, analysis, and interpretation to achieve scientific rigor and responsible communication of findings.

    2. On 2025-02-21 05:12:28, user Evan Stanbury wrote:

      Re "PVS participants also had lower anti-spike antibody titers, primarily due to fewer vaccine doses", ie the people with more vaccine doses had less PVS. This contradicts the hypothesis that vaccines cause Post-Vaccine Syndrome, since the dose-response relationship contradicts the hypothesis.

    3. On 2025-02-21 05:14:49, user Evan Stanbury wrote:

      A Machine-Learning (ML) model was able to distinguish fairly well between the "sick" and "not sick" cohorts. But most ML models are incapable of explaining why a decision was made.

      In particular, ML cannot distinguish between the following two hypotheses: (A) "PVS is a Post-Vaccination Syndrome" (i.e. iatrogenic) vs. (B) "PVS is a Post-Viral Syndrome" (i.e. a symptom of disease). This is an important distinction, since the reported training data do not include patients with Long COVID.

    4. On 2025-02-24 06:06:47, user Daniel Corcos wrote:

      This work is a good start to appreciate the existence and organic nature of a post-vaccination syndrome. What worries me is the low number of control subjects. A much larger number of control subjects would allow adjustments to be made, particularly regarding the number of vaccine doses received.

    5. On 2025-03-26 03:22:48, user Pavel Montes de Oca Balderas wrote:

      It seems risky to claim so vehemently that vaccines saved lives when the studies cited to back up this claim did not consider the syndrome described here, which could also lead to death.

      Moreover, one of the references used here to demonstrate that vaccines saved 14 million lives through a mathematical model has been called a "fiasco" because simple math with world data records does not fit their model. Also, the authors of that paper have conflicts of interest, as some are sponsored by the WHO and others.

      Longdom Publishing SL
      https://www.longdom.org
      The discrepancy between the number of saved lives with COVID-19 vaccination and statistics of Our World Data.

    1. On 2025-03-10 17:26:08, user Ruhollah Dorostkar wrote:

      I am Dr. Ruhollah Dorostkar, a specialist in medical virology with 20 years of research experience in the field of virology and vaccines.

      The article... has been published without considering the facts of the COVID-19 virus, the complications of this virus and the corona pandemic, and unfortunately without mentioning the round-the-clock efforts of Iranian scientists in identifying and controlling the corona disease.

      For example, due to the lack of diagnostic kits in the world at the beginning of the spread of the disease, more than 20 of my colleagues and I worked around the clock on rapid virus detection and identification and on the launch of a virus detection kit.

      So that our activities would be accurate and completely scientific, my colleagues and I were away from home for a long time and stationed at the hospital, and many researchers and colleagues were themselves infected with the corona disease.

      Considering the publication of thousands of articles about the scientific activities of the Corona pandemic in Iran, it is unethical to create ambiguity about the scientific activities of Iranian researchers.

    2. On 2025-03-14 14:22:15, user Amir wrote:

      The paper’s statistical approach is troublingly opaque; without a detailed account of the one-way ANOVA assumptions or the Prophet model’s parameter choices, the reported findings on vaccine delays and excess mortality remain unconvincing and methodologically fragile.

      While the manuscript highlights a correlation between delayed vaccination and excess deaths, it fails to adequately control for crucial confounding variables such as healthcare infrastructure, socioeconomic status, and demographic differences, making its causal claims appear overly simplistic.

      Claiming that 50,000 excess deaths resulted directly from delayed vaccination campaigns is a bold assertion that lacks the necessary evidentiary support; the absence of comprehensive sensitivity analyses and diagnostic metrics renders this figure speculative at best.

    3. On 2025-03-15 20:37:50, user Ali Molaverdi wrote:

      The authors appear to conflate correlation with causation when attributing excess mortality to political populism, an interpretative leap that undermines the scientific validity of their conclusions and calls for a more cautious analysis.

    1. On 2025-04-10 20:11:03, user Jeffrey_S_Morris wrote:

      This study's conclusion of -27% (negative) effectiveness does not seem to be supported by the study, given that the authors did not account for testing bias, which happens to also be 27%: the vaccinated tested at, on average, a 27% higher rate than the unvaccinated.

      To their credit, the authors acknowledged this in the following plot:

      https://uploads.disquscdn.c...

      Here it can be seen in my replotting of their Figure 1a scatterplot on the log y axis (after extracting the data by applying an AI tool to their scatterplot image), with the 27% increase being the (geometric) mean testing-rate ratio (vaccinated/unvaccinated) over the days they plotted.

      https://uploads.disquscdn.c...

      Incidentally, taking simple means of (or fitting linear regressions to) a sample of ratios is not good statistical practice, since the <1 and >1 parts are asymmetric; instead, the geometric mean (averaging on the log scale) should be used. For example, if one day is 4x higher for vaccinated and another day 4x lower, they should average out to equivalence. The average on the raw scale, (4 + 0.25)/2 = 2.125, would imply a mean 2.125x increase, which is incorrect, while the geometric mean (averaging on the log scale and then exponentiating) gets the correct result: 2^{(log_2(4) + log_2(0.25))/2} = 2^{(2 - 2)/2} = 2^0 = 1. That is why I used the geometric mean in the plot above and plot on the log scale, and I think the authors should do the same in their paper.
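      The 4x/0.25x example can be checked in a couple of lines (a minimal sketch; `ratios` is just the toy two-day sample from the example above):

```python
import math

# Two days of vaccinated/unvaccinated testing ratios: 4x one day, 0.25x the
# other. These should average out to 1 (no net difference), which the
# geometric mean delivers and the raw mean does not.
ratios = [4.0, 0.25]

raw_mean = sum(ratios) / len(ratios)  # 2.125: wrongly implies a ~2x increase
geo_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))  # 1.0

print(raw_mean, geo_mean)
```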

While they acknowledge the increased testing rate, the text of the paper dismisses it as a potential source of bias by claiming the test positivity rate is equivalent in vaccinated and unvaccinated. I agree with their logic that if test positivity were identical in the two groups, then the 27% higher testing rate could simply be a result of a 27% higher infection rate, and not of testing bias.

However, the analysis they present to support this assumption is not justified and seems flawed. They perform a linear regression over time of the daily ratio of test positivity (vaccinated/unvaccinated), and because the confidence bands include zero they conclude the test positivity is no different between vaccinated and unvaccinated, and thus that the difference in testing rate is not a bias but reflects the negative effectiveness they conclude is true.<br /> https://uploads.disquscdn.c... <br /> However, this analysis is problematic for several reasons:<br /> 1. It is not clear why a regression over time should be done to answer this question, nor why one would assume any time trend is strictly linear. It would make much better sense to compute a (geometric) mean over time, or, if modeling time trends, to use a smooth nonparametric function.<br /> 2. Means or time-trend models for ratios should be computed not on the raw scale but on the log scale, for the reasons discussed above.

Plotting these numbers on the log scale (again, after using an AI tool to extract them from the scatterplot image in the paper), I computed the geometric mean test positivity ratio and found it to be 0.80, meaning the "average" test positivity over time is 20% lower in the vaccinated than the unvaccinated, certainly not the same.

https://uploads.disquscdn.c... <br /> This lower test positivity is obscured in their original plot on the raw scale, since ratios <1 get compressed and ratios >1 expanded.

If you have a situation with the vaccinated having 1.27x the testing rate and 0.80x the test positivity, this corresponds to an infection-rate ratio of 1.27 x 0.80 = 1.016, i.e., essentially no difference in infection rate, certainly not a 27% increase in the vaccinated.
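That decomposition (confirmed infections scale as testing rate times test positivity) can be sketched directly; the 1.27 and 0.80 figures below are the ones quoted in the comment, not independently verified:

```python
testing_ratio = 1.27     # vaccinated/unvaccinated testing rate, as quoted above
positivity_ratio = 0.80  # geometric-mean test positivity ratio, as quoted above

# confirmed infections = tests performed * test positivity,
# so the implied ratio of underlying infection rates is the product
implied_infection_ratio = testing_ratio * positivity_ratio

print(round(implied_infection_ratio, 3))  # 1.016 -> essentially no group difference
```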

While not a formal analysis, this demonstrates that the vaccinated having a 27% higher testing rate along with a 20% lower test positivity rate could result in a 27% higher rate of confirmed flu infections even if the infection rate were equivalent between vaccinated and unvaccinated.

In that case, the 1.27x increased testing rate would be a testing bias that produces a spurious 1.27x confirmed infection rate even if the infection rate were not higher in the vaccinated.

      Based on this, one cannot tell from the study whether the 1.27x increased rate of confirmed flu infections is from negative effectiveness (as claimed), or from the testing bias (which is not adjusted for in the analysis).

      The authors cannot rule out the possibility that their results are caused by the testing bias, which is not accounted for in their analysis.

      Thus, I don't think the conclusion of -27% VE is valid.

      At most, they could say there is no evidence of any vaccine effectiveness vs. infection, but cannot conclude a significant negative effectiveness because of failure to account for the testing bias.

Of course, there are designs that adjust for this testing bias, namely test-negative designs, but the authors eschew this design, seemingly because it yields odds ratios rather than relative rates, which they express concern are not as intuitive to grasp.

To me, that seems like a relatively minor issue compared to a testing bias of sufficient magnitude to drive spurious results.

      If I were reviewing this paper, I'd require them to adjust for the testing bias, and ideally perform a test negative design, even if considered a secondary analysis.

Of course, test-negative designs have their own limitations and potential biases, but at least considering one as a secondary analysis would be useful to see whether it yields equivalent results. If not, that should raise questions about whether the authors should boldly conclude negative effectiveness in this study, or instead more carefully conclude a lack of evidence of vaccine effectiveness in their cohort.

These concerns are also summarized in a thread on http://x.com

    1. On 2025-04-23 12:00:02, user Pei Meng wrote:

      I read your paper with great interest, but I have a question regarding the following statement:

      “This approach identified putative drivers for all aut-mCAs (Table 1). Annotation of driver genes as tumor suppressors or proto-oncogenes revealed that every + aut-mCA contained proto-oncogenes as putative drivers, notably every – aut-mCA contained tumor-suppressors as putative drivers, and = aut-mCAs contained either proto-oncogenes and tumor-suppressor drivers.”

      I noticed that TP53 is located on chromosome 17p, and from Extended Figure 2, there appear to be some losses on chromosome 17q. These 17q losses don't seem to contain tumor suppressors as putative drivers. Could you please clarify how these cases were classified, and whether 17q losses were considered as – aut-mCAs without tumor-suppressor drivers?

    1. On 2022-05-18 18:22:54, user Yosuke Tanigawa wrote:

      Hi Chelsea,

      Congrats on the impressive work and the talk at #BoG22. I am curious if your results would also help resolve the pathogenicity of rare SNVs or rare short indels. For example, is it possible to say rare (smaller) variants disrupting the boundaries of TADs identified from your genome-wide scan (Fig. 4) would likely be pathogenic? Suppose many pathogenic variants are enriched at such TAD boundaries at a well-characterized locus (perhaps MEF2C locus). In that case, it may be possible to gain insights into the pathogenicity of VUSs at other TADs. Thanks!

      Best,<br /> Yosuke

    1. On 2022-06-06 19:03:39, user Chris wrote:

During this period, the test positivity rate was about 7%. Some of those tests may have been random (schools, needed for travel, etc.), but it seems odd that the rate of having COVID would be lower for people who go get a test than for people who just pick up the phone and answer a survey, even if it relates to a 2-week period. Note that the COVID tests would also pick up people who were infected over some period in the past.

    1. On 2022-06-17 14:18:31, user Peter J. Yim wrote:

      The trial registration at ClinicalTrials.gov listed three primary endpoints: <br /> 1. Number of hospitalizations as measured by patient reports. [ Time Frame: Up to 14 days ]<br /> 2. Number of deaths as measured by patient reports [ Time Frame: Up to 14 days ]<br /> 3. Number of symptoms as measured by patient reports [ Time Frame: Up to 14 days ]

      The publication reports the outcomes for none of those endpoints.

      1. The rate of hospitalization was reported at 28 days. That was registered as a secondary outcome.
      2. Mortality was reported at 28 days. That was registered as a secondary outcome.
      3. The number of symptoms was only reported at baseline.

      This article is close to irrelevance on the question of the efficacy of ivermectin in COVID-19.

    1. On 2022-07-15 12:36:40, user Bob wrote:

So what happened with the peer review process?<br /> 238 days, about two-thirds of a year, or roughly 8 months, have passed with no updates. It hasn't been declined and shown to be wrong, nor has it been approved ...

      Either the paper did a proper job or it didn't ....

    1. On 2022-07-23 18:11:27, user Rogerblack wrote:

      The mental health scales used in this protocol are in general wholly inappropriate for someone with a condition where they are significantly fatigued, facing an uncertain future, with difficulty doing normal activities of life.

This study's depression and anxiety measure ASSUMES A HEALTHY PATIENT. 'Little energy', 'trouble concentrating', and 'moving slowly' give a minimum score of 3 purely from the physical symptoms of long COVID/fatigue. If very exhausted, this can easily rise into the 'severely depressed' range.

      It is not unreasonable to use the PHQ-9 or similar as a screening measure of disease severity.

To use it in a patient population suffering from fatigue, concentration problems, and so on is guaranteed to cross-read between those physical symptoms and anxiety or depression; it is useless without a careful assessment of each question to determine whether you are measuring mental health or physical symptoms.

In past papers with this issue (many), you find unjustified claims like this from PHOSP-COVID: "The physical, cognitive and mental health burden experienced by COVID-19 survivors was <br /> considerable. This included symptoms of anxiety and depression in a quarter." Without much more work, this will lead to the conclusion that treating depression may benefit the patient when there is no interpretable (scale-valid) depression and it is a scale artifact.

The PHQ-9 and similar scales are designed for patients without significant physical <br /> comorbidities overlapping the mental state they are trying to measure. The normal scale cut-offs are only valid for that purpose.

      To use them at face value is as meaningful as noting that a symptom of Orthostatic hypotension is to wobble on standing up, and concluding that patients with one leg are likely to have OI because nearly all wobble on standing up.

I am a patient with ME/CFS who has been noticing this same issue for many years.<br /> Please, at the very least, carefully consider scale validity against per-patient severity and avoid making mental-health statements that come from inappropriate scale use. If you must use the unmodified PHQ-9 and its relatives, this must come with a prominent warning that it is only a severity measure.

    1. On 2022-09-08 01:25:06, user Martin Hensher wrote:

      Some people who have tweeted this article have taken your Long COVID prevalence estimates as being representative of Long COVID arising from Omicron / BA.5 infections. Yet my reading is that you have explicitly not counted Long COVID symptoms arising from the new COVID infections in your study period (June 2022); therefore people reporting Long COVID in this study do so from a prior infection, which could in fact have occurred at any time during the pandemic until June this year. Interpreting your Long COVID prevalence estimate as prevalence arising from recent BA.5 infection is therefore not correct - which some have done, but I hasten to add the authors have not! Would you agree that is a fair assessment?

    1. On 2022-09-19 01:01:30, user Miles Markus wrote:

      A question (a subject for future research) arises, indirectly, from this impressive analysis.

      Relatively few hypnozoites are present in chronic Plasmodium vivax malaria. But there is a large, concealed, asexual parasite reservoir.

      So does tafenoquine prevent a significant proportion of P. vivax malarial recurrences by killing many non-circulating, extra-hepatic merozoites (see reference below)?

      Or are the vast majority of recurrences of P. vivax malaria relapses (i.e. recurrences that have a hypnozoite origin), as has been suggested by various researchers?

      REFERENCE (the link below can be clicked on): Markus, M.B. 2022. Theoretical origin of genetically homologous Plasmodium vivax malarial recurrences. Southern African Journal of Infectious Diseases 37 (1): 369. https://doi.org/10.4102/saj...

    1. On 2024-02-05 00:31:50, user disqus_qMy1DU5jUb wrote:

      The preprint challenges the validity of two modelling papers by comparing COVID-19 mortality data across Japanese prefectures, assuming these differences reflect the impact of varying vaccination coverages. This comparison is flawed due to unaccounted variables like population density, demographics, urbanization, and epidemic stages. Moreover, it overlooks the interconnectivity between prefectures in virus spread. The critique simplistically equates, for example, Tokyo and Saitama as identical except for their vaccination rates, which is highly questionable.

    1. On 2024-02-20 15:54:40, user John wrote:

Cheskes et al. (2022) used two different defibrillators in their trial:<br /> Zoll X Series - rectilinear biphasic - 120, 150, 200 J protocol<br /> Lifepak 15 - truncated exponential biphasic - 200, 300, 360 J protocol<br /> Both have a 15% variance in actual energy delivered.<br /> Should consideration be given to the range of energy delivered to the VC and DSD cohorts?<br /> VC - 200 J Zoll, 360 J Lifepak<br /> DSD - 400 J Zoll, 720 J Lifepak<br /> Should future trials use a single type of defibrillator to remove this as a possible confounding variable?
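The energy ranges the question refers to can be sketched by applying the quoted 15% tolerance to the per-cohort energies listed in the comment (the figures and cohort groupings are taken from the comment, not from the trial report itself):

```python
tolerance = 0.15  # +/-15% variance in delivered energy, as quoted above

# final/total shock energy per cohort and device (J), from the comment
protocols = {
    "VC, Zoll (1 x 200 J)":     200,
    "VC, Lifepak (1 x 360 J)":  360,
    "DSD, Zoll (2 x 200 J)":    400,
    "DSD, Lifepak (2 x 360 J)": 720,
}

for name, joules in protocols.items():
    lo, hi = joules * (1 - tolerance), joules * (1 + tolerance)
    print(f"{name}: {lo:.0f}-{hi:.0f} J delivered")
```

Even within a cohort, the possible delivered-energy ranges for the two devices do not overlap, which is the confounding the comment raises.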

    1. On 2024-03-14 13:13:14, user Rune Wilkens wrote:

This is a very interesting study! Why not discuss and highlight that 20% of the "cirrhosis" patients have PBC? One of the big drivers of the difference between IBD and non-IBD looks to be PBC ("immune-mediated") or even viral etiology. This provides a different picture.

    1. On 2024-04-11 17:36:56, user JMIR Publications wrote:

      Join JMIR Publications & PREreview for a Live Review of this preprint: Assessing the Incidence of Postoperative Diabetes in Gastric Cancer Patients: A Comparative Study of Roux-en-Y Gastrectomy and Other Surgical Reconstruction Techniques - by Tatsuki Onishi medRxiv: https://hubs.la/Q02stzCL0

      April 19, 9am PT / 12pm ET / 4pm UTC

      Learn More & Register: https://hubs.la/Q02stwwt0

    1. On 2024-08-21 16:23:52, user DUPA- Preprint Review wrote:

Overall, this is a well-designed and conducted analysis that provides valuable insights into comorbidity patterns among early COVID-19 deaths in the United States. The manuscript presents important findings on the morbidity patterns associated with COVID-19 mortality and offers valuable insights for public health strategies. Latent class analysis (LCA) is a widely utilized clustering method for investigating comorbidities, which effectively addresses the issue of collinearity among comorbidities in high-risk populations; it can help identify disease patterns and understand disease relationships. The findings give researchers and health departments detailed knowledge to quickly identify vulnerable populations and protect them in public health emergencies. However, addressing the suggestions outlined below will enhance the clarity, transparency, and impact of the study. Therefore, we recommend the manuscript for publication after the major and minor revisions below.

      Major Comments:

In the Materials and Methods section, line 6 of the second paragraph, it is noted that cardiovascular disease (CVD) includes a variety of diseases/conditions with different prevalence and severity. For example, hypertension may have a significantly higher prevalence compared to other diseases within the CVD group, potentially leading to a disproportionate representation. Is it possible to list the prevalence of individual diseases in the supplementary material? Additionally, it would be beneficial to separate out diseases with more than 60% (or another threshold) prevalence as a sensitivity analysis. This approach could enhance the stability of the study by avoiding amplification of the effects of individual high-prevalence diseases, and it would also provide more detail and discussion of how the present results were formed.

      Minor Comments:

1. In the Abstract, Results section, line 3: the phrase "A low frequency of comorbidities" is not precise. Rephrasing it as "where the prevalence of each comorbidity group was less than that of the entire sample" would be clearer.

      2. In the Introduction, paragraph 1: <br /> The study effectively reaffirms the importance of cardiovascular disease and diabetes. Including a comparison with other studies conducted during the same period would provide valuable context.

3. In the Discussion section, paragraph 2, lines 3-6:<br /> Cardiovascular disease was present at 23% even in the "minimal prevalence" class (the classes being cardiovascular disease and diabetes, prominent cardiovascular disease without diabetes, and "minimal prevalence"). Is there a difference in the distribution of each disease? Could the same or a different distribution further explain the large proportion of "minimal prevalence" in people over 85 years old?

4. In the Discussion section, paragraph 2, lines 7-11:<br /> What are the mechanisms behind the high rankings for kidney disease and chronic lung disease?

5. In the Discussion section, paragraph 2, line 1:<br /> In addition, please briefly state the underlying mechanisms linking cardiovascular disease and diabetes, such as their mechanisms of interaction.

6. In the Discussion section, paragraph 4, line 2:<br /> The discussion effectively interprets the findings, particularly in identifying the "minimal prevalence" class. But beyond the oldest group, the proportions of this class still lead in other age groups. Is there any other explanation for why the "minimal prevalence" class still experienced significant mortality? It would also be helpful to provide more details based on citations 28, 29 (or other literature) to explain the reasonableness of these proportions. This additional detail could offer deeper insights into the underlying factors contributing to their outcomes.

    1. On 2024-10-13 17:50:05, user Tom Hagan wrote:

Consider that niclosamide is highly lipophilic (LogP of 3.91) and that the distal rectum's inferior and middle veins have been employed to systemically transport many rectally administered, insoluble drugs. Would niclosamide have greater bioavailability if simply compounded as rectal suppositories? Apparently, from searching the journals, this has never been attempted.

    1. On 2024-11-15 15:00:42, user Chandni Khemai wrote:

      This paper has been accepted for publication in the Journal of Integrated Care, published by Emerald Publishing Limited.

    1. On 2025-02-20 16:46:44, user Caleb Hartman wrote:

SN Table 5 and SN Table 6 appear to present exactly the same mutational signature activities data for copy number signatures. The signature column labels in both tables appear to be decomposed COSMIC signature labels. I think SN Table 5 needs to contain the de novo extracted copy number activity values. Would the authors be able to resolve this discrepancy?

    1. On 2025-02-26 23:25:34, user Guido Mazzone wrote:

The sentence "This stop gain variant has not been previously reported and it is not present in <br /> gnomAD and 1000 Genomes databases" is no longer correct.<br /> ERMARD(NM_018341.3):c.1523G>A is actually present in gnomAD 4.1.0 with global AF = 1/1613140 and South Asian AF = 1/90712.<br /> This is interesting because the patient is South Asian.<br /> https://gnomad.broadinstitute.org/variant/6-169776457-G-A?dataset=gnomad_r4

    1. On 2025-04-01 17:33:08, user Richard DiBenedetto wrote:

A number of Havana Syndrome victims reported a sudden onset of symptoms resembling the effects of a concussion. Similarity of the impairments to brain areas affected by pesticides does not establish the cause of these sudden concussion-like symptoms. The sudden symptoms, from my personal experience, were more like those of radio frequency, EMP, or radar-type technology. There has been much research on microwave bioeffects, which is worth further study as a cause of Havana Syndrome.

    1. On 2025-04-29 13:39:34, user Guignabert wrote:

      Dear Dr. Sajid Shahul and colleagues, <br /> Reference 3 is incorrect and should be replaced with the following: Guignabert C, Savale L, Boucly A, Thuillet R, Tu L, Ottaviani M, Rhodes CJ, De Groote P, Prévot G, Bergot E, Bourdin A, Howard LS, Fadel E, Beurnier A, Roche A, Jevnikar M, Jaïs X, Montani D, Wilkins MR, Sitbon O, Humbert M. Serum and Pulmonary Expression Profiles of the Activin Signaling System in Pulmonary Arterial Hypertension. Circulation. 2023 Jun 13;147(24):1809-1822. doi:10.1161/CIRCULATIONAHA.122.061501. Epub 2023 Apr 25. PMID: 37096577. <br /> Thank you.

    1. On 2025-05-22 10:50:12, user Naoki Watanabe wrote:

      We are pleased to announce that this preprint has undergone peer review and has been published in a formal journal. Please refer to the final version of the article.

      Watanabe, N., Watari, T., Hosokawa, N., & Otsuka, Y. (2025). Alistipes Bacteremia in Older Patients with Digestive and Cancer Comorbidities, Japan, 2016–2023. Emerging Infectious Diseases, 31(4), 652.

      https://doi.org/10.3201/eid3104.241284

    1. On 2025-06-23 20:40:10, user FletchKennedy wrote:

      This write-up in Nature News (linked below) makes some good points. The creators of Otto-SR call it an "end-to-end agentic workflow using large language models (LLMs) to support and automate the SR workflow from initial search to analysis," but then they really only talk about the screening and data extraction processes. They also indicate that they used the pre-existing search strings when re-running the Cochrane Reviews.

I'm not saying that speeding up the screening and data extraction processes isn't valuable; it is. But building a proper Cochrane search string can take months. Saying you did 12 work-years of traditional systematic review work in two days is a pretty bold claim when you're really only doing two parts of the process.

      https://www.nature.com/articles/d41586-025-01942-y?utm_source=bluesky&utm_medium=social&utm_campaign=nature&linkId=15300380

    1. On 2025-07-03 15:36:38, user Iraq Body Count wrote:

      We commend the authors of the Gaza Mortality Survey (GMS), and in particular their Palestinian survey colleagues, for producing the first rigorous estimate of violent deaths in Gaza since 7 October 2023 which is completely independent of deaths documented and collated by Gaza’s Ministry of Health (GMoH). Also significant is that it contains the first data-driven attempt to estimate non-violent deaths, which has so far been lacking from any other source.

      Also welcome is that, while the GMoH’s numbers are notably lower than those in GMS, its authors recognise that “By naming individual victims one by one, the GMoH endows each person with a measure of human dignity.”

      In their concluding section titled “The Future”, the authors go on to state that “Undercounting of violent deaths by the GMoH is likely to persist.” However the level of this undercount cannot be consistently derived from a single snapshot survey, for the simple reason that the GMoH documentation is continually being backfilled, as we have discussed extensively elsewhere: https://iraqbodycount.substack.com/p/gazas-internal-list-of-the-killed .

      The number the authors provide for the “comparable” period to GMS is one which the GMoH put out in early January 2025: 45,650. However in the GMoH’s list published March 2025, which the authors refer to elsewhere, this number had grown to 48,440. Latest GMoH data (15 June 2025) show that they have further increased their number of verified violent deaths for the period to 49,048 individuals.

      In addition to the deaths listed by GMoH, another 4122 identified dead were known to them by 10 April 2025 but had yet to be verified for addition to their list. (See: https://iraqbodycount.substack.com/p/gazas-victim-details-and-victim-deniers ) On past evidence, most if not all of these names will eventually be included too.

      In fact, had the GMS been conducted a year earlier (January 2024) the gap between its estimates and deaths listed by GMoH would have been markedly wider, as the GMoH has increased its numbers for that early period from an initial 14,121 to 26,987 (an increase of 91%). As the backfilling has progressed, the shortfalls have become appreciably smaller. A notable and predictable pattern has been that the higher the intensity of killing, the more has needed to be completed later. At any rate, these efforts by the GMoH have been constant (not to say noble and brave) and are likely to continue to reduce the difference between competently estimated and actually-recorded casualties.

      So any figure given for the level of difference between a survey and the GMoH is temporary, provisional, and dependent on the date at which the GMoH data was accessed.

      This exemplifies some of the difficulties in comparing a snapshot view such as is obtained by a survey with an ongoing casualty recording effort conducted on a daily basis. Any such comparison needs to be done with appropriate caveats which, if not included, might have the unintended effect of setting in stone a particular estimate of official “undercounting”, thus undermining essential casualty documentation efforts, particularly where such efforts are already being impeded by the most awful circumstances on the ground.

      Hamit Dardagan and John Sloboda, Iraq Body Count, London, UK<br /> 3 July 2025

    1. On 2025-07-06 08:41:23, user xu-sheng zhang wrote:

      Dear medRxiv staff

I just want to inform you that our article has been published in The Lancet Regional Health - Europe 2025; https://doi.org/10.1016/j.lanepe.2025.101364<br /> under the title "Cost-effectiveness of vaccination strategies to control future mpox outbreaks in England: a modelling study". Could you please help signpost to it?<br /> Best wishes<br /> Xu-Sheng Zhang

    1. On 2025-07-08 12:22:49, user Md Rakibul Hasan wrote:

The article has recently been published in a journal; please see the following link:

      Hasan, M. R., Sultana, N., Panthi, S., Hasan, M., Jahan, S., & Hasanat, M. A. (2025). Fasting Plasma Glucose as a Primary Screening Test for the Diagnosis of Gestational Diabetes mellitus: Fasting Plasma Glucose and Gestational Diabetes mellitus. Journal of the Medical College for Women & Hospital, 21(1), 43–51. https://doi.org/10.3329/jmcwh.v21i1.80952

    1. On 2025-07-13 08:45:42, user Ben Auxier wrote:

In their preprint, Brackin et al. [1] present data suggesting nosocomial infection (that is, infection acquired from the clinical environment) of patients with A. fumigatus. This is a surprising finding, given the near-universal abundance of this fungus. As I detail below, there is no evidence of transmission chains within a hospital. Rather, the analyses presented fall victim to the statistics of detecting matches within populations of differing sizes, related to what is commonly referred to as "the birthday paradox". The main data in this paper consist of whole genome sequencing data from 182 isolates from 15 patients (>2 samples from each patient), 101 isolates from patients' homes, and 102 isolates from a medical centre that all patients visited. From these data, three comparisons are made: a) case samples against general environmental samples, b) case samples against their own homes, and c) case samples against the reference clinic.

The authors find links for a), consistent with reports over the last several decades that A. fumigatus populations are highly recombinant but include widely dispersed clones [2-5]. More interestingly, they find no links for b) but abundant links in c), which would be consistent with hospital spread. However, while the sample sizes in b) and c) are equivalent, the comparisons are not. Across the 8 cases (average of 11.3 isolates per case) where the housing was also sampled, an average of 12.6 isolates per house (101 total) were used for whole genome sequencing. This leads to ~1000 comparisons being made, due to substructure in the data. Notably, since some patients have long-term infections of one genotype, this number is an overestimate due to within-patient correlations. Then, all 182 patient isolates (from more patients than the 8 whose homes were sampled) are compared against all 102 isolates from the medical centre, producing over 18,000 comparisons. Thus, under a null hypothesis of no difference between the patient-hospital and patient-home data, since there are ~20x more patient-hospital comparisons (and 20% of patient samples match a hospital sample), a naive expectation would be for about 1% of patient-home comparisons to be clonally related, which should be detectable among the ~1000 comparisons.
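A back-of-envelope tally of the comparison counts described above, using the averages quoted in the comment (treating per-case isolate counts as uniform across cases is a simplifying assumption of this sketch):

```python
# patient-vs-home comparisons: per-case isolates against that case's home isolates
cases_with_home_sampling = 8
isolates_per_case = 11.3   # average patient isolates per case, as quoted
isolates_per_home = 12.6   # average environmental isolates per house, as quoted

patient_home = cases_with_home_sampling * isolates_per_case * isolates_per_home

# patient-vs-hospital comparisons: all patient isolates against all clinic isolates
patient_hospital = 182 * 102

print(round(patient_home))     # ~1139, the "~1000 comparisons"
print(patient_hospital)        # 18564, the "over 18,000 comparisons"
print(round(patient_hospital / patient_home, 1))
```

The roughly 16-fold difference in the number of pairwise comparisons, despite similar isolate counts, is the structural imbalance the comment describes.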

Unfortunately, their analysis falls into the "birthday paradox". Briefly, this paradox reflects the fact that while the chance you share a birthday with one other person is 1/365, the chance that some two people share a birthday in a classroom of 30 students is not 8% (30/365) but a surprisingly high ~70%. This is because the classroom situation involves not just a larger group but many more pairwise combinations. The chance of sharing a birthday can be thought of as the chance of sampling two identical genotypes from a population of clones. Thus, while roughly equal numbers of isolates from homes and the reference centre were used for genome sequencing, the difference in structure means that the comparisons with patient isolates are unequal. The birthday paradox shows that this math is not intuitive: the chance of finding matches increases non-linearly with the number of combinations. So, while perhaps 10 matches might have been expected between patients and their homes, which would already be a tenuous link, the expected number is effectively zero due to the smaller number of comparisons.
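The classroom figure quoted above can be verified directly with the standard collision-probability calculation (this illustrates the paradox itself, not a re-analysis of the isolate data):

```python
def p_shared_birthday(n, days=365):
    """Chance that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        # k-th person must avoid the k birthdays already taken
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct

print(round(p_shared_birthday(2), 4))   # 0.0027 (= 1/365, the pairwise chance)
print(round(p_shared_birthday(30), 3))  # 0.706, the ~70% classroom figure
```

The jump from 1/365 per pair to ~70% for 30 people comes from the 435 pairwise combinations in the group, which is the same non-linearity that inflates match counts in the larger patient-hospital comparison set.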

The actual sites of infection for A. fumigatus are important to discern. The cryptic nature of initial infections makes this a challenging task, requiring creative experimental or observational studies. However, I would argue that simply identifying clonal matches provides insufficient evidence.

      References:<br /> 1. Brackin, A. P. et al. Genomic epidemiology links azole-resistant Aspergillus fumigatus hospital bioaerosols to chronic respiratory aspergillosis. 2025.07.04.25330042 Preprint at https://doi.org/10.1101/2025.07.04.25330042 (2025).

2. Chazalet, V. et al. Molecular Typing of Environmental and Patient Isolates of Aspergillus fumigatus from Various Hospital Settings. Journal of Clinical Microbiology 36, 1494–1500 (1998).

3. Rhodes, J. et al. Population genomics confirms acquisition of drug-resistant Aspergillus fumigatus infection by humans from the environment. Nat Microbiol 7, 663–674 (2022).

4. Shelton, J. M. G. et al. Landscape-scale exposure to multiazole-resistant Aspergillus fumigatus bioaerosols. 2022.11.07.515445 Preprint at https://doi.org/10.1101/2022.11.07.515445 (2022).

5. Snelders, E. et al. Widely dispersed clonal expansion of multi-fungicide-resistant Aspergillus fumigatus limits genomic epidemiology prospects. mBio 16, e03652-24 (2025).