10,000 Matching Annotations
  1. Last 7 days
    1. On 2020-10-23 05:06:05, user Robert Clark wrote:

      This is another paper where positive effects of HCQ are left out of the conclusions the paper reports. In Table 2, the line for mortality at 28 days shows a cut by a factor of 0.54 on HCQ. The difference is not at the standard 0.05 significance level, the p-value being 0.22. However, this does not mean the result is false; it could just as well be that the sample size is not large enough for the significance to reach the 0.05 level.

      Too often this is overlooked in medical studies. A significance level of 0.05 means there is a 5% chance that the difference is just by chance. Said another way, there is a 95% chance that the difference is not by chance alone, meaning the difference is a real effect.

      But by the same token, a statistical significance of 0.22, i.e., the p-value being 0.22, means there is a 78% chance that it is a real effect. In other words, in probability terms it's more likely than not to be a real effect. (There are several online calculators of, for example, Fisher's exact test of statistical significance, such as here: https://www.graphpad.com/qu...)
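The Fisher's exact calculation that the online calculators mentioned here perform can also be reproduced directly. Below is a minimal pure-Python sketch of the two-sided test for a 2x2 table; the example table is the classic "lady tasting tea" data, not counts from the paper.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    r1, r2 = a + b, c + d          # row totals
    c1 = a + c                     # first column total
    n = r1 + r2                    # grand total

    def prob(x):
        # probability of x events in cell (1,1), given fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)   # feasible range for cell (1,1)
    # include tables as extreme as, or more extreme than, the observed one
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-12))

# Classic "lady tasting tea" table [[3, 1], [1, 3]]: two-sided p = 34/70
print(fisher_exact_two_sided(3, 1, 1, 3))
```

For small tables this enumerates every table consistent with the margins, so it matches what the online calculators report.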

      Yet when a result does not reach the 0.05 significance level, it is commonly, and mistakenly, reported as if the result had been proven wrong.

      In this regard it must be remembered that these calculated levels of statistical significance depend on the sample size. For instance, with the mortality rates for the HCQ and non-HCQ cases exactly the same as in this study, but at a large enough sample size, the statistical significance could reach the 0.05 level. This is especially important in a study such as this one, where the originally planned number of subjects had to be greatly reduced because of a reduced number of cases of the illness.
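The sample-size point can be illustrated numerically. The sketch below uses a simple two-proportion z-test (a normal approximation, not the study's own analysis) on death counts in the same 6-vs-11 ratio as Table 2, but with hypothetical arms of 100 patients each rather than the study's actual enrolment. Scaling both arms by four leaves the proportions unchanged, yet the p-value drops below 0.05.

```python
from math import sqrt, erfc

def two_prop_p(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return erfc(z / sqrt(2))                            # = 2 * (1 - Phi(z))

# Same mortality ratio (6 vs 11 deaths), hypothetical arms of 100 each:
p_small = two_prop_p(6, 100, 11, 100)
# Identical proportions, four times the sample size:
p_large = two_prop_p(24, 400, 44, 400)
print(p_small, p_large)   # the p-value shrinks as n grows
```

With n = 100 per arm the difference is not significant; at n = 400 per arm the very same proportions are.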

      Another aspect of Table 2 becomes apparent from unpacking the data. The study uses what is called a “composite endpoint”, or “composite outcome”. This means two subcases are combined into one. In this study, the cases of “invasively mechanically ventilated”, i.e., intubated, and “deaths” are combined, called the “Primary outcome” in Table 2.

      But the number of deaths specifically among patients on invasive mechanical ventilation is an important number to find out, because the mortality rates for that category have been so high. The RECOVERY trial, for example, counted it as a breakthrough when dexamethasone cut deaths in that category by 30%.

      In this study, the “Primary outcome” is the union of the two sets, “invasively mechanically ventilated” and “deaths”. What we want though is the number of those ventilated patients who died, the intersection of the two sets.

      Use the formula |A ∪ B| = |A| + |B| − |A ∩ B|, which simply means the number in the union is found by adding the numbers in the two sets minus the number in the overlap.

      We want the number in the intersection though so we’ll turn it around to get:

      |A ∩ B| = |A| + |B| − |A ∪ B|

      For HCQ:
      |ventilated ∩ deaths| = |ventilated| + |deaths| − |ventilated ∪ deaths| = 3 + 6 − 9 = 0. So 0 deaths out of 3 patients on invasive ventilation on HCQ.

      But for non-HCQ:
      |ventilated ∩ deaths| = 4 + 11 − 12 = 3, so the number of deaths on invasive ventilation not taking HCQ was 3 out of 4.
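The inclusion-exclusion arithmetic above can be checked mechanically. A minimal Python sketch, using the counts quoted from Table 2:

```python
def deaths_on_ventilation(ventilated, deaths, primary_outcome):
    """Recover |ventilated ∩ deaths| from a composite endpoint.

    By inclusion-exclusion, |A ∩ B| = |A| + |B| - |A ∪ B|, where the
    composite "primary outcome" is the union of the two sets.
    """
    return ventilated + deaths - primary_outcome

# HCQ arm: 3 ventilated, 6 deaths, composite endpoint of 9
print(deaths_on_ventilation(3, 6, 9))    # -> 0 of 3 ventilated patients died
# Non-HCQ arm: 4 ventilated, 11 deaths, composite endpoint of 12
print(deaths_on_ventilation(4, 11, 12))  # -> 3 of 4 ventilated patients died
```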

      The numbers are too small to draw firm conclusions though. It is unfortunate that the study could not be completed with the originally planned number of cases.

      One last fact left out of the conclusions of the paper that supports benefits of HCQ:

      Figure 2. Analysis of outcomes in predefined subgroups.
      For analysis of the primary outcome in the subgroup of patients receiving azithromycin at randomization, the relative risk could not be calculated because the primary endpoint occurred in 0 of 10 patients who received both azithromycin and hydroxychloroquine compared to 3 of 11 patients who received azithromycin and the placebo.

      Robert Clark

    2. On 2020-10-28 13:08:33, user juanpa wrote:

      I completely agree with what you say about the (unintended?) "forgetting" of the huge difference in deaths at 4 weeks.

      I also agree on the meaning of the p-value.

      To all this should be added another "forgetting": the percentage of intubations and mechanical ventilations in the active group is 40.9% lower than in the placebo group in the same time period (2.4% vs 3.3%).

      The number of deaths specifically on invasive mechanical ventilation is new to me.

      In my opinion there are three more criticisms to add:

      1st. If the study designers wanted to verify the degree of effectiveness of the Raoult method scientifically, they only had to clone it. It is evident that this was never their intention (there is no AZ or Zn, neither the doses nor the timing are the same, and the treatment was not started early enough either, ...)

      2nd. The funding agencies should never have financed it until the designed treatment cloned the Marseille protocol.

      3rd. For me, the reasons for the premature suspension of the study were never very clear. Did they think there would not be a second wave? Couldn't they have waited for it?
      Someone might suspect that the preliminary results were too flattering for HCQ and that the results had to be prevented, at all costs, from becoming statistically more significant.

      Sorry for my bad English.

    1. On 2020-10-23 20:18:59, user María José wrote:

      I do believe that this article is very interesting, as it combines the biological and the clinical basis in one article. I just want to congratulate the authors. On the other hand, I have some questions about your article. First, why didn't you include Annexin V? Second, why was the final part of the protocol not controlled, and why wasn't the sample size bigger?

    1. On 2020-10-24 02:52:00, user CDSL wrote:

      Dear Authors,

      I enjoyed reading about this research, and I think you all do a great job of providing logical explanations for the data you collected. However, one major question that remains with me after reading this paper is: what is the novelty of this study? There are many references in both the introduction and discussion sections to previous studies that align, or do not, with the results of this study, and it seems that the data being collected here is just another study on the same correlation between these cytokines and MDD. I think a direct reference to the novelty of this information in the abstract, discussion, and conclusion would help solidify the data being collected. Additionally, how did you all reach the conclusion that females exhibit greater serum cytokine levels than males at higher Ham-D scores? The visual data does not seem to support this conclusively, so I think it would be beneficial to elaborate on the actual statistical analysis used to reach this conclusion and to explain in the discussion why females would potentially have higher cytokine levels.

    1. On 2020-10-24 23:37:11, user Nando wrote:

      Out of 110 cases, 27 created secondary exposures, of which 23 were in closed environments.

      Conversely, 71 cases were in closed environments and did not generate a secondary exposure.

      As is, the data presented are statistically insignificant: they do not prove that closed environments increase the risk of COVID exposure.
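This claim can be checked by rebuilding the implied 2x2 table from the stated counts (110 cases; 27 with secondary exposures, 23 of them in closed environments; 71 closed-environment cases with none) and running a plain chi-square test. A pure-Python sketch, using no continuity correction and the identity P(chi2_1 > x) = erfc(sqrt(x/2)) for the p-value:

```python
from math import sqrt, erfc

# Rebuild the 2x2 table from the counts given in the comment
total, secondary, closed_secondary, closed_no_secondary = 110, 27, 23, 71
open_secondary = secondary - closed_secondary                 # 4
open_no_secondary = total - secondary - closed_no_secondary   # 12
table = [[closed_secondary, closed_no_secondary],
         [open_secondary, open_no_secondary]]

def chi2_p(t):
    """Pearson chi-square test for a 2x2 table (1 df, no correction)."""
    (a, b), (c, d) = t
    n = a + b + c + d
    chi2 = 0.0
    for obs, row, col in [(a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)]:
        exp = row * col / n
        chi2 += (obs - exp) ** 2 / exp
    return erfc(sqrt(chi2 / 2))   # survival function of chi-square, 1 df

print(closed_secondary / (closed_secondary + closed_no_secondary))  # closed-env rate
print(open_secondary / (open_secondary + open_no_secondary))        # open-env rate
print(chi2_p(table))   # large p-value: no significant difference
```

The secondary-exposure rate is almost identical in closed (23/94) and open (4/16) environments, consistent with the comment's point.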

    1. On 2020-10-26 09:27:03, user Leaf Expert wrote:

      Great research! The FDA reported that it has fully approved the use of remdesivir as a treatment for COVID-19 requiring hospitalization in all adult and some pediatric patients.

      Remdesivir is only to be administered in a hospital or health-care setting capable of providing acute care comparable to inpatient hospital care. The drug, also referred to by the FDA as Veklury, is the first treatment for COVID-19 to receive FDA approval, according to an FDA news release. It can be used for adult patients and for pediatric patients who are more than 12 years of age and weigh more than 40 kg (88 lb).

      The drug was recently in the news after it was reported to be among the treatments given to President Donald Trump during his bout with COVID-19.

    1. On 2020-10-27 02:05:30, user Critical Dissection wrote:

      Dear author,

      I enjoyed reading the article, and I liked how the abstract was divided and broken down into introduction, methods, results, and conclusion. I think that really helped me get an idea of what I would be reading. The methods section was detailed, which was good. However, I had some difficulty and confusion when reading the paper. I thought the figures could be explained better, because I had trouble dissecting them. Some issues with the methods were the reduced sample size of the study and the lack of long-term follow-up for atrial flutter relapse.

    1. On 2020-10-28 17:44:49, user Andrea Camperio wrote:

      Here you can finally find our research on the COVID pandemic, revealing that the first wave, as modeled early on by Giordano et al. (2020; reference in the text) using the SIDARTHE algorithm, which influenced the Italian government's decision toward lockdown, was dramatically worse in the model than what actually happened.

      This suggests there is good hope that the pessimistic projections for this second wave will be contradicted as well. We are still actively developing new strategies to counteract the virus's effects. We are on the brink of implementing vaccination, new medicines are becoming available, older ones have been rehabilitated, so there is good hope of winning new battles to defeat the virus.

      My personal predictions are that when the whole country, 60 million people, has been infected, 2% of them will need special and intensive care (the present fraction of infected needing intensive care is about 2%), which means 1.2 million people. If intensive care is available and sufficient, only 2% or less of them will die (based on the present survival rate in intensive care with SARS-CoV-2), which means around 12,000 people, once everyone is (and if) exposed to the infection. However, if intensive care is not sufficient, then at present rates 40% of the worst cases (of the 1.2 million) will risk their lives without intensive-care support, equal to about 400,000 people.

      All depends on the evolution of the virus. The virus is evolving in two directions, like all other airborne viruses that affect humans, such as flu. The first direction, which we have already seen, is toward being less and less lethal, because harming the host means extinguishing itself as well. The second direction, however, is more dangerous: toward faster and faster diffusion to new human hosts.

      All human flus spread very fast: within 3-5 months they affect a very large portion of the population (30-40%), with very low mortality, usually 8,000 to 12,000 lives every year, mostly old and fragile individuals (about 0.0002% of infected individuals). SARS-CoV-2 at present kills 0.1% without the support of intensive care, and 0.002% with intensive care, which means it is between 1,000 and 10 times more lethal than a normal flu. In other words, if the virus affects the whole population over ten years or more (improbably slow), the rate of people in intensive care will stay below 10,000 per month, affordable for our present health system. At the other extreme, if the virus spreads to the whole population in just one year (extremely fast given the present rate), there will be ten times more people needing intensive care than places available, which would mean around 400,000 people at risk of dying at present rates.

      Hence my personal prediction is that this pandemic in Italy will take between 10,000 and 400,000 more lives before transforming into a normal human flu, depending on the evolution of the virus regarding speed of infection and decrease of lethality.

    1. On 2020-10-29 06:19:38, user Marm Kilpatrick wrote:

      This is a very nice study. Unfortunately, two pieces of information are missing that make it very difficult to build on this study or compare it to the vast data on viral loads over time that are available from other studies:

      1) the date of symptom onset for the 13 symptomatic patients. Can you indicate this date of symptom onset on the figure with the individual viral loads (Supp Fig 13)?

      2) a conversion of viral loads from Ct values into copies per swab. This could be done either by re-running the samples with standards on the plate, or by simply running some standards with known copies. I am aware that this relationship (Ct-viral copies) can vary from machine to machine and even a little from run to run on the same machine, but without this conversion the Ct values in this study can't be compared to other studies that used different assays, machines, etc. Given that you were willing to use Ct scores from the Florida labs in your analysis (with the relationship in Figure S5) it seems like it would be possible to run a few standards and at least get an estimate of what viral loads you observed in copies/swab.
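The Ct-to-copies conversion requested above is usually done with a log-linear standard curve: Ct is regressed on log10(copies) for standards of known concentration, then the fitted line is inverted for the samples. A minimal Python sketch; the standard-curve values here are made up for illustration, not taken from any particular assay or machine:

```python
from math import log10

# Hypothetical standards: (known copies per reaction, measured Ct)
standards = [(1e6, 17.1), (1e5, 20.5), (1e4, 23.9), (1e3, 27.2), (1e2, 30.6)]

def fit_standard_curve(points):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    xs = [log10(copies) for copies, _ in points]
    ys = [ct for _, ct in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def ct_to_copies(ct, slope, intercept):
    """Invert the fitted curve to estimate copies from an observed Ct."""
    return 10 ** ((ct - intercept) / slope)

slope, intercept = fit_standard_curve(standards)
print(round(slope, 2))                        # ≈ -3.37 per log10 dilution
print(ct_to_copies(25.0, slope, intercept))   # estimated copies at Ct 25
```

A slope near the theoretical -3.32 per tenfold dilution indicates an efficient assay; as the comment notes, the fitted slope and intercept are machine- and run-specific, which is why standards must be run alongside the samples.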

      Adding these two aspects to your paper would greatly enhance its value for the broader scientific community.

      A third component which may be much more difficult for most samples, but might be possible would be to indicate the likely day of infection if this can be inferred from case investigation. This would allow the data to be even more informative in mapping the relationship of viral load back to the day of infection.

      Thank you,
      Marm Kilpatrick

    1. On 2020-10-30 09:59:29, user RS wrote:

      This is an interesting paper, and I recognise the caveats relating to correlations which the authors acknowledge. I am, however, confused. Given the results found:

      'There were similar incidence rates among SAH + MFM states (95% CI, 1.19% to 1.64%. n=34), SAH + no-MFM states (95% CI, 1.26% to 2.36%. n=9) and no-SAH + no-MFM (95% CI, 1.08% to 1.63%. n=7). However, SAH+MFM states (n=34), SAH+no-MFM states (n=9) had significantly higher averages in daily new cases and daily fatality, case-fatality-ratio (CFR) and mortality rate (per 100,000 residents) than no-SAH+no-MFM states during pandemic periods (about 171 days), respectively.'

      how can the authors conclude that

      'This study provided direct evidence of a potential decreased in testing positivity rates, and a decreased fatality to save life when normalized by population density through strategies of SAH + MFM order'?

      I have looked at the paper and I can find no evidence for that conclusion. (Normalising with regard to population density found no difference.) Indeed, the authors state that 'Furthermore, dismissing a low-cost intervention such as mass masking as ineffective because there is no evidence of effectiveness in clinical trials, is potentially harmful.' Surely non-pharmaceutical interventions such as masking should be evidence-based, as the tragedy of the advice to mothers to lay their newborns on their stomachs showed.

      I would appreciate clarification, thanks.

    1. On 2020-11-08 03:03:45, user perrottk wrote:

      Comments on “A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children”

      I question the validity of attempting to determine a BMC for the effect of fluoride intake on IQ without first ascertaining whether there is a real effect. The problem with this document is that it assumes an effect without making a proper critical assessment of the evidence for a causal effect.

      The draft paper relies completely on two studies which reported very weak relationships from exploratory analyses. There is nothing wrong with doing exploratory analyses, provided their limitations are accepted. Such analyses can indicate possibilities for future studies testing possible causes, but, in themselves, they are not evidence of causation. These studies provide no evidence of a causal effect.

      The studies this draft relies on as evidence that fluoride causes a lowering of child IQ have the following problems.

      1: Correlation is not evidence of causation, no matter how good the statistical relationship. And reliance on p-values is not a reliable indicator of the strength of a relationship anyway. The two studies relied on here do not report the full results of the statistical analyses, which would have revealed the weaknesses of the relationships.

      2: These two studies were exploratory, using existing data. They were not experiments specifically designed to establish a cause.

      3: Many other factors besides those investigated can obviously be important in exploratory studies where there is no control of population selection. While authors may claim confounders are considered, it is impossible to do this completely: there are so many possible factors to consider. Most are not included in the datasets used, and the researchers may make their own selection anyway.

      The study of Malin & Till (2015), referred to in this draft, illustrates the problems. Malin & Till (2015) reported what they considered reasonably strong relationships (p-values below 0.05 and R-squared values of 0.21 to 0.34, indicating their relationships explained 21% to 34% of the variance in ADHD prevalence). However, their consideration of possible other risk-modifying factors was limited. They did not include state elevation, which Huber et al (2015) showed was correlated with fluoridation. The strength of Huber's relationship (R-squared 0.31, indicating elevation explained 31% of the variance in ADHD prevalence) was similar to that reported by Malin & Till for fluoridation.

      Perrott (2018) showed that when elevation is included in the statistical analysis, the relationship of ADHD prevalence with fluoridation is non-significant (p>0.05). This shows the danger of relying on the results of statistical relationships from exploratory studies where consideration of other possible risk-modifying factors is limited.

      4: This draft paper relies on the reported links between cognitive factors and fluoride intake without testing for a causal effect. But it also does not critically assess those correlations. The problems of confounders have already been mentioned, but these two studies report very weak relationships or, in most cases, no statistically significant relationships.

      For example, of the 10 relationships between measures of fluoride exposure and cognitive effects, Green et al (2019) reported that only 4 were statistically significant (Perrott 2020). That is not evidence of a strong relationship and underlines the danger of assuming correlations (especially selected correlations) are evidence of causation. Incidentally, this draft paper mentions the study of Till et al (2020), which also reported relationships between fluoride exposure in bottle-fed infants and later cognitive effects. In this case only three of the 12 relationships reported were statistically significant (Perrott 2020).

      Even those relationships reported as significant were still very weak. For example, Green et al (2019) reported a relationship for boys which explained less than 5% of the variance in IQ measures.

      The relationships reported by Bashash et al (2017) were also extremely weak, explaining only about 3.6% of the variance in IQ and 3.3% of the variance in GCI. This weakness is underlined by other reports of relationships found for the Mexican ELEMENT database. Thomas (2014) did not find a significant relationship of MDI with maternal urinary fluoride for children of ages 1 to 3, although in a conference poster paper Thomas et al (2018) reported a statistically significant relationship for urinary fluoride adjusted using creatinine concentrations.

      5: As well as ignoring the incidence of non-significant relationships in these studies, this draft paper also ignores the findings of positive relationships in other studies. For example, Santa-Marina et al (2019) reported a positive relationship between fluoride intake, indicated by maternal urinary fluoride, and child cognitive measures. Thomas (2014) also reported a positive relationship of child IQ (MDI for 6–15-year-old boys) with child urinary fluoride.

      6: The draft paper describes the two studies it uses for its analysis as “robust” but ignores the fact that the findings in these and other relevant studies are contradictory. For example, the findings reported in the two papers differ in that Bashash et al (2017) did not report different effects for boys and girls, whereas Green et al (2019) did. Santa-Marina et al (2019) reported opposite effects to those of Bashash et al (2017) and Green et al (2019). These contradictory findings, together with the lack of statistical significance for most of the relationships investigated, are perhaps what we should expect from relationships as weak as these.

      Summary

      The paper relies on weak relationships from exploratory studies. Such relationships, even where strong, cannot be used as evidence for causation, and to assume so can be misleading. BMCs and similar functions derived without any evidence of real effects are not justified. While the derived BMCs may be used by activists campaigning against community water fluoridation, they will be misleading for policy makers. This sort of determination of a BMC is at best premature and at worst meaningless.

      References:

      Bashash, M., Thomas, D., Hu, H., Martinez-Mier, E. A., Sanchez, B. N., Basu, N., Peterson, K. E., Ettinger, A. S., Wright, R., Zhang, Z., Liu, Y., Schnaas, L., Mercado-García, A., Téllez-Rojo, M. M., & Hernández-Avila, M. (2017). Prenatal Fluoride Exposure and Cognitive Outcomes in Children at 4 and 6–12 Years of Age in Mexico. Environmental Health Perspectives, 125(9).
      Green, R., Lanphear, B., Hornung, R., Flora, D., Martinez-Mier, E. A., Neufeld, R., Ayotte, P., Muckle, G., & Till, C. (2019). Association Between Maternal Fluoride Exposure During Pregnancy and IQ Scores in Offspring in Canada. JAMA Pediatrics, 1–9.
      Huber, R. S., Kim, T.-S., Kim, N., Kuykendall, M. D., Sherwood, S. N., Renshaw, P. F., & Kondo, D. G. (2015). Association Between Altitude and Regional Variation of ADHD in Youth. Journal of Attention Disorders.
      Malin, A. J., & Till, C. (2015). Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association. Environmental Health, 14(1), 17.
      Perrott, K. W. (2018). Fluoridation and attention deficit hyperactivity disorder: a critique of Malin and Till (2015). British Dental Journal, 223(11), 819–822.
      Perrott, K. W. (2020). Health effects of fluoridation on IQ are unproven. New Zealand Medical Journal, 133(1522), 177–179.
      Santa-Marina, L., Jimenez-Zabala, A., Molinuevo, A., Lopez-Espinosa, M., Villanueva, C., Riano, I., Ballester, F., Sunyer, J., Tardon, A., & Ibarluzea, J. (2019). Fluorinated water consumption in pregnancy and neuropsychological development of children at 14 months and 4 years of age. Environmental Epidemiology, 3.
      Thomas, D. B. (2014). Fluoride exposure during pregnancy and its effects on childhood neurobehavior: a study among mother-child pairs from Mexico City, Mexico [University of Michigan].
      Thomas, D., Sanchez, B., Peterson, K., Basu, N., Angeles Martinez-Mier, E., Mercado-Garcia, A., Hernandez-Avila, M., Till, C., Bashash, M., Hu, H., & Tellez-Rojo, M. M. (2018). OP V – 2 Prenatal fluoride exposure and neurobehavior among children 1–3 years of age in Mexico. Environmental Contaminants and Children's Health, 75(Suppl 1), A10.1–A10.
      Till, C., Green, R., Flora, D., Hornung, R., Martinez-Mier, E. A., Blazer, M., Farmus, L., Ayotte, P., Muckle, G., & Lanphear, B. (2020). Fluoride exposure from infant formula and child IQ in a Canadian birth cohort. Environment International, 134(September 2019), 105315.

    1. On 2020-11-11 19:43:04, user Dr. Amy wrote:

      "1081 patients with a diagnosis of COVID-19 were admitted between May 5 and July 31, 2020 in our hospital. 793 patients had mild disease. 545 patients received steroids, and 125 patients received TCZ along with steroids for treatment. We did not have any control group as TCZ was available in our hospital and was a part of the treatment protocol since we started treating COVID-19 patients." I'm a bit confused as to why you can't use some of the 956 patients who didn't get TCZ as controls? Since patients on room air did receive TCZ, surely there are patients at all levels of severity who could serve as a control group to demonstrate that early course TCZ matters?

    1. On 2020-11-18 19:54:47, user Donald R. Forsdyke wrote:

      RISK ALLELES FAVOUR POSITIVE SELECTION OF CELLS POISED FOR "NEAR-SELF" REACTIVITY

      The hypervariable CDR3 regions of T cell receptors (TCRs) show specificity for peptides (p) that can associate with individual-specific sets of MHC (HLA) proteins. Different individuals inherit different sets of MHC genes (polymorphism). T cells defend against pathogens by recognizing pathogen-derived peptides complexed with MHC proteins (pMHC). However, T cells can also cause autoimmune disease by reacting with an individual's own peptides complexed with MHC proteins. Thus, there are "inter-individual differences in autoimmune disease risk," and "CDR3 patterns associated with autoimmune disease risks might indicate T cell reactivity to pathogenic antigens." Indeed, vulnerability to autoimmune disease is strongly correlated with inheritance of certain MHC sets ("risk alleles").

      From a statistical study of pMHC-TCR sequence covariance in human populations, the authors conclude that "MHC risk polymorphisms modulate the process of thymic selection and give rise to TCR repertoires that may be poised for autoreactivity." However, they also state that “T cells that cannot generate substantial TCR signaling from any HLA-peptide complex die by neglect (positive selection).” This implies that death by neglect equates with positive selection.

      In the 1970s it was proposed that, anticipating a pathogen strategy of exploiting "holes" in the T cell repertoire that had been created by negative selection of freshly arising anti-self T cells, future hosts would, through positive selection, naturally establish repertoires poised for autoreactivity. Thus, following positive selection, peripheral T cells recognize, and are maintained through tonic stimulation by, "near-self" antigens. Individuals inheriting MHC risk alleles equilibrate nearer to the perilous anti-self "brink" than individuals inheriting non-risk alleles.

      The wealth of fresh evidence on this, as provided by the authors, is interpreted as favouring the “central [thymic] hypothesis.” However, they agree that the “central hypothesis” and the “peripheral hypothesis” are non-exclusive. Indeed, their results provide important evidence supporting a combined central-peripheral hypothesis. This has recently been summarized (Forsdyke DR. Scand J Immunol. 2019; e12746).

    1. On 2021-09-11 13:44:08, user Irl Smith wrote:

      Arola et al. show that the incidence of myocarditis is in the vicinity of 140 per year per million boys aged 15 (in girls, and other boys, the incidence is roughly an order of magnitude smaller). By neglecting the prior probability of myocarditis in all persons, not just those being vaccinated, the authors render their conclusions completely untenable. In other words, while the risk of hospitalization from COVID in boys is arguably smaller than the risk from myocarditis, there is no evidence that vaccination status affects the myocarditis risk.

    2. On 2021-09-12 01:57:32, user Swapnil Hiremath wrote:

      The authors have undertaken an ambitious project: briefly, taking numerators from the VAERS database and denominators from vaccine counts from elsewhere. They then perform a 'harm-benefit' analysis with COVID hospitalization as the only harm. The whole analysis is restricted to the 12-17 age group, in whom the concern of myocarditis is admittedly higher.
      They report a risk anywhere from 1.5 to 6.1 times higher for vaccine-associated myocarditis vs COVID-caused hospitalization. Vaccines must be bad, surely.

      However, several problems are quickly apparent.

      1. The rate of myocarditis is much higher than the one reported in Ontario: 160/million for 12-15 males compared to 72.5/million from Ontario (which includes Moderna as well, which has higher rates of myocarditis than Pfizer/BioNTech). Why would this be so? There are many possible reasons, overestimation from VAERS being a probable one. On a perusal of the supplement, there are many cases involving other viral diseases which could be the real cause; additionally, many descriptions are quite vague ('the doctor told us troponin was elevated'). It is very easy to submit cases to VAERS, so the numbers reported by the authors seem to be higher than the true value. The case ascertainment performed in Ontario seems more reliable and trustworthy than user-entered data in VAERS.

      2. It is not clear why the authors chose Jan 1, when the vaccine EUA started in March for 16-17-year-olds and in May for 12-15-year-olds. In their database, there seems to be one case in March, and most of the VAERS reports are from May or later.

      3. The authors make many assumptions when it comes to which of the children had comorbidities and which did not, and multiply numbers to come up with some crude estimates. It would be useful for a pediatric diseases researcher to assess these assumptions. The 40% assumption of children hospitalized 'with COVID' and not due to COVID is a very crude untruth that the authors and others have needlessly perpetuated on social media with little foundation.

      4. Most importantly, the authors assume that hospitalization is the only bad outcome for children who develop COVID. 12-17-year-olds have died due to COVID. Some developed MIS-C. Some developed longer-term sequelae. To group them all under 'hospitalization' seems overly simplistic. Similarly, from perusing some of the vaccine-myocarditis reports, many seem to have recovered with symptomatic care. The authors seem to be minimizing COVID and maximizing vaccine-associated adverse events.

      5. It should be noted that the involvement of children in the first two waves seems to be different from what we have seen in the last 2 months with delta (for whatever reason, perhaps the lower immunization numbers in this age group).

      6. Lastly, the pandemic is not yet done. Many more children are going to get COVID in the next few months and years. We are going to have many more hospitalizations, more morbidity and, sadly, many more deaths. There will be long-term morbidity and sequelae. We do need better data to assess the risks and benefits. This study is not it.

    3. On 2021-09-13 10:33:24, user Max Sargeson wrote:

      This is a useful study in terms of demonstrating the risks but tells us little about the causal etiology of post-vaccinal myocarditis.

      Until recently I'd assumed it was coagulopathy-related, i.e., due to tiny clots or fibrin deposits in the myocardium. Others have suggested that these intramuscular mRNA injections result in the lipid nanoparticles used for delivery being pinocytosed by skeletal muscle cells (which would only be infected in the most advanced and unmanaged Covid cases, with significant viremia) and subsequently in the unusual presentation of the spike protein antigens on muscle cells (rather than on epithelial pneumocytes), thus promoting T-cell-mediated autoimmunity against cardiac muscle.

      Are the markedly elevated troponin levels of affected boys compared to girls in the 12-15 age bracket (5.2 vs 0.8 ng/ml median) after the first dose evidence for one scenario over the other? I would appreciate if someone knowledgeable in immunology could offer comment, in the unlikely case that they see this.

    4. On 2021-09-15 06:06:42, user Jakob Heitz wrote:

      Why is the time frame for Covid hospitalizations of 120 days chosen and then compared with CAE events due to vaccination? Is it assumed that an individual will get vaccinated every 120 days?

    5. On 2021-09-15 06:28:09, user Jakob Heitz wrote:

      You equate a hospitalization due to Covid with hospitalization due to CAE from vaccination. Is it possible that the hospitalization due to CAE from vaccination was only for observation and only one day in length? Is it possible that a hospitalization due to Covid was due to serious illness?

    1. On 2021-09-11 19:31:00, user Rikk wrote:

      CDC has done a good job in pinpointing covid risk factors. Age and high BMI stood out. Other studies have confirmed that vitamin D deficiency increases severity of disease. We can obviously ignore age in a study of children. But why is correlation to BMI not included? It is easy to obtain. Vitamin D status should be included where available.<br /> Such a high-level view becomes crude, as the individual variation of risk factors typically has a major impact. Only with refinement of the data can good conclusions be made. I think the work should strive to use the CDC-defined risk factors as much as possible, and as an overlay to analyse the risk of myocarditis for each CDC-defined co-morbidity, especially if the study has an intent to be a guide for any sort of intervention.

    2. On 2021-07-27 22:07:18, user ReviewNinja wrote:

      Some remarks:<br /> - confidence intervals would be necessary when interpolating data from such small numbers<br /> - 90 days is a long period after a positive test for an acute event…<br /> - if you want to compare these numbers to vaccine-caused myo/pericarditis, you need to use the same method (same criteria and same codes) to determine these

    1. On 2021-09-15 14:55:56, user Geoff Bridges wrote:

      There are many, many different types of PCR tests, all of which are very accurate at detecting SARS-CoV-2; even the original Drosten et al test was quite accurate but has since been improved.<br /> The problem is that governments haven't requested the amplification or CT rate of all positive tests, so we don't know whether the person tested is infectious or not. A low CT rate up to 20 is probably infectious, a mid CT rate of around 25 is possibly infectious, and a high CT rate of 25 to 45 is probably not infectious.<br /> A study in the US suggested that 85% to 90% of positive cases are not infectious.<br /> https://www.nytimes.com/2020/08/29/health/coronavirus-testing.html<br /> A further PCR or LF test should be done a day later, after self-isolating, on the 25 to 45 group to ascertain the trajectory of the virus in the person, to see if they are coming OUT of an infection and therefore not infectious, or going IN to an infection and therefore infectious.<br /> It is the lack of CT rate information which is causing the "casedemic" and NOT a fault with the PCR tests per se.<br /> The ONS Dataset, Coronavirus (COVID-19) Infection Survey: technical data, https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/datasets/covid19infectionsurveytechnicaldata shows the CT rate for a random number of people around the UK. If they set the upper limit of an infection at 25 then approximately 60% of cases would potentially not be infectious, which could be confirmed with a Lateral Flow test 24 hours later.<br /> This would give an accurate view of how many people are actually “infectious”, whilst allowing those with positive test results but “not infectious” to carry on with their employment and lives, and would avoid the problems with a “casedemic”.

    1. On 2021-09-16 03:49:51, user Truther wrote:

      They only used the PCR test to determine previous infection, and it seems the re-infection rate is lower than the false-positive rate of the PCR test. So the question is: has that been addressed in the quoted error?
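
      The base-rate arithmetic behind this worry can be sketched with hypothetical numbers (none of these rates come from the paper; they are placeholders chosen only to illustrate the mechanism):

```python
# Hypothetical rates, NOT from the paper: if true reinfections are rarer
# than the PCR false-positive rate, most "reinfections" may be spurious.
fp_rate = 0.005        # assumed PCR false-positive rate (0.5%)
reinfect_rate = 0.002  # assumed true reinfection rate in the cohort
sensitivity = 0.9      # assumed PCR sensitivity

true_pos = reinfect_rate * sensitivity        # rate of genuine positives
false_pos = (1 - reinfect_rate) * fp_rate     # rate of spurious positives
share_false = false_pos / (true_pos + false_pos)
print(f"{share_false:.0%} of positive 'reinfections' would be false")
```

      Under these assumed rates, roughly three quarters of the recorded "reinfections" would be false positives, which is why the question about the quoted error matters.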

    2. On 2021-09-04 19:09:42, user Ben Veal wrote:

      As a qualified statistician who's been doing this stuff for over 20 years, and has worked on several medical studies I think I ought to add my voice to the crowd.<br /> There may be a few things that aren't fully accounted for such as the false positive rate for PCR tests, or unbalanced populations due to deaths of highly vulnerable members of the pre-infected group, but they should not alter the conclusions much. As mentioned by others the false positive rate for PCR tests would have the effect of biasing the risk ratio downwards, not upwards, so we should expect the effect to be even stronger than reported.

      As for the potential drop-out issue due to deaths of highly vulnerable people among the pre-infected group; this would only be a problem if there are some unaccounted for cofactors causing that high vulnerability. If this is the case then we can approximately correct for the imbalance by estimating the number of deaths in the pre-infected group based on the known infected mortality rate. <br /> I have done that calculation (see link below), and get a lower bound estimate for the 95% confidence interval of [4.3,11.23] which is still significant.<br /> However, it could make a big difference to the risk of hospitalization (again assuming there are some important cofactors unaccounted for).<br /> https://www.facebook.com/ec...
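
      The kind of confidence-interval calculation referred to above can be sketched with the standard log method for a risk ratio. The counts below are hypothetical placeholders, not the study's data and not the commenter's calculation:

```python
import math

# Hypothetical counts (placeholders, not the study's data):
# events / group size in the vaccinated-only vs previously-infected groups.
a, n1 = 240, 16000   # breakthrough infections among vaccinated-only
b, n2 = 20, 16000    # reinfections among previously infected

rr = (a / n1) / (b / n2)                       # risk ratio
se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)    # SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log)    # lower 95% bound
hi = math.exp(math.log(rr) + 1.96 * se_log)    # upper 95% bound
print(f"RR = {rr:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

      An interval whose lower bound stays well above 1, as in the lower-bound estimate quoted above, is what "still significant" means here.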

      Another criticism I have read in these comments is that they should have used a conditional model (https://en.wikipedia.org/wiki/Conditional_logistic_regression) to account for the matching. Actually a conditional model is used when there is unequal distribution of the treatment groups (pre-infected & vaccinated) within each strata (age, gender, socio-economic status & geographic region), and you are unable to use covariates to control for this. But the matching that they did ensures that this isn't the case. Furthermore they control for all but one of the strata (geographic region) with covariates.

      So, overall I trust the overall conclusion; natural immunity from pre-infection is better than vaccination, but not as good as natural immunity + vaccination.

      This does not mean governments should put a halt to their vaccination programs since that's obviously going to result in more deaths among the vulnerable, but perhaps it might be wise to reduce the vaccination rate among the less vulnerable people (i.e. young healthy people) so that they can build up natural immunity and be better prepared to fend off new variants from spreading through the population. In fact it ought to now be possible to estimate the optimal proportions of vaccinated & unvaccinated that would result in the lowest risk of contagion spread, given that we can expect to see this virus reappearing every year.

    3. On 2021-11-09 17:19:35, user MrMinerUndercover wrote:

      How exactly are you people criticizing CDC studies for their construction, when this study doesn't even tell you the number or nature of the test subjects?<br /> It just posits something without any actual data.<br /> Finally, the people whom they consider to have natural immunity from getting covid INCLUDE people who got covid and 1 shot. Would this not skew the data?

    1. On 2021-10-04 18:45:17, user MUltan wrote:

      A maternal cognitive ability measure would have been a much better predictor of infant cognitive ability than educational level, which is a rather poor proxy for mental ability. Having such a measure for both parents would be even better. Virtually all those with educational levels above high school should have some standardized test scores that would give a fair indication of mental ability. With the clinical setting, it shouldn't be at all hard to get at least a Wonderlic or other brief cognitive ability measure for nearly all the mothers, which would be vastly better data.

      The maternal stress questionnaire is also a very indirect measure -- asking about time spent interacting with infants and the character of those interactions would have been much more informative -- though these questions were not asked of the cohort prior to the pandemic, so the data would be hard to interpret, I suppose.

      The paper seems to be trying to find a social environmental cause and neglecting the possibility that the mental performance decline could be due to an environmental toxin. The CV spike protein is by far the most plausible candidate for such a toxin, nothing else is sufficiently new and widespread to have such an effect size. From summaries of research I gather that the spike protein can be: toxic to blood vessel linings, cause clotting disorders, strokes, and low blood oxygen; can cross the blood-brain barrier and the placenta; is expressed in breast milk; and can sometimes cause various pathological immune reactions, including neurological damage in some cases. The spike protein levels will have been by far highest among vaccinated mothers, so comparing the mental performance of a cohort of infants who were gestating or breastfeeding when their mothers received an mRNA CV vaccine to a contemporaneous cohort of infants whose mothers were not CV-vaccinated (and preferably uninfected as determined by antibody testing) should clearly resolve whether the CV spike protein itself is the culprit for lower infant mental performance, or rather other, primarily social factors.

    2. On 2021-12-13 22:59:33, user Just Because I can wrote:

      Greetings RI team from Utah! I must begin with niceties; "Go BRUNO"! My son graduated this past May 2021 from Brown. I am a speech and language pathologist with over 30 years of hospital, private and public school setting experience. Over the past nine years, I have professionally focused on children ages 3-5 within the public preschool and private therapeutic settings. I serve students and their parents in the most intensive and restrictive learning environments within our District due to cognitive, behavioral and communicative delays. I can't help but weigh in now, as I previously shared this article with my peers in August as I braced for the impact of the 2021 school year.

      Given your single assessment tool (I professionally do not profess strong decisions based on a single evaluative instrument, even as widely accepted at the Mullen), I've found your results to be intriguing and frankly, just as we anticipated.

      To compare to RI, our school district, closed schools for Remote Learning for only 3 mos. in the Spring of 2019 and returned to in person instruction with hybrid options in 2020. Of a caseload of 65 students, I had 3 that were online/virtual. In 2021, our District returned to essentially all in student learning.

      My informal observations this school year in Utah have been as follows:

      1. Increase in new referrals and eligible "older" 4+ year old children scoring in the remarkably delayed range for communication (standard scores <50, given a typical range of 85-115), with no previous history of EI or preschool interventions. Our TIER 3, most restrictive preschool program has a marked influx of new referrals (e.g., total students in May was 24 and currently stands at 36, with 8 new referrals in Jan.)
      2. Many declined or rarely attended virtual Early Intervention supports, skipped medical wellness visits including dentistry during the pandemic.
      3. Increase in parent report of primary concerns with behavioral components.
      4. Given the current timeframe, we are NOT seeing marked progress with an influx in discharges (no longer eligible due to more typical standard scores). We are seeing progress and we have continued to see progress through the pandemic (which at times surprised me) but the levels of improvement are not as remarkable or typical as years past.
      5. Typical communication, fine/gross motor and even cognitive delays are still present but the comorbidity of exceptional delays in social/pragmatic and ultimately, behavioral skills combined make measured learning and ultimately IEP progress at a slower rate. Social/pragmatic delays are interfering with overall progress.
      6. Parent involvement, participation, enthusiasm and grit appear markedly depressed. Educational teams walk a fine line between empathy, compassion and expecting parents and care givers to step in and "do hard things" in difficult times. The teams are using external motivators such as pizza cards to motivate parents to attempt, complete and turn in 2x monthly parent based home practice pages.
      7. Increased rate of meeting attendance with Virtual options.

      Where do we go from here? I agree, measuring student outcomes is critical, but supporting the parents (in any evidence-based manner) is, to me, a critical and crucial element. I thought that once the kids were exposed to typical learning situations, and with repetition, our inflated numbers would flatten in a year and they would bounce back into typical ranges, but it's the apathetic, tired, depressed parents who are lacking resilience and grit currently. I do think another component that would be most valuable and continues to need funding is Preschool for All (or most).

      Thank you to any cohort, parent, professional person interested in this dialogue, for reading my insights.

    1. On 2021-08-31 22:18:33, user Timmy Tester wrote:

      Why would you use hypothetical modeling data to predict results? We have kids in school now, some with mask mandates, some without, in the US, Europe and beyond. Why wouldn't you look at real-world data on actual kids in school? If you create a model that shows more covid spread with no masks, the result is kind of inevitable and not very scientifically valid.

    1. On 2021-09-04 06:31:10, user Philological wrote:

      In this version of the paper the “U” shaped response curve between covid vaccine hesitancy and education level is still mentioned. It is clear from the updated statements in this version that the data set was mortally compromised by respondents falsely linking PhD education with vaccine hesitancy. This resulted in an avalanche of anti-vaccination invective in social media and online news media, in many cases justifying covid vaccine rejection based on the relevant findings in the paper. All references to PhD’s should be removed.

    1. On 2021-09-08 03:52:00, user Matt Lee wrote:

      It would be informative to see the disease outcome comparison after removing patients from the study with acceptable exclusion criteria; an active immunocompromising condition or recent immunosuppressive therapy was used by Pfizer in their clinical trials. In addition, adjustment of the data for comorbidities would make the data more clinically meaningful.

      Because the two treatment groups could not be controlled for comparable rates of comorbidities, it may make more sense to remove them from the comparison. It's unfortunate for data analysis, that 21.5% vs 7% of the unvaccinated & vaccinated, respectively, had diabetes, a notable co-morbidity for COVID-19. Only a subset of the Charlson Comorbidity Index Categories were evaluated in this study. Just as Pfizer showed # of participants with any Charlson comorbidity for each treatment group in Table S2 of the 6 mo outcome study, https://www.medrxiv.org/con..., such information added to Table 1 would be a valuable addition.

      This data does not rule out the possibility that the differences in the Disease Outcome between vaccinated & unvaccinated could be skewed by the higher % of the population with comorbidities in the unvaccinated group.

      The differences in pneumonia in 53% vs 22% and in suppl. O2 required in 21% vs 3% in unvaccinated vs. vaccinated, respectively, may or may not still be statistically significant in the subset of patients from this study without any Charlson comorbidity.

    1. On 2021-09-08 14:41:41, user Sherri Christian wrote:

      Can you please provide details on the HD population (I assume HD stands for healthy donor)? It doesn't appear that CD24Fc treated patients were compared directly with HD. This is an important comparison, in my opinion.

    1. On 2021-09-10 01:38:54, user Tanner wrote:

      A limitation to consider: A control and experimental cohort of "unvaccinated" and "Vaccinated" does not take into account a large population of previously infected individuals. This would likely have a large impact on the infection rate of both the vaccinated and unvaccinated cohorts and help guide current policies being passed.

    1. On 2021-09-10 09:16:10, user Wolfgang Birkfellner wrote:

      I posted this comment with a few questions on my side under the wrong paper initially ... so here it is again:

      I am afraid that the statistical model of using a linear regression on exponential data is not fully adequate here.

      First, using the logarithm of the antibody level introduces a bias. Think of the specimens that have zero antibodies - after taking the logarithm, the value for these is -infinity, which renders every effort to determine a regression line totally useless. It is therefore not surprising that, for instance, the predicted value from the model for the antibody level at t0 is quite off - 6366 for the vaccinated specimens, whereas the mean is found to be 12153 and the median 9913 according to Table 2a. I know that using a linear regression on logarithmic data is a common method, but it has its pitfalls.

      Second, the data do not follow a Gaussian distribution (look at the mean and the median in Tables 2a and 2b), and apparently at least the median for the convalescent specimens does not even follow a simple exponential decay model; in Table 2b, we see a rise of the median antibody titer from 490 (t0) to 586 (t1).

      Third, it is somewhat disturbing that in Table 2b, the IQR for the median titer of the convalescent patients at t6 is given as [140-8301] - the third quartile is ten times higher compared to the values at the other timepoints.

      What I do see from the data indeed is that even six months after vaccination, the median antibody level of the vaccinated patients (447) is higher than the level for the convalescent patients (314). There is an indication that the titer might fall off more rapidly for the vaccinated cohort, but given the data as represented in the paper I consider this conclusion a bold one.
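
      The zero-titer pitfall described above is easy to demonstrate; a minimal sketch with made-up titers (not the paper's data), contrasting the naive log-transform with the common log(titer + 1) workaround:

```python
import math

# Made-up titers (not the paper's data); one specimen below detection (0).
titers = [12000.0, 9900.0, 5400.0, 800.0, 0.0]
days = [0.0, 30.0, 90.0, 180.0, 180.0]

# The naive log-transform blows up on the zero titer:
try:
    naive = [math.log(t) for t in titers]
except ValueError:
    naive = None  # log(0) is undefined, so the regression cannot even be fit

# Common workaround: log(titer + 1), at the cost of a biased intercept.
y = [math.log(t + 1.0) for t in titers]
n = len(days)
mx, my = sum(days) / n, sum(y) / n
slope = sum((x - mx) * (v - my) for x, v in zip(days, y)) \
        / sum((x - mx) ** 2 for x in days)   # least-squares slope
```

      The choice of the +1 offset (or any other small constant) is itself arbitrary and shifts the fitted intercept, which is exactly the kind of bias the comment points to.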

    1. On 2024-01-19 09:50:31, user Dr. Hans-Joachim Kremer wrote:

      I largely agree with William Bond. It is fair enough to show the mean (instead of the median) and SD in Table 1, but you definitely misplaced the estimates.

      It is also a good idea to show subset analyses by age cohorts. To retain sufficient power, I would recommend (A) confining this to W12 data, and (B) using three age cohorts: <50 (assumed to be healthy), 50-59 (in between) and >60 (assumed to be less healthy).
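
      The suggested cohort split is straightforward to implement; a minimal sketch with hypothetical ages, using the cut points from the comment:

```python
def age_cohort(age: int) -> str:
    """Assign the three suggested age cohorts: <50, 50-59, >=60."""
    if age < 50:
        return "<50"
    if age < 60:
        return "50-59"
    return ">=60"

# Hypothetical ages, only to illustrate the binning:
cohorts = [age_cohort(a) for a in [34, 52, 67, 49, 60]]
```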

      You claimed to have performed multivariate logistic regression. OK. I would then expect a clear listing of the variables adjusted for, and the attribute "adjusted" wherever applicable in Table 3.

      Then it would be nice to have, at least for W12 data, the unadjusted OR. The same for the 3 age cohorts suggested above.

    1. On 2024-04-27 20:39:23, user David Lockyer wrote:

      It's very encouraging to read of some focused research being done into TSW. For those suffering It's really important that a separate diagnostic category is identified so that the condition can be treated appropriately. I look forward to seeing this peer reviewed and published.

    2. On 2024-05-02 14:51:48, user Kathy Tullos wrote:

      TSW research is imperative to validating (or invalidating) the patient experience. As it stands, the patient experience is invalidated by the medical community based on no research: "to date there have been no systematic clinical or mechanistic studies to distinguish TSW from other eczematous disorders." To confidently tell patients something is all in their heads while doing no research to prove or disprove that stance is truly unethical. Patients numbering in the thousands have been reporting these adverse effects for the past 10 years - including a cluster of new symptoms never experienced prior to topical steroid treatment - and it is roundly dismissed. "Steroid phobia" is thought to be the culprit. The finger is pointed at the patient for underusing, overusing, or going off treatment. The only reason patients go off this treatment is because the symptoms have escalated so extremely during treatment, there is nothing left to do but see if the treatment was the problem. Of course there is an acute phase after cessation of withdrawal. But after a protracted withdrawal phase, there is improvement. We see this pattern time and again in the patient community. All anecdotal of course. There has been no intellectual curiosity from the scientific medical community to drill down and see if there is validity or an explanation. That is why this study, and future studies that it will spark, is so very important. Thank you for listening to reports and for trying to understand and fix the problem. We need to connect patients with doctors and we can not do this if patients' reports are not believed. Doctors only really listen to other doctors. Another reason we need more research like this.

    3. On 2024-05-19 04:01:18, user Natalie wrote:

      Thank you so much for this research! I’m 20 months into TSW and so excited to see research like this finally being done. I would love to see this peer reviewed and officially published so that it is able to gain a wider audience and reach more of the patients and practitioners who desperately need to be made aware of this important information.

    1. On 2024-05-01 23:32:45, user ppgardne wrote:

      This is an excellent paper, showing a clear association between variation in RNU4-2 and NDD phenotypes. The enrichment of variation in the gene between undiagnosed NDD and population cohorts was remarkable.

      I thought there were a few areas where the manuscript could be improved slightly.

      * Figure 1: Clearly define the measures “genotype quality”, “allele balance” and “total coverage”. We can infer what these mean, but definitions of each in the method section would be helpful.

      * Table 1: I spent some time gathering the population sizes for each of the count columns. Please add an extra row or two, giving the number of individuals in GEL NDD, Non-GEL NDD and the population cohort.

      * The statement “Humans have multiple genes that encode the U4 snRNA, although only two of these, RNU4-2 and RNU4-1, are highly expressed in the human brain” is slightly inaccurate. The HGNC database and reference (https://doi.org/10.15252/embj.2019103777) list just those two functional copies of U4 in the human genome. There are ~100 annotated pseudogenes however.

      * You state that there is “97.2% homology” between RNU4-1 & RNU4-2 – this is a wrong (but common) use of the term homology. You should have stated “similarity” instead.

      * Figure 3: I understand that the BrainVar RNAseq data are from samples of human dorsolateral prefrontal cortex. This should be stated in the caption.

      * Figure 3: you state that “expression of RNU4-1 & 2 is tightly correlated”. Looking at the figure, it appears the tissues with higher expression are also the ones where more samples were taken. Was the potential confounding of sample depth and/or leverage accounted for in the analysis?

      * Figure 4: it is unclear what this heatmap is showing. Is it really normalised on a per-gene basis, or is the null for SNV densities derived from the 1,000 random intergenic sequences mentioned in the methods? That would seem to be a more useful measure of variant enrichment or paucity. The ordering of the sequences is odd too; why are the paralogous genes U4/U4ATAC, U1/U11, U2/U12, U5 etc. not next to each other? Surely the paralogs are more comparable. What is the justification for an 18bp window, other than that being the size of the variable region in RNU4-2?

      * The recurrence of n.64_65insT is fascinating. And speculation on the mechanism is very worthwhile. You mention early in the manuscript the possibility of slippage in homopolymer regions, but this is not mentioned again in the appropriate section. You mention local secondary structure as a possible driver, but there seems to be very little evidence to support this based on free energy modelling.

    1. On 2024-05-25 16:10:01, user Mark wrote:

      Writing that the "simulation demonstrates that repeated boosters, given every few months, are needed to maintain this misleading impression of efficacy" (in their abstract) the authors build upon the assumption that (fully) vaccinated persons are miscategorized as "unvaccinated" for some period of time after they've received a repeated ‘booster’ vaccination.

      I wonder if there is any example, research study or country which actually proceeded this way ...

    1. On 2024-07-04 09:09:56, user Rohit Satyam wrote:

      I was wondering if you can also provide the major/minor sublineage assignment the authors obtained for the case studies included in the paper as a supplementary file.

    1. On 2024-07-31 12:40:06, user David Curtis wrote:

      The paper presents these findings as if they were novel but in fact the main result, an association of ITSN1 ptvs with Parkinson's, was published years ago on the AstraZeneca PheWAS portal (https://azphewas.com/geneView/ba08a93f-501e-44e6-a332-98ce2f852279/ITSN1/glr/binary). The current paper does cite the PheWAS publication but without making it clear that the central results have previously been reported. What the current paper seems to do is to confirm the association in a new sample and an animal model, but most readers would be unaware that the main evidence for association represents one finding from the previously reported PheWAS. Failing to mention that the results were obtained as part of the PheWAS is misleading because there were over 18,000 phenotypes tested. Without knowing this, the association results appear to be more strongly statistically significant than they actually are. In fact, correcting for the number of phenotypes tested as well as the number of genes and models tested would mean that the primary results at least would not be regarded as statistically significant. All these issues should be properly discussed.

    1. On 2024-10-22 02:23:08, user Olivia Piraino wrote:

      I really enjoyed reading your paper. This study shows that when it comes to identifying duration-response correlations and determining the minimum effective duration (MED) in phase II trials, model-based techniques like MCP-Mod and FP1 consistently outperform traditional qualitative methods like the Dunnett test. Because these model-based techniques utilize flexible statistical models, they reduce bias and variation and are more accurate in calculating duration-response curves and the MED. But the study also points out drawbacks, like the possibility of underestimating the MED in cases with small sample sizes, which raises the possibility of bias and variability. Although model-based methods are more precise, their practical application may be limited due to their complexity and the requirement for meticulous control of confidence intervals.

      After reading your paper, I wonder if this approach would work for other long-term treatments for diseases like HIV. Also, how would these model-based approaches perform using real-world medical patient data, which often includes complex medical conditions, comorbidities, and variations in patient adherence compared to the controlled clinical trial environment? Do you think this will enhance model flexibility or create more challenges?

      Overall, I enjoyed your pre-print and look forward to seeing more of your work in the future.

    1. On 2024-11-30 22:32:43, user xPeer wrote:

      Summary<br /> The preprint investigates the remodeling effects of icosapent ethyl (IPE) supplementation on plasma lipoproteins and its subsequent impact on cardiovascular disease (CVD) risk markers in normolipidemic individuals. The study finds that IPE supplementation significantly enhances eicosapentaenoic acid (EPA) levels in the plasma, reducing major CVD risk markers such as triglycerides, remnant cholesterol, and apoB levels. There are consistent alterations across all lipoprotein classes, influencing their lipidomes, reducing proteoglycan binding properties, and potentially decreasing the atherosclerotic risk. However, the study's small sample size and short duration limit the generalizability of findings.

      Major Revisions

      1. Extended Sample Size and Duration:<br /> The study's findings are constrained by a limited sample size and short duration (28 days), impeding the generalizability to broader populations or those with pre-existing cardiovascular conditions.<br /> Example: Expand the cohort size and extend the duration to assess long-term impacts and variability of EPA incorporation among different CVD risk groups (Discussion, Page 14).

      2. Detailed Mechanistic Insights:<br /> The precise mechanisms by which IPE alters lipoprotein characteristics and its direct influence on cardiovascular outcomes remain unclear.<br /> Example: Detailed mechanistic studies on how IPE-induced lipid species changes relate to atherosclerosis progression are needed (Results, Page 11).

      3. Individual Variability Analysis:<br /> The study underscores substantial interindividual variability in response to IPE supplementation, calling for personalized treatment approaches.<br /> Example: Investigate genomic or lifestyle factors contributing to variability in response to IPE (Results, Page 13).

      4. Proteoglycan Binding and Aggregation:<br /> The study notes a reduction in proteoglycan binding and different responses in LDL aggregation among participants but lacks detailed analysis.<br /> Example: Provide more comprehensive data and rationale behind the differential LDL aggregation responses post IPE-supplementation (Results, Page 8).

      Recommendations

      1. Larger and Diverse Cohort Studies:<br /> Conduct studies with larger and more diverse cohorts to bolster the reliability and applicability of the findings across various population subsets.
      2. Longitudinal Studies:<br /> Extend the study duration to capture long-term effects of IPE on lipoprotein profiles and cardiovascular health outcomes.
      3. Mechanistic Pathway Research:<br /> Incorporate omics approaches (genomics, proteomics) to unravel the underlying mechanisms modified by IPE that contribute to reduced CVD risks.
      4. Personalized Medicine Approaches:<br /> Develop stratified medicine approaches to optimize IPE dosage and treatment protocols tailored to individual lipidomic profiles and genetic backgrounds.
      5. Detailed Biophysical Characterization:<br /> Enhance the biochemical and biophysical characterization of proteoglycan binding and lipoprotein aggregation properties altered by IPE supplementation.

      Minor Revisions

      1. Textual and Formatting Errors:<br /> Ensure consistency in figure label fonts and styles across the manuscript. Correct minor typographical errors and ensure uniformity in section formatting (e.g., use of italics, bold). Specific errors include inconsistent capitalization in headings and figure labels requiring standardization (Introduction, Page 2; Results, Page 8).

      2. AI Content Analysis:<br /> Estimated AI Content: Approximately 10%. Highlighted AI-Detected Sections: Notable in the background and introduction sections with possible AI involvement in text generation. Assessed Epistemic Impact: The AI-generated content does not undermine the scientific rigor but would benefit from expert revision to enhance field-specific terminology and depth.

      Overall, the preprint presents insightful preliminary findings on the cardioprotective impacts of IPE supplementation; essential improvements and comprehensive validation are recommended ahead of future, more extensive studies.

    1. On 2024-12-09 20:43:18, user Louis El Khoury wrote:

      Usually methylation changes in response to an environmental factor are slow. How is it possible that there is enough methylation change during the course of a single match to reduce the epigenetic age, and then a return to baseline 24 hours later? This is not clear in the discussion section.

    1. On 2025-02-12 19:57:09, user Aron Troen wrote:

      Review Part I: Overview

      Careful, comprehensive, and accurate evaluation of the emergency food supply available to conflict affected populations is crucial for the design and implementation of an effective humanitarian response in any war.

      This study claims to model the caloric content and diversity of the food delivered to the Gaza enclave from October 2023 through August 2024 of the current war, and asks whether it was sufficient to provide for the needs of Gaza’s population.

      To do so, the researchers construct a “retrospective model” of the per-capita calorie supply over time incorporating:

      • A simulation of the baseline food supply at the onset of the war, and its depletion during the initial phase of the war, consisting of: assumed household stocks of humanitarian food aid (2.3.1); data on the capacity of UNRWA and WFP warehouses before the war (2.3.2); estimated private food stores (2.3.3); and estimated agriculture and livestock production before the war, with its estimated rate of decline (2.3.4).
      • A simulation of daily, age- and sex-adjusted pre-war per-capita food intake of the Gazan population (rather than their consensus humanitarian requirements), based on the distribution of intakes derived from a health survey of non-communicable disease conducted among adults in Gaza in 2020, during the COVID pandemic.
      • Selected, partial data on the supply of humanitarian food aid by UN agencies, and assumed geographic distribution of the food supply and population.

      Unfortunately, the study suffers from fundamental flaws which invalidate its findings and conclusions.

      In any model, simulations depend heavily on the validity of the selected data and of each of the model’s assumptions. This study makes multiple assumptions and relies heavily on data from the UNRWA dashboard, which the authors and UNRWA acknowledge to be incomplete, and whose reliability is controversial. Notably, the UN data do not fully cover private sector food delivery, which comprises a large proportion (up to 40%) of the total available food supply. The study does not make a serious effort to analyze additional data from COGAT that includes more complete coverage of the food supplied to Gaza. Of the UNRWA data analyzed, the researchers assign food weights to pallets that underestimate the weight of food provided by as much as half (!) according to publicly available UN food supply requirements. These and other significant limitations, detailed below, are enough to raise serious concerns about the validity of the findings, and to limit the conclusions that may be reliably drawn from them.

      However, an even more basic question must be asked: Why simulate or model the calorie supply, with all the uncertainty that the model’s multiple assumptions introduce into the findings, if the available energy can be simply calculated from the reported weight and type of foods supplied to Gaza, which can then be compared to the humanitarian standards for the energy requirement of emergency-affected populations?
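The direct calculation the reviewer proposes is easy to sketch. The figures below are illustrative placeholders only (the tonnage and energy-density values are hypothetical, not data from the study); the ~2,100 kcal/person/day benchmark is the standard humanitarian planning requirement for emergency-affected populations.

```python
# Hedged sketch of the direct calculation the review proposes: convert reported
# food tonnage into per-capita daily energy and compare it with the standard
# humanitarian planning requirement (~2,100 kcal/person/day). All supply
# figures below are illustrative placeholders, not data from the study.
ENERGY_KCAL_PER_KG = {        # typical energy densities (approximate)
    "wheat flour": 3640,
    "rice": 3600,
    "pulses": 3400,
    "vegetable oil": 8850,
    "canned goods": 1500,     # rough placeholder for mixed canned items
}

supplied_tonnes = {           # hypothetical monthly deliveries, in tonnes
    "wheat flour": 20000,
    "rice": 5000,
    "pulses": 4000,
    "vegetable oil": 2000,
    "canned goods": 6000,
}

population = 2_200_000        # approximate population of Gaza
days = 30                     # one month of deliveries

total_kcal = sum(
    tonnes * 1000 * ENERGY_KCAL_PER_KG[item]   # tonnes -> kg -> kcal
    for item, tonnes in supplied_tonnes.items()
)
per_capita_daily = total_kcal / (population * days)
print(f"{per_capita_daily:.0f} kcal/person/day vs ~2100 kcal requirement")
```

The point is methodological: given reported weights and food types, per-capita energy follows from arithmetic alone, with no need for the layered assumptions of a retrospective simulation.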

      Some of the limitations of the data and the uncertainty of the results are listed by the authors. However, merely acknowledging limitations is not sufficient to justify overreach in the discussion of the results and their policy implications. In their conclusions, the authors suggest that their study provides valid and useful evidence for a “forensic analysis” of claims that Israel has deliberately starved Gaza’s population, concluding that “Israel, as the de facto occupying power, did not ensure that sufficient food was consistently available to the population of Gaza…”. They further state that their findings will be used to estimate the “resulting effect on nutritional outcomes among Gazan children”. These conclusions are not supported by the findings and appear to reflect political motivation and bias. Indeed, contrary to the portrayal of the results, it is remarkable that the model shows that the overall caloric supply to the emergency-affected population of Gaza was adequate during the majority of the period analyzed, despite a brief shortfall, even with intense combat between Israel and Hamas, and despite the limitations of the model’s questionable assumptions and data.

      Presenting simulations with greater certainty than they merit can be harmful. Past simulations made by the authors about the war in Gaza have proven erroneous (for example, in February they projected that total deaths from the conflict would reach between 58,260 and 85,750 by August, whereas even the problematic Gaza MOH (Hamas) eventually reported a significantly lower number of 39,623 for the same period; see for example: https://gaza-projections.org/; https://www.washingtoninstitute.org/policy-analysis/gaza-fatality-data-has-become-completely-unreliable; https://henryjacksonsociety.org/publications/questionable-counting/ ). The gap between the authors’ past projections on the war and the available information ought to have given them pause before publishing highly consequential political conclusions from tentative simulations. The gravity of the crisis is severe enough without magnifying the uncertainty surrounding the available data. For a discussion of the harms associated with conflating simulated projections with reality, see, for example, Beyar R, Skorecki K. Concerns regarding Gaza mortality estimates. Lancet. 2024 Nov 16;404(10466):1925-1927. doi: 10.1016/S0140-6736(24)01683-0. More recently, a US State Department statement reprimanded the irresponsible exaggeration of the food crisis by one of the key international humanitarian NGOs that provides data to the IPC: “At a time when inaccurate information is causing confusion and accusations, it is irresponsible to issue a report like this. We work day and night with the UN and our Israeli partners to meet humanitarian needs — which are great — and relying on inaccurate data is irresponsible.” ( https://il.usembassy.gov/statement-from-u-s-ambassador-jacob-lew-on-fews-net-report/ ).

      Instead of providing clarity based on credible and verifiable research and analysis, this exercise is used for political advocacy in belittling the very serious challenges faced by Israel, humanitarian agencies and the private sector, who collectively have supplied massive quantities of food to the emergency-affected population of Gaza, despite the intense and ongoing war. It is always difficult to obtain accurate information during a war.

      Real-time projections that recognize the inevitably incomplete data (beyond lip-service), with carefully stipulated assumptions and caveats, can be useful to inform prospective decision-making and humanitarian efforts in the face of uncertainty. In contrast, “retrospective modelling” based on blatantly cherry-picked data, questionable assumptions, and presenting simulated outcomes as truth to reach politically charged conclusions does not advance scholarly discourse, and has pernicious real-world consequences.

      Comments on the Introduction<br /> The objectives of the study are not explicitly stated in the introduction. While the authors’ justified dismay over the humanitarian crisis in Gaza and their aim of assessing the food availability of Gaza is clear, the framing of the introduction (as well as the discussion and conclusions of the paper) is selective and tendentious, leaving the impression that rather than evaluating the food supply to Gaza during an intense conflict in order to provide valid scientific insight for improving the humanitarian response, the study is an exercise in political and ideological advocacy under the facade of academic research and analysis.

      The highly selective introduction obscures more than it illuminates. It begins by asserting that “the population of the Gaza Strip has experienced seven decades of protracted conflict”. These seventy years (!) conflate fundamental historical transformations, from the time when Gaza was under Egyptian control until the 1967 war, in which Israel occupied Gaza and the West Bank, followed by the October 1973 war, the first Palestinian intifada (1987-1991), the Oslo Accords (1993) in which the Palestinian Authority was created and assumed control over Gaza (1994-2006), the second intifada (2000-2004) and Israel’s full unilateral withdrawal from Gaza (2005), the violent Hamas takeover in 2007, thousands of rockets launched at Israel and ensuing small wars, and Hamas’s construction of a vast underground military complex under Gaza. Reducing this long and complex history to a simple story of protracted conflict and implied victimization elides complex dimensions including rapid population growth from ~250,000 Gazans in 1950 to ~2.2 million in 2023, major improvements in health and nutrition achieved through cooperation between Palestinian and Israeli health professionals, and significant economic, social and political developments (for example: “Health in the occupied Palestinian territories”; Tulchinsky, Ted H et al. (2009) The Lancet, Volume 373, Issue 9678, 1843).

      Mention of “70 years of conflict” is followed in the same breath by “16 years of enforced restrictions on trade and the movement of people and goods, including food [1]”. The reference given for this statement, authored by the UN Conference on Trade and Development in January 2024, is a preliminary analysis of the impact of the current war on the destruction in Gaza. It does not mention restrictions on food. On the contrary, it refers to the massive provision of (food) aid to Gaza by the international community. Moreover, there is no mention that the 16 years of restrictions on Gaza were a response to the election of Hamas, a jihadist terror organization not only dedicated to the destruction of Israel, but also at odds with the PLO-led Palestinian Authority, which it violently overthrew in Gaza in 2006-2007. There is also no mention that during the 16 years since seizing power, Hamas instigated recurring wars against Israel in 2008-2009, 2012, 2014, 2021, and finally in October 2023. This glaring omission leaves the impression that the restrictions on Gaza were arbitrary.

      Hamas is only mentioned in a passing reference to “the 7 October Hamas attacks”, which serves as a point of departure for describing the massive destruction and harm inflicted on Gaza by Israel. There is no mention anywhere in the article of the responsibility of the Hamas government in Gaza for the consequences of its failed governance for its own civilians’ welfare (https://www.nytimes.com/2024/09/13/us/politics/hamas-power-gaza-violence-israel.html). The absence of central details of the attack on Israel, which continued long after October 7th – over 1,200 people brutally murdered and mutilated, 255 abducted, as well as the parallel bombardment of millions of Israelis with thousands of rockets and missiles – is a remarkable omission and reflects the biased political approach. Similarly, in framing the Israeli response as “large-scale aerial bombing and ground operations,” there is conspicuously no reference at all to the dilemmas posed by Hamas’ strategy of (ab)using the civilian population under their control as human shields, the hostages held by Hamas, rocket launchers, and an estimated 500 kilometres of underground military infrastructure constructed by Hamas under hospitals, schools, mosques, residences and agricultural areas in Gaza (https://mwi.westpoint.edu/gazas-underground-hamass-entire-politico-military-strategy-rests-on-its-tunnels/). In artificially removing this core information from the framing of the article, the rationale of Israel’s response and strategy in seeking to disarm Hamas is also erased, preventing credible analysis of this complex tragedy, including its impact on food availability.
<br /> The introduction proceeds to provide fatality figures in politically salient terms: "Israel has conducted large-scale aerial bombing and ground operations in Gaza, resulting in at least 41,272 deaths". The citation of a UN source for this figure creates the misleading perception that these claims come from a neutral source and were verified by the UN. However, OCHA cites these numbers with the disclaimer: "according to figures of Gaza's Hamas-run Ministry of Health, which have not been independently verified and may include Palestinian combatants who were killed." Notably, the authors fail to mention the IDF estimates that 17,000-20,000 combatants were killed during this period; combined with a natural death rate of ~5,500 people per year, the civilian death rate is lower than implied, although terrible enough without need for inflation.

      In the second and third paragraphs, the introduction does provide background describing the baseline nutritional status of Gaza’s population and the reported impact of the war. However, many of the statistics cite UN reports which are not always verifiable or impartial, and the presentation is selective, uncritical, and at times inaccurate. For example, the introduction states on p2, lines 26-29, that "by December 2023, those who remained [in North Gaza and Gaza City governorates] appeared largely cut off from aid", because "the UN Relief and Works Agency for Palestine Refugees (UNRWA) last delivered food to the north on 23 January 2024, being then barred from further deliveries, while the UN World Food Programme (WFP) ceased its food convoy operations to the north on 20 January [21], only resuming these on a limited basis in March." This implies that between 23 January and sometime in March no food was supplied to the two northern governorates, when in fact COGAT reports on private sector delivery of at least 150 food trucks to the north in this period (https://gaza-aid-data.gov.il/media/qtvbs5u0/humanitarian-situation-in-gaza-cogat-assessment-mar-15.pdf).

      The introduction places the onus for all food scarcity on Israel, asserting for example that “Israel has placed enhanced restrictions on aid flows and distributions, closing all but two southern crossing points into Gaza up to May 2024 and rejecting multiple consignments for ostensible security reasons [18]." This arguably misrepresents the complex and objectively challenging situation, including attacks, looting and hoarding of aid by Hamas, and omits the well-documented controversy and contrary evidence. Furthermore, the authors fail to mention that Erez crossing was destroyed by Hamas terrorists during the October 7th attack on Israeli borders and that this is the reason it was closed. Moreover, prior to the war, Erez was a pedestrian crossing, and extensive work by Israel in collaboration with the US, Jordan and international agencies, allowed its reconstruction and opening in April 2024 as a truck crossing.

      On the specifics of food supply, the introduction cites IPC projections issued in December 2023 and March 2024, but ignores the FRC report published on June 4 (https://www.ipcinfo.org/fileadmin/user_upload/ipcinfo/docs/documents/IPC_Famine_Review_Committee_Report_FEWS_NET_Gaza_4June2024.pdf) acknowledging that the previous analyses were based on significant undercounting of the amount of aid. <br /> Furthermore, the authors fail to note that IPC reports are intended to sound the alarm and mobilize international action to prevent famine before it occurs, because once it occurs, it is often too late to save lives of those acutely affected. Despite the institutional processes designed to obtain political and technical consensus, such reports are often based on inevitably flawed and limited data from actors involved in the conflict. Given the contentious nature of the war in Gaza, projections made by the IPC and others have often been conflated with the actual situation, and abused to advance political agendas. [See for example: GM Steinberg and LD Klaff, “Politicization of Tragedy: The Case of the Gaza Conflict and Food Aid” in The American Journal of Clinical Nutrition 120 (2024) pp. 749-750; and a critique of the reports by Caner, INSS special publication July 2024 ( https://www.inss.org.il/wp-content/uploads/2024/07/special-publication-240724-1.pdf ); and by the Israel Ministry of Foreign Affairs https://www.gov.il/en/pages/transparency-and-methodology-issues-in-the-ipc-special-brief-of-18-march-2024 and https://www.gov.il/en/pages/the-third-ipc-report-on-gaza-june-2024-3-sep-2024 ]. Unfortunately, this study echoes the tendentious discourse. Examples of its selective and misleading use of the IPC reports include:

      • "In December 2023 the Integrated Food Security Phase Classification (IPC)… classified 25% of the population in the northern governorates as experiencing catastrophic acute food insecurity, updating this projection to 55% in March 2024": Firstly, it is misleading to compare the "current" classification in Phase 5 in December (25%) with the projected classification in March (55%, although it was 50% in the actual report). The "current" classification in the March report was 30%. Secondly, and much more problematic, the article doesn't refer to the IPC reports which covered the period from March to September (published in June and October), which pointed to a steady decline in the population classified in Phase 5 to 15% in June and 6% in September-October.<br /> • "In March 2024 Oxfam claimed that the population in northern Gaza had only 245 Kcal per person-day available": apart from the referral to March, the press release cited here does not meet basic academic standards (https://www.oxfam.org/en/press-releases/people-northern-gaza-forced-survive-245-calories-day-less-can-beans-oxfam). Although it says that "Oxfam’s analysis is based on the latest available data used in the recent Integrated Food Security Phase Classification (IPC) analysis for the Gaza Strip.", it seems to refer to a graph on page 8 of the March 18th report presenting similar numbers for Northern Gaza, yet no source is given for that graph, nor is it clear who conducted the analysis, based on which data, and using which methodology. The IPC report only describes the study in vague terms: "An in-depth analysis of the border crossing manifest allowed to generate approximate kilocalories values per truck and per unit of analysis then distributed per area, using information provided by OCHA and the Food Security Sector."
It should be noted that following criticism from Israel on this improper conduct which violated the IPC's standards of transparency, the subsequent IPC reports on Gaza omit any caloric analyses of aid. The 245 Kcal per person-day is about a quarter of the lowest figure for Northern Gaza in this article (1000 Kcal) which only highlights that Oxfam analysis is detached from reality and not worthy of being cited. This value is contrasted with “Israeli academics, working with data from the Israeli Ministry of Defence’s Coordination of Government Activities in the Territories (COGAT) agency, put this figure at 3160 for all of Gaza during January-April 2024 [25] (p2. l41).” The citation is out of date. A revised study assessing the food supply for the period of January-July 2024 is in press. The nationality of the authors of the cited research ought to be irrelevant.

      • "Since May 2024, the re-opening of crossings into northern Gaza and increased food deliveries appeared to mitigate food insecurity, though the IPC projected that 22% of Gaza would remain in catastrophic food insecurity conditions between June and September": However, the authors of this article downplay this acknowledgment of the improvement by citing a reference to a projection which proved drastically wrong. While the IPC report from June projected 22% in phase 5 in September, the IPC report published in October found that the actual share in September was 6%. However, the article does conclude that "a steep increase in food availability occurred from late April 2024, coinciding with the reopening of crossings into northern Gaza, and by June acute malnutrition prevalence appeared to be relatively low, despite very limited dietary diversity." Thus, based on the authors’ inclusion of this data, their reference to the 22% should be removed and replaced by the actual decline to 6% as reported in the October IPC report.

      • "the consumer price index for food rising from 210 pre-war to 600 by March 2024" citing the WFP's unofficial calculations. While it is true that according to the official statistics from the Palestinian Central Bureau of Statistics the price index for food nearly tripled from September 2023 to March 2024 following the outbreak of the war, the index subsequently decreased by 28 percent from 332.70 to 240.01 as the food supply improved during the analysis period (https://data.humdata.org/dataset/state-of-palestine-consumer-price-index).<br /> • An analysis of the IPC report from June by the Israel Ministry of Foreign Affairs highlights several positive trends in the IPC's main outcome indicators between March and July (https://www.gov.il/en/pages/the-third-ipc-report-on-gaza-june-2024-3-sep-2024). The positive trends reflect the impact of the humanitarian efforts which are analyzed in this study and which should not be ignored. <br /> If the purpose of the paper is to contribute to an understanding of how to fix the problem rather than the blame, then the framing of the introduction and subsequent discussion ought to recognize that Hamas exercises agency and has made decisions that have contributed to the plight of the Gazan population whom they govern, including with regard to the nutritional aspect of the humanitarian crisis. A more balanced study could be helpful to further understanding and foster cooperation instead of inflaming controversy. This would help address the present crisis and advance future rehabilitation.
In short, the introduction (and the rest of the paper) should present a balanced account of the knowns and unknowns regarding the present food security crisis, the challenges of obtaining valid and verifiable data, which also plagues the current analysis, and the need for clarity, specifically with regard to the adequacy of the international humanitarian effort in supplying food to the emergency-affected population.

    1. On 2025-02-13 03:06:12, user Metin Çinaroglu wrote:

      Update on Manuscript Status

      This manuscript was initially preprinted as part of its submission to another journal. Following substantial revisions, including the removal of one author (with consent) and significant modifications to the manuscript, it was subsequently resubmitted and accepted for publication in BMC Public Health. It is now in the process of publication.

      Since medRxiv does not allow withdrawals, we would like to note that this preprint does not fully reflect the final published version. Readers are encouraged to refer to the forthcoming article in BMC Public Health for the most updated and peer-reviewed version. Once available, we will provide the DOI for the published article.

      For transparency, we acknowledge the differences between this preprint and the final published manuscript and appreciate the understanding of the research community.

    1. On 2025-02-24 23:42:40, user Stephen Goldstein wrote:

      Manuscript summary

      The authors report a small study comparing patients with “post-vaccination syndrome” or “PVS” with vaccinated, healthy controls. They used a variety of immunological techniques and report they have identified potential immune signatures in PVS patients, which may reflect an underlying mechanism of this condition.

      Personal disclaimer

      This manuscript has received considerable attention and attracted much commentary, including critical commentary from myself on Twitter (@stgoldst). I was immediately skeptical of these findings given the attention they received, the small study size, and amplification by anti-vaccine activists. However, the potential for vaccine injury is a serious matter, so a rigorous review of this manuscript is critically needed. I attempt here to account for my biases, and as a check I used a Google AI model to conduct an orthogonal review, which is posted separately.

      Review

      Overview

      This study described by this manuscript is methodologically flawed to a degree that undermines the authors’ stated goal to identify biomarkers for post-vaccination syndrome (PVS). These flaws are systematic, ingrained into the study design, and compounded by analytic flaws throughout the manuscript. As is, this study provides weakly informative data at best towards understanding chronic illness following vaccination. The methodological flaws are listed below and subsequently expanded upon.

      1. PVS and control cohorts are very small, and even smaller when stratified by infection status.
      2. Prior infection status is poorly controlled – though this may be difficult to overcome.
      3. The study does not include a control group of unvaccinated individuals reporting similar chronic symptoms as the PVS cohort.
      4. PVS is defined by self-reported symptoms with no clinical assessment or classification system.
      5. Small effect sizes and weak correlations are repeatedly described via their statistical significance, with no biological context provided by the authors.
      6. The study provides no evidence for a causal link.

      1. PVS and control cohorts are very small, and even smaller when stratified by infection status.

      The PVS cohort comprised only 44 patients originally, and was reduced to 39 due to pharmacological inhibition in 2 patients. The authors acknowledge that due to the small size of the study and its exploratory nature they did not conduct a power analysis, and they acknowledge the difficulty of producing robust results at this sample size. Despite acknowledging these problems, the authors repeatedly invoke the statistical significance of various analyses and in some cases rely on extremely involved statistical testing to identify weak signals. This gives the impression that the authors understood from the start that the study could not be informative, yet pressed ahead anyway.
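The underpowering concern can be made concrete with a back-of-the-envelope calculation. This is a hedged sketch using a normal approximation (not anything from the manuscript itself): the group sizes of 39 echo the PVS cohort, and the "medium" effect size d = 0.5 is an arbitrary illustrative choice.

```python
from math import sqrt, erf

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(d, n1, n2, z_crit=1.959964):
    """Normal-approximation power of a two-sided, alpha = 0.05 two-sample test
    for a standardized effect size d (Cohen's d). Slightly optimistic relative
    to the exact noncentral-t calculation at these sample sizes."""
    nc = d * sqrt(n1 * n2 / (n1 + n2))  # noncentrality of the test statistic
    return phi(nc - z_crit) + phi(-nc - z_crit)

# Two groups of 39 looking for a "medium" (d = 0.5) effect: power is only
# around 60%, i.e. a real effect of that size would be missed in roughly
# four out of ten such studies -- before any correction for the many
# comparisons performed.
print(round(approx_power(0.5, 39, 39), 2))
```

Stratifying these cohorts by infection status shrinks the subgroups further, driving power lower still.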

      2. Prior infection status is poorly controlled – though this may be difficult to overcome.

      The authors stratify the cohorts by infection status, with the primary determination based on serological status of anti-nucleocapsid (N) antibodies. The study participants were recruited in December 2022 at the earliest, nearly 3 years after the first SARS-CoV-2 infections were identified in the United States. Given the expected decline in serum antibody titers over time, it’s likely that people infected in the first year of the pandemic (and possibly even later into the pandemic) would test seronegative. Therefore, the -I cohorts likely include individuals who were in fact infected with SARS-CoV-2 at some point. This is a critical issue. The number of individuals without infection history is likely even smaller than presented, reducing the utility of stratification. In addition, this may actually confound the ability to disentangle the effects of vaccination vs infection in the development of chronic illness. It would be difficult to methodologically correct for this without a prospective longitudinal study. However, larger sample sizes might allow researchers to mitigate its impact. Given these sample sizes and the inability to reliably sort by prior infection status, the issue precludes making robust inferences from the data.

      3. The study does not include a control group of unvaccinated individuals reporting similar chronic symptoms as the PVS cohort.

      The authors describe the health of study participants based on GH VAS scores and note that PVS participants were in worse health than the control participants. In the Discussion, the authors expand on this, noting that PVS participants also had worse health than the U.S. general population. Given the real potential for other disease processes to affect every one of the biomarkers tested, the lack of unvaccinated, chronically ill participants (reporting the same syndromic profile as PVS patients) confounds any association between these biomarkers and vaccination. As a result, the study analyses are uninterpretable with respect to the impact of vaccination on health.

      4. PVS is defined by self-reported symptoms with no clinical assessment or classification system.

      PVS was previously described by some of the same authors based on self-reported chronic sequelae following vaccination. This definition is then relied upon in this study. However, many of these symptoms are non-specific and certainly there is no evidence, given the lack of complete overlap, that they represent a single syndrome. There does not appear to be any clinical assessment to verify any of them. This is a repeated issue with descriptive studies of long covid (PACS) and now PVS, and I acknowledge the inherent challenges in establishing other criteria. Nevertheless, it represents a major problem in trying to describe a unified syndrome downstream of vaccination.

      5. Small effect sizes and weak correlations are repeatedly described via their statistical significance, with no biological context provided by the authors.

      Throughout the manuscript the authors describe differences between the PVS and control cohorts solely through the p-value returned by statistical testing. Looking at the figures themselves, the effect sizes turn out to be extremely small in virtually every case. Small effect sizes don’t mean there is no biological significance, but the authors expend no effort to offer context or even a coherent hypothesis for why these effect sizes matter. Expecting the reader to favorably interpret the data, or indeed interpret it at all, based purely on p-values is… disconcerting. It’s not clear from the writing that the authors even consider effect sizes to be relevant, or whether a sufficiently small p-value is considered good enough to report and believe a major finding. I’m not confident that the authors really interpreted the data to any depth themselves.

      1. The study provides no evidence for a causal link.

      There is simply no causality evident in the data or really presented by the authors. Given the generally poor health of the PVS participants, the elevated inflammatory biomarkers and the elevated EBV reactivity could all be due to varied other disease processes, infectious or not. One clear example of this is Figure 4K, where the authors correlate EBVgp42 reactivity with the percentage of CD8+ T cells producing TNFα. The correlation R value is 0.47, indicating a weak to moderate link. Because EBV reactivation is tightly linked to general stress, the weakness of this correlation is highly suggestive of other disease processes making a significant contribution, or of the PVS link being artifactual. The authors make no effort to account for this.
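      To put that correlation in concrete terms, using only the R value quoted above:

```python
# The correlation reported for Figure 4K, as quoted above.
r = 0.47
r_squared = r ** 2  # share of the variance in one measure tracked by the other

print(f"R^2 = {r_squared:.2f}; ~{1 - r_squared:.0%} of the variance is unexplained")
```

An R of 0.47 leaves roughly 78% of the variance unaccounted for, consistent with other processes driving most of the signal.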

      Specific Points

      References 16 and 18 need to be corrected

      “interaction with full-length S, its subunits (S1, S2), and/or peptide fragments with host molecules may result in prolonged symptoms in certain individuals16.”
      -Ref16 is a study describing circulating spike and S1 following vaccination, but does not mention anything about prolonged symptoms.

      “Recently, a subset of non-classical monocytes has been shown to harbor S protein in patients with PVS18.”
      -Ref18 is a study on PACS (post-acute covid-19 sequelae) and does not mention vaccination or post-vaccination syndrome
      -Ctrl+F for “vaccine” “vaccination” “PVS” returns no results in this manuscript

      Figure 3 on the kinetics of serological findings is generally confusing
      -For the Control and PVS+I groups the authors report no decline in anti-spike antibodies over the course of months to a year.
      -This runs counter to basic immunological principles and to robust, repeatable findings with respect to anti-SARS-CoV-2 spike antibodies in particular
      -One explanation for this would be subsequent mild infections that boost antibody levels, but there are no spikes evident, rather a steady maintenance.
      -The exception to this is the PVS-I antibodies, which decline at what is to the naked eye a normal rate.
      -This suggests an issue with the Control or PVS+I cohorts, or a disturbing indication that they are not representative of the immunological state in their respective populations. Due to the small sample size, this seems likely.
      -The authors should explain that because the PVS-I participants weren’t infected, their “days since post-exposure/vaccination” data are identical. Absent that, it’s confusing to notice that the PVS-I data in rows B and C are identical, which raises concern about duplication in figures.

      The authors don’t describe the rationale for the EBV coinfection analysis displayed in Figure 4, and so there’s no way for the reader to interpret what (if any) significance to ascribe to it.
      -Figure 4D shows a small but statistically significant increase in IgG against EBVgp42 for the PVS cohort relative to controls – however...
      -When the PVS cohort is stratified by prior infection status there is no statistically significant difference
      -This makes it really difficult to interpret the difference when the PVS group remains together
      -It raises the question of whether the statistical significance is just sensitive to the number of data points, which for me makes it not robust
      -Again – as throughout the paper, no biological context is given

      Even the correlation between EBVgp42 in serum and EBVgp42 antibody reactivity is low
      -Again, very difficult data to interpret, and unclear what the biological significance would be
      -Problems with the correlation analysis in Figure 4K were discussed above

      Figure S4C is discussed in the text, but only briefly, and important data is ignored
      -It appears true that PVS participants have elevated autoantibodies of the IgM and IgA isotypes, but their IgG autoantibodies are actually similar to controls
      -Not clear if there might be a class-switching defect that could be related to a pathogenic process, or some other explanation – the authors don’t address this
      -The authors just say PVS patients have autoantibodies, which obfuscates their own data showing that the effect is isotype specific

      The interpretation of Figure 5C is also strange – most PVS patients have no circulating anti-S1 antibodies, and the statistically significant difference is driven by a minority who do
      -The authors state there’s a difference without any effort to interpret it
      -This suggests that PVS, which the authors are trying to characterize as one syndrome, is either not one thing, or the presence or absence of anti-spike antibodies is ancillary
      -Unfortunately the authors gloss over any nuance in the data

      The data on specific biomarkers in Figure 5H is based on such small sample sizes that I question whether it was appropriate to do this analysis at all
      -To be clear, the issue isn’t whether the question is worth asking; it is. The issue is that one should not do an analysis so underpowered that it is definitionally uninterpretable
      -The fact that the authors had to jump through statistical hoops to find a statistically significant effect is concerning
      -The fact that this includes a subgroup of only three patients is just methodologically inappropriate

      That the authors’ use of machine learning failed to reveal any coherent set of biomarkers further argues against the contention that PVS is a definable syndrome
      -Or, alternatively, it shows that this study is so small it lacks value in defining the syndrome

      Final summary

      Ultimately this study adds little value, at best, towards understanding the post-vaccination sequelae experienced and reported by some individuals. At worst, it injects unfounded claims and interpretations into the field and the discourse, and will ultimately slow efforts to help patients. These results have already been used to advance anti-vaccine narratives in online discourse. If the data were robust, no one could complain. Because the data are not, it is tragic. Ultimately, there is no compelling evidence in this paper for an immunological signature associated with chronic illness following vaccination. Perhaps reflecting this, the authors provide almost no biological context for any of their findings, often reporting data merely as a p-value with no comment on the effect size (whether large or small). This leaves it unclear to a reader whether the authors are even aware of the flaws in their work. Given the methodological flaws of this study, it is a questionable investment for researchers to follow up on it in a targeted way. Rather, well-powered, controlled, and methodologically sound studies should be conducted at scale to enable actionable findings.

    2. On 2025-02-28 22:17:07, user Brian wrote:

      I’m a nobody, but I’m able to use the resources at my disposal to better understand this study. I have constructed the following logical explanation, to the best of my knowledge and understanding, and I thoroughly invite anyone to dismantle it.

      It found that CD4 T cells are reduced and TNFa-producing CD8 T cells are increased. It found that cDC2 cells were reduced while non-classical monocytes were elevated. It also found that elevated cytokines and IgG subclass shifts did not occur; in a healthy immune system, elevated cytokines and IgG subclass shifts indicate a healthy immune response. Furthermore, a reduction of cDC2 cells means that without sufficient numbers of cDC2 cells, the body struggles to activate T cells effectively, which is key for a strong immune response. Next, elevated non-classical monocytes mean that the body is in a state of immune activation, but instead of responding efficiently to the threat (due to a lack of other immune cells like cDC2 cells), the system is stuck in a more passive or inflammatory state. And let’s not forget that AIDS is characterized by a reduction of CD4 T cells and elevated TNFa-producing CD8 T cells. I rest my case.

    3. On 2025-03-01 16:56:25, user andreaclovephd wrote:

      Does this actually identify persistent immune dysfunction after COVID-19 vaccination? No.

      The big takeaways:

      The study did not accurately correct for past infection. The methods used to “exclude” past infection are not accurate–the data presented suggest everyone has a similar history of past infection, which means the PVS symptoms reported by participants cannot be attributed to vaccination.

      The study didn’t actually assess T cell exhaustion. This would have needed to show markers of T cell exhaustion (TIM-3, CTLA-4, PD-1, etc.) combined with impaired function: cytokine levels, proliferation, metabolic defects, & gene expression changes. They don’t do any of this. IFN-γ and TNF-α are comparable between groups and suggest activated T cells, not exhausted ones.

      They did not use a method to assess EBV reactivation. They assess serology, not EBV replication, which is required to show reactivation.

      CD4 T cell populations aren’t meaningfully different between groups & are within normal ranges for healthy individuals.

    4. On 2025-04-09 10:34:28, user M. Key wrote:

      As someone who has experienced this, I am so grateful this research is being done.

      I would like to put forward that, in some cases, what is actually happening is very long Epstein Barr. That is, every exposure reactivates Epstein Barr. Symptoms peak a few weeks after, then slowly resolve after a matter of many months, leaving more residual damage each time. This means that while the experience is chronic, it also makes it possible to trace it directly to certain events like testing positive for covid or getting a booster shot.

      As someone who is vehemently pro-vax, I had been getting all the boosters.

      If, for some people, this is an extended expression of both covid and EBV, and EBV is also known to create a host of other autoimmune conditions, then it seems like focusing on EBV is important. From the patient end, that seems a bit like a black hole; it doesn't seem like there's an established diagnostic or treatment pathway, so it's ignored. So I hope this focus continues.

    5. On 2025-08-13 20:05:34, user Zach Hensel wrote:

      This preprint was cited in a movie that was released on streaming media platforms today called "Inside mRNA Vaccines - The Movie". The movie was produced with substantial participation by the REACT19 organization, with which at least two of the authors of this study are affiliated.

      The declaration of interests section of this preprint does not include the authors' interest in the new movie, and the movie attributes the result to "a Yale preprint" without noting the involvement of REACT19 in recruiting for the study.

      To say the least, the movie is problematic on the facts. It is being most heavily promoted by Peter McCullough, who is currently selling the "Ultimate Spike Detox" supplement for only $80.99 every 30 days.

      Another movie was released on streaming video from the same production team last month ("Inside the Vaccine Trials—Lived Experiences") and also features study author Brianne Dressen. Dressen is thanked for her contributions in the credits for both movies.

    1. On 2025-03-14 10:52:07, user Sasan Hekmat wrote:

      The discussion of the “Mostaan 110” device is particularly problematic; the paper relies on this debunked technology as a symbol of science-related populism despite clear evidence that the Iranian Ministry of Health has rejected it, thereby misrepresenting the facts.

      One of the major weaknesses of the paper is its failure to clearly define central concepts like “science-related populism,” leaving readers with ambiguous terms that dilute the precision and impact of the argument.

      The manuscript’s reliance on media reports and non-peer-reviewed sources to substantiate key claims undermines its scientific rigor, as these types of sources are inherently more prone to bias than rigorously vetted academic literature.

    1. On 2025-03-21 22:17:58, user Catherine wrote:

      Oestrogen is an inflammatory hormone, whereas progesterone is the opposite. I would suggest that use of progesterone is the way forward, not oestrogen.

    1. On 2025-04-08 13:30:55, user Malin wrote:

      Interesting study! The method description is quite limited, what was the size and material of the funnel? What was the airflow through the funnel and the residence time for aerosol particles from exhalation to detection?

    1. On 2025-04-10 16:27:18, user Epidemiologist wrote:

      This is a phenomenally bad study, which contains stark evidence of its bias in the Figure purportedly supporting its conclusions. To summarize:
      1. They compare two groups of hospital employees: those who received a trivalent, inactivated influenza vaccine (82%) and those who sought an exemption (18%).
      2. As hospital employees, they are aware of the extent to which their work puts them at risk of exposure, but the investigators make no effort to determine differences between these groups beyond very crude categorizations.
      3. They find that, after 100 days, they see higher influenza rates in the vaccinated.
      4. They provide no plausible explanation as to how the inactivated vaccine puts one at increased risk of influenza 100 days after vaccination.
      5. That means the ONLY plausible explanation for a significantly higher risk in the vaccinated is a significantly higher exposure risk in the vaccinated. Ergo, the sample is biased.
      6. It is notable that the infection rate among the vaccinated was only 2.5% in a high-risk setting for infection.
      7. In sum, the best explanation for their results is that the vaccine was very effective and their sample was biased.

    1. On 2025-04-30 01:56:31, user Steve Kirsch wrote:

      Why isn't every state doing this? Why isn't the CDC publishing the brand comparison using the Medicare data?

      This was a carefully done study that took a long time to do. The lengths they went through to do the matching were extraordinary. Retsef did that so the study couldn't be attacked. Having negative controls was excellent; so rarely do you see that in a paper.

      These odds ratios show the Pfizer shots are too deadly to be used; vaccines are never supposed to increase your all-cause mortality. We can only wonder if the CDC will at least warn people that the paper might be right. The CDC has ALWAYS had the data to be able to replicate this study and prove to us that the shots worked. They have access to the Medicare records and could easily replicate the study.

      The only study I've seen doing brand comparisons was fraudulent because they did the Cox coefficient adjustments by assuming vaccines don't increase non-COVID all-cause mortality as their negative control. So they assume away the outcome.

      In my view, this is the most important paper of the COVID pandemic so far because it shows that, at a minimum, all papers claiming huge benefits could be wrong. One thing is for sure: both positions can't be right. There is only one truth here, and this paper is consistent with the record-level data in the Czech Republic, which showed ACM differences by brand as well.

    2. On 2025-05-23 02:41:40, user Robert C. Speth wrote:

      Aside from making unfounded claims of adverse effects of vaccines to distract from the lifesaving benefits of vaccination against COVID-19 and other communicable diseases, the data presented by Levi et al. to support the conclusion that more people die after receiving the BNT162b2 mRNA vaccine compared to the mRNA-1273 vaccine are dubious.

      The authors report a population of 9,162,484 Floridians who were immunized with these two vaccines; however, data comparing matched cohorts - the primary information conveyed in this report - is given for only 1,470,100 Floridians, 16% of the total population receiving these vaccinations. Discarding 84% of the population opens up the possibility that the data was cherry-picked to match the authors’ hypothesis. Indeed, in Supplement Figure 3, which is not mentioned in the main text of the report, the results for the entire 9,162,484 Floridians who were vaccinated with these two vaccines are opposite to those reported in the manuscript. It shows that vaccination with the mRNA-1273 vaccine increased the risk of all-cause deaths, cardiovascular deaths, and non-COVID-19 deaths in the 12-month period following the second vaccination.

      Of greatest concern, however, is the failure of the authors to compare death rates of non-vaccinated matched controls with those vaccinated with either of the two mRNA vaccines. It is well established that millions of lives were saved by vaccination with these two vaccines, which is of much greater significance than any small difference in the efficacy of the BNT162b2 and mRNA-1273 vaccines to protect against a subsequent variant of SARS-CoV-2.

    3. On 2025-07-03 22:24:25, user Madison Gammill wrote:

      This is harmful information. Taking numbers of how many people got COVID-19 immunizations and the number of people that died within the given timeframe is the epitome of correlation is not causation. The "non-COVID-19" death group is defined as deaths that did not have an ICD-10 label of COVID as the primary or underlying disease. This implies that any death could've been accounted for in this group, including car accidents, trauma, old age, etc. There is no way to infer that the vaccine causes death without tissue samples, pathology, etc. Not just putting numbers on a graph. Also, any valid study should have a control group. Period. The whole point of this study was to determine if the vaccine causes deaths, and not including unvaccinated people throws these results out the window.

    1. On 2020-04-18 18:54:22, user Tomas Hull wrote:

      Germany had a similar study, published on April 9, in which they combined the antibody test with the polymerase chain reaction test in active infections.

      Nature reviewed both this study and the one from Germany, in which the combined results in a town of 12,000 people revealed an overall 15% infection rate.

      https://www.nature.com/arti...

    2. On 2020-04-19 05:26:23, user David Feist wrote:

      This study has a very high probability of being correct in my opinion, as it is in line with three or four recent seroprevalence tests (conducted by experts). The Gangelt, Germany test suggested that 15% of the population was infected and that under-reporting was at a similar level to that in California.

      The high levels of infection explain, of course, why the pandemic turned down two weeks BEFORE the lockdown in Wuhan (Wittkowski, medRxiv, April 2020); herd-immunity levels of infection were probably being reached. If there were actually 50 times more cases in Wuhan (i.e. at least 2.5 million people infected out of a population of 10 million), herd immunity may have been reached - if a further 30-35% of residents had cross-immunity from prior exposure to cold coronaviruses.

      The IFR is also now in line with what this study's primary author, John Ioannidis, predicted in the beginning from the Diamond Princess cruise ship data, namely about 0.1%. No Government in the world should have commenced lockdowns based on that February, 2020 data and prediction. Mr Ioannidis should be consulted in the future.

    3. On 2020-04-19 08:32:24, user Matthew Markert wrote:

      What is the specific data on cross-reactivity of other CoV strains on this Ab test (Premier), and what is the expected or known prevalence rate of those strains in the background population?

      If that is unknown or unknowable, can you instead run PCR on all the tested Ab samples for other common CoVs? If you can rule out that as an underlying confounder, or can show that they are present in people who tested negative for SARS-CoV-2, it would strengthen the data.

      As written, and also for other reasons stated elsewhere (including your reported false positive rates and potential to explain a section of the 50 cases), the true positivity rate remains unclear. https://uploads.disquscdn.c...

    4. On 2020-04-19 12:55:20, user C'est la même wrote:

      99.5% specificity in the general population is wildly optimistic compared to serology tests developed by other labs. Claims of "asymptomatic" carriers also reflect symptom-reporting biases (survey/questionnaire answers are not symptoms).

      I suggest caution in trusting serology-based studies like this, unless all positive cases are also confirmed by CT or RT-PCR testing.

    5. On 2020-04-19 14:45:07, user Tomas Hull wrote:

      1. This study is not perfect but no studies ever are.

      2. The study shows pretty close estimates of a mortality rate much lower than previously estimated, likely slightly above that of the seasonal flu (0.1%) and lower than the German study's estimate of 0.37% in a town of 12,000 inhabitants, where an accelerated infection likely happened due to the town carnival 2 months earlier.

      3. More similar study results will soon be published, including from L.A. County, the MLB organization in 27 cities, and many European countries, which will probably confirm that CoV2 is much more widely spread than initially thought, with an infection mortality rate slightly above the seasonal flu's 0.1% and below the 0.37% from the German town of Gangelt, where the much faster infection rate initially occurred due to the town festival in February.

    6. On 2020-04-19 14:46:45, user IJ wrote:

      A critical piece of information that is missing is the precise ad (graphics and all) that was used on Facebook, and the exact script that was followed when people responded to that ad. Since response bias is an important concern for this study, and since it is well known that even minor changes in how a question is asked can have a large effect on the results of a survey, this needs to be included as supplemental material.

      Of particular concern is how the ad shaped participants' expectations about whether they would be told the results of their own test. Since people who have had symptoms or who have had contact with someone who has tested positive are more likely to have been infected, and since those people might be more curious about whether they had been infected, the desire to find out their own results could significantly bias the results of the study. Ideally, to eliminate this effect, the ad would say up front that participants would not be told the results of their own test. In any case, the way the ad shaped expectations about this is important information that is necessary to interpret the results of the study.

    7. On 2020-04-19 16:32:58, user Robert S. wrote:

      What is the false positive rate of the test used? What is the cross-reactivity of the test with other coronaviruses that share sequence homology with the spike protein of nCoV-2?

    8. On 2020-04-21 20:17:21, user tom wrote:

      The test kits were marketed under Policy D, i.e. no FDA validation or review, not even a EUA. It defies reason that they could be relied upon merely on the word of the foreign manufacturer (Hangzhou Biotest Biotech) and a perfunctory (and potentially skewed) in-house specificity validation run on a mere 30 control samples. The UK bought millions of £ worth of antibody tests from Hangzhou Alltest Biotech (maybe an affiliate of HBB, as there are curious similarities between the two tests' package inserts) and then shelved them due to inadequate specificity.

      And how could a responsible researcher pre-print survey results based on unapproved rapid test kits without following up on the indicated positives by blood draw and ELISA, knowing full well the half-baked, incendiary results and conclusions would be picked up by media worldwide and potentially impact life-and-death decisions of massive scope by public health officials?

      It is an extraordinary - indeed sensational - claim that the entire world has missed a silent, benign spread of SARS-CoV-2 that's 40x larger than recognized, and it runs counter to significant evidence of the virus having an asymptomatic fraction similar to influenza based on contained, fully-tested outbreaks in the Diamond Princess, Roosevelt, and Skagit Valley choir. Extraordinary claims require extraordinary evidence; this work is, to put it euphemistically, certainly not that. If these results do not bear out, Stanford has some serious explaining and housecleaning to do.

    9. On 2020-04-23 23:47:11, user Tesla Coil wrote:

      In addition to the criticisms raised in other comments, I see a fatal flaw in the study's "Statistical Analysis" that I believe has not been raised (apologies if I have missed it).

      The authors appear to first re-weight the sample by demographic factors, and only then adjust for test sensitivity and specificity. This appears to me to be the obviously incorrect order.

      If, say, in the unweighted sample, the test's false-positive rate were 1.5% (which is within the 95% confidence interval of 0.1% to 1.7% calculated by the authors), and the authors only found 1.5% positive samples, the actual rate of true positives would be 0%. So for the unweighted sample, the lower bound for the prevalence of antibodies should be 0%. Any further re-weighting of the sample cannot change this, and the lower bound must remain 0%.

      However, as the authors re-weight the sample first, they apply the false positive rate of 0.1% to 1.7% to their re-weighted estimate of 2.8% positive samples.
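      A toy calculation makes the order-of-operations problem concrete. The 1.5% raw positive rate and the 0.1%-1.7% false-positive CI are taken from the comment above, and the 2.8% re-weighted estimate is the paper's figure as quoted there; subtracting the false-positive rate directly is a deliberate simplification of the full sensitivity/specificity correction:

```python
# Toy numbers illustrating the order-of-operations point (simplified correction).
raw_pos = 0.015         # crude positive rate in the unweighted sample
fpr = 0.015             # a plausible false-positive rate within the 0.1%-1.7% CI
reweighted_pos = 0.028  # positive rate after demographic re-weighting

# Correct order: adjust the unweighted sample for false positives first.
prevalence_adjust_first = max(raw_pos - fpr, 0.0)          # lower bound: 0%

# Order used per the comment: re-weight first, then subtract the FPR.
prevalence_reweight_first = max(reweighted_pos - fpr, 0.0)  # apparent ~1.3%

print(prevalence_adjust_first, prevalence_reweight_first)
```

Adjusting first gives a prevalence lower bound of 0%; re-weighting first manufactures an apparent ~1.3% prevalence that the unweighted data cannot actually support.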

    10. On 2020-04-24 16:22:33, user gfrenke wrote:

      According to the CDC website, their flu statistics are based solely on people who were symptomatic. The CDC doesn’t do antibody tests after the flu season to see how many were infected with a flu virus but never had symptoms.

    11. On 2020-04-26 15:47:52, user DaveSezThings wrote:

      The analysis has made a significant, basic error in handling the uncertainty associated with the specificity of the test. Leaving aside concerns regarding the applicability of the delta method, the mistake is that, in computing the standard error, the values for Var(s) and Var(r) should be divided by the sample sizes used in the studies that established these values, not by the main study sample size n=3,330, which is used across all terms in the relevant equation in the appendix (middle of page 3). We can see this because the range of specificity (95% CI 98.3% to 100%) is sufficient to explain the observed data with zero genuinely positive cases.
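      The size of the resulting understatement is easy to sketch. The specificity and validation sample size below are illustrative assumptions, not the study's exact figures; only n=3,330 comes from the comment above:

```python
import math

# Sketch of the variance-term error with assumed validation numbers.
s = 0.995        # estimated specificity (assumption for illustration)
n_spec = 400     # size of the sample used to ESTABLISH the specificity (assumed)
n_main = 3330    # main study sample size

var_s = s * (1.0 - s)                    # binomial variance of the estimate
se_correct = math.sqrt(var_s / n_spec)   # divide by the validation sample size
se_wrong = math.sqrt(var_s / n_main)     # dividing by n_main, as in the appendix

# Using n_main understates the specificity uncertainty by sqrt(n_main / n_spec).
print(f"correct SE = {se_correct:.4f}, understated SE = {se_wrong:.4f}")
```

With these assumed numbers, dividing by the main sample size shrinks the specificity term of the standard error by a factor of sqrt(3330/400), roughly 2.9x, which is how the confidence interval can wrongly end up excluding zero prevalence.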

      Basically this destroys the conclusions which should now be along the lines of "unfortunately the test used for this study was not specific enough to support any conclusions beyond setting a maximum level of infection."

      Stuff happens, time is short, etc... The authors should just issue a correction. It'll be quick and easy and save a lot of irrelevant speculation.

    1. On 2025-06-19 15:05:58, user Innocent Chidandale wrote:

      I am interested in this article, but I would like to do whole genome sequencing to track those resistant strains in Malawi, their transmission modes, their phenotypic characteristics in patients, and the possible drugs to target the resistant genes of interest. If everything is open, I would love to proceed with this as my research project next year during my bachelor's degree at Mzuzu University.

    1. On 2025-07-22 09:31:17, user Abdullah Jinah Ali wrote:

      Boys under 18 and men over 65 can also be mistaken for combatants, and your data does show evidence of this. It would therefore be useful to have a gender breakdown among the child and senior-citizen fatality estimates.

      In addition, I would like to emphasize that your survey missed the most vulnerable group - large households that did not leave Rafah and the north. They are more likely to be targeted by AI due to having more combat aged men, and more likely to suffer nonviolent deaths due to the poorer conditions in these governorates and the disproportionate number of children and the elderly in these larger households.

    2. On 2025-07-24 13:10:28, user Abdullah Jinah Ali wrote:

      I have just discovered a possible major issue.

      Reviewing Table S7, it seems you misinterpreted the MICS data as referring to the percentage of households, whereas it actually refers to the percentage of the population.

      I came to this conclusion after doing hand calculations, where it turned out that with 5.5 per household and 400k households (expected for 2.2m), it gave a population in excess of 2.6m.

      But when interpreted as population, it gives 5.2 for all households 9 and below.

    1. On 2025-08-10 22:18:50, user Ashebir Gurmessa wrote:

      This study is an outstanding and timely contribution to HIV care in Ethiopia. Your rigorous mixed-methods approach offers critical insights into the risk factors for virological failure among second-line ART patients, which is invaluable for clinicians, policymakers, and program implementers. The finding that *loss to follow-up* and *regimen changes* significantly increase the risk of virological failure highlights the urgent need for patient-centered adherence strategies and continuity of care.

      Thank you, Bekelch Bayou and team, for shedding light on this vital yet underexplored issue. Your work not only fills a crucial data gap but also provides a strong foundation for targeted interventions that will improve outcomes and quality of life for people living with HIV.

      I am grateful for your dedication and the ethical rigor with which you conducted this study.

    1. On 2025-08-21 13:47:47, user J.C. Rome wrote:

      The acronym makes no sense. STARD, meaning Standard, Accuracy, Reporting, Diagnostic? It should be SRDA, I understand that you're trying to make it into a word, but it doesn't work.

    1. On 2025-08-24 14:42:39, user Naoto T Ueno wrote:

      We present a high-throughput assay that identifies a TRBJ1-6–derived TCRβ pre-mRNA fragment as a potential blood-based biomarker for inflammatory breast cancer (IBC). Using a novel sequencing method (TGIRT-seq) for discovery and a scalable RT-PCR/Cas12a workflow for validation in peripheral blood mononuclear cells (PBMCs), we focused on a short RNA fragment (20–23 nt) spanning an exon–intron junction. This fragment likely originates from pre-mRNA and is stabilized by a 2′,3′-cyclic phosphate end, making it unusually detectable.

      Our results show that this fragment is consistently different in PBMCs from IBC patients, with validation in larger follow-up cohorts. Importantly, the RT-PCR/Cas12a platform provides a path toward rapid, high-throughput screening—something that could make this approach practical in clinical settings.

      That said, several challenges remain. Our current cohorts, while expanded, are still not large enough for definitive clinical conclusions. Independent replication across multiple centers will be essential. We also need broader comparisons to confirm whether this signal is specific to IBC and not just a marker of other inflammatory or cancer-related conditions. Finally, we see the importance of longitudinal studies to track whether this biomarker changes with treatment and outcomes.

      Looking ahead, our priority is prospective, multicenter validation under standardized conditions. Direct comparisons with imaging and clinical features are also needed—though difficult, since IBC still lacks a definitive molecular diagnostic standard.

      We welcome feedback and collaboration on validation studies, methodological improvements, and clinical translation. We are especially interested in exploring the biology of this RNA fragment and understanding why it might be unique to IBC.

    1. On 2025-10-20 15:20:57, user xPeer wrote:

      Courtesy Double-Blind Peer Review Simulation from xPeerd:

      Reviewer #1 Report

      Summary<br /> The study aims to assess and compare the effectiveness of three advanced large language models (LLMs)—ChatGPT-5, DeepSeek V3, and Grok 4—in generating educational content about ADHD for non-specialist educators and outsourced physical education coaches. Employing a controlled prompt-based methodology and multiple readability/complexity indices, the manuscript investigates response accuracy, clarity, stability, and potential public health communication barriers in AI-generated outputs.

      Major Comments

      1. Methodological Rigor & Generalizability<br /> The authors delineate a robust comparative framework, utilizing three guiding questions on ADHD for model interrogation. However, the scope is limited, as testing pivots exclusively on English-language outputs and Melbourne-based prompts. The authors themselves acknowledge: "The study was conducted exclusively in English within a Melbourne-based testing environment, limiting generalizability to non-English-speaking populations" (page 21, Strengths and limitations).<br /> Reviewer suggestion: Future analyses should encompass a broader linguistic and cultural spectrum to truly capture the global applicability of AI for health education.

      2. Depth of Statistical/Computational Analysis<br /> The study makes extensive use of readability indices (FKGL, SMOG, etc.), but does not sufficiently discuss their limitations when assessing AI-authored medical content. There is potential for bias when equating increased complexity with reduced accessibility; often, necessary clinical nuance may inherently raise reading levels. The manuscript states: "Readability analyses further showed that DeepSeek V3 had the greatest variability, GPT-5 displayed steadily increasing complexity, and Grok-4 remained the most stable and comparatively less complex" (Discussion, page 17).<br /> Reviewer suggestion: A more critical lens is warranted—consider a combined readability/accuracy approach to better contextualize the trade-offs between precision and simplicity.

      3. Real-World Impact and Usability<br /> Despite extensive quantitative comparison, the practical implications for coaches, teachers, and parents are relegated to future work. The manuscript admits, "The study focused primarily on textual readability and stability, rather than evaluating real-world comprehension or decision-making by specific user groups" (page 21).<br /> Reviewer suggestion: The next phase should prioritize empirical user testing to validate whether model outputs actually enhance pedagogical or clinical understanding and decision-making.

      4. Novelty and Ethical Perspective<br /> The comparative model analysis is novel, considering recent LLM advances and the lack of similar head-to-head studies tailored for disability inclusion in school settings. However, no ethical concerns are addressed regarding AI output veracity, data privacy, or the risk of erroneous instruction imparted to underqualified staff.

      Minor Comments

      • The referencing format is occasionally inconsistent and page numbers for tables/figures are absent in some cases.
      • The abstract is concise and provides a clear structure; nonetheless, the results section could briefly mention statistical significance values or variability ranges.
      • Some sentences are overly long or complex, detracting from readability—ironically contrary to the study's focus.
      • In "Ethics approval and consent" (page 22), it is useful to state "Not applicable," but the authors might clarify that all AI-generated responses involved no human data or interventions.

      Recommendation <br /> Major Revision. The manuscript exhibits methodological strength and addresses a pressing question. However, broader evidence on practical efficacy, nuanced readability analysis, and an explicit discussion of ethical boundaries are required prior to acceptance.

      Reviewer #2 Report

      Summary<br /> This manuscript sets out to systematically evaluate the readiness and reliability of LLMs to deliver inclusive, high-quality ADHD education materials, especially for outsourced PE instructors and non-specialist users—a group often neglected in the literature. The three chosen models represent current state-of-the-art options. The topic is pertinent and innovative.

      Major Comments

      1. Overstatement of Claims and Realistic Outcomes<br /> The conclusion suggests that "model selection should be tailored to specific use cases," advocating for Grok-4, DeepSeek V3, and GPT-5 each in particular contexts (page 20, Discussion). However, the comparative data provided fall short of substantiating such a granular recommendation; the outcome differences, though statistically noted, remain within a similar range of excessive complexity: "All models exhibited high reading levels (FKGL > 12), exceeding recommended public-health standards" (page 2).<br /> Caution should be exercised when suggesting differential real-world deployment based on such preliminary and textual-only evidence.

      2. Potential for Algorithmic and Sampling Bias<br /> The study design is at risk of sample/data selection bias by exclusively testing models with English-language queries and drawing all responses from the same geographical/IP base (Melbourne). This potentially disadvantages queries that might behave differently in other contextual deployments; more granular breakdowns by topic or scenario might add value.

      3. Empirical/Practical Verification—A Missing Piece<br /> While the authors readily admit the absence of real-world user testing (page 21), at a minimum, the study could have incorporated expert review(s) by practicing educators or clinicians to validate the appropriateness, accuracy, and utility of the outputs. Relying strictly on “readability” as a performance surrogate is insufficient.

      4. Accessibility and Communication Gaps<br /> The core finding—that "readability emerged as a persistent barrier across all models" (page 20)—is highly significant. However, the manuscript stops short of offering actionable guidance to AI developers or educators on how to bridge this gap (e.g., adaptive output tuning, multilayered content, or collaborative design with stakeholders).

      5. Risk of Exacerbating Health Inequities<br /> The text insightfully warns, "the broad dissemination of LLM-generated health information risks exacerbating health inequities" (page 20). Surprisingly, no strategies or intervention suggestions are offered. It would strengthen the manuscript to suggest how LLM output might be scaffolded or tailored for vulnerable groups.

      Minor Comments

      • In the methods section, the protocol could be described more clearly, including how the ten independent attempts for each prompt were randomised or sequenced.
      • The discussion occasionally rehashes results rather than linking them to broader theory or policy implications.
      • The limitations section should be expanded to acknowledge not just the lack of user participation but also incomplete handling of model drift and update cycles.

      Tone and Style<br /> The review has detected sporadic verbosity or ambiguous phrasing (e.g., “the findings demonstrate that stability of response generation is varied between models”—page 20). Succinct, active language would benefit the overall clarity.

      Recommendation <br /> Major Revision. Useful, important groundwork is laid here, but the manuscript requires deeper, more practice-oriented exploration, and a more measured, cautious reporting of implications. The lack of empirical field validation is a critical limitation.

      Editorial Decision<br /> Decision: Major Revision Required

      Both reviewers acknowledge the relevance and methodological rigor of the comparative approach, but insist on more empirical user validation, a critical reappraisal of the readability/accuracy trade-off, and practical translation of findings for end-users and policy-makers. Ethical considerations and limitations should be explicitly elaborated.

    1. On 2025-11-05 15:26:49, user Gyula Maloveczky wrote:

      Some samples in the GSE86978 and GSE51827 datasets are identical. For example, GSE51827's Cluster_Patient_1 corresponds exactly to GSE86978's Cluster_Brx53.2. In the preprint, these samples appear to show differences in some analyses, while the FASTQ files uploaded to SRA for these two samples are identical (SRR4246525 and SRR1020057 for example are identical except for the header lines).

    1. On 2025-11-11 14:27:16, user Evolutionary Health Group wrote:

      We at the Evolutionary Health Group (https://evoheal.github.io/) really enjoyed this paper.

      Here are our highlights:

      Outlines a research roadmap connecting climate-driven stressors such as heat, flooding, and water scarcity to hygiene behavior and infection risk.

      Emphasizes the absence of temporally aligned datasets linking climate exposure, human behavior, and microbiologic outcomes.

      Proposes a coordinated data infrastructure for longitudinal monitoring of climate-hygiene-health interactions. This roadmap focuses on behavior and adaptation, grounding climate health in everyday lived contexts rather than abstract exposure models.

      It demonstrates how human factors research can complement environmental surveillance to guide intervention design.

    1. On 2025-11-13 16:48:17, user Kevin Davy wrote:

      Interesting paper! Is self-reported health in 80+ year-olds really "superaging"? The most common definition is ~80-plus-year-olds who have the memory capacity of individuals 30 years younger. Mortality has been studied as an outcome since the inception of SRH measures. Don't quite a few studies report lower mortality among those who self-report better health?

    1. On 2020-04-18 11:10:24, user Ramananda Ningthoujam wrote:

      I agree with the point made by Muhammad Saqlain and co-authors in their article that the government should adopt an immediate policy plan to contain COVID-19. It is said that South Korea is winning the fight against COVID-19 because it learnt a lesson from the past MERS outbreak. Every nation fighting the pandemic is asking South Korea for help. However, I disagree with the point that "Pakistan due to its geographical location is vulnerable to a worst outbreak". My question to the author(s) is: how is geographical location associated with vulnerability to a COVID-19 outbreak? Kindly explain with a valid point. <br /> Thank you.

    1. On 2020-04-19 17:16:34, user David Steadson wrote:

      I now realise I made an error in the last comment, taking the cumulative Swedish total instead of the Stockholm total. On April 1 FHM reported 148 deaths for Stockholm. Given delays in reporting, 200 may be a reasonable guess of the actual number then. Unfortunately FHM only retroactively updates national totals, not regional ones.

    1. On 2020-04-20 11:52:48, user Yi-Hsuan Wu wrote:

      A corrigendum should be made as the authors misused "Taiwan, China" instead of "Taiwan." That's not an acceptable error for a group of specialists to make.

    1. On 2020-04-20 17:25:50, user Wei Zhou wrote:

      Sorry, I can't find the supplemental figures and tables, even though I found the rest of the supplemental data in the PDF. If you have seen them, can you give me a pointer? Thanks!

    1. On 2020-04-20 17:37:29, user Philip Davies wrote:

      The low dose arm of this study is worth following.

      The big problem for this study is comparison. It really has not defined the control population at all. The Italian and Chinese references are entirely different. Even the 2 Chinese populations referenced had massively different outcomes because the populations examined were different.

      The Italian mortality rate was actually similar to the overall study average here (but much higher than the low dose arm). The Chinese study involved all patients admitted to the two hospitals ... that included a majority of patients with moderate ("ordinary" as the Chinese class it) disease severity. The patients in this Brazilian study were regarded as severe or critical ... such patients (looking at worldwide stats) would attract a mortality of 30-40% plus.

      This is the most important factor. Do not compare apples with pears. So far this study points the "swingometer" in favor of benefit versus harm for the use of HQN in patients with advanced disease.

      Once again however, we are looking at the potential impact of an orally administered drug to patients with advanced disease. That's a big ask.

      For CQ and HCQ the most interesting results will likely come from studies looking at prophylaxis and early treatment (using safe doses, not silly high doses with added drugs that also lengthen QT). We can't yet guess how they will pan out.

      Dr Philip Davies<br /> GP<br /> Aldershot Centre For Health, UK<br /> http://thevirus.uk

    1. On 2020-04-21 03:30:02, user UFO Partisan wrote:

      We need to be doing this kind of data gathering and reporting here in the States. The above results aren't shocking though. Once someone in your home is infected, you have a serious problem and the person initially getting infected seems most likely to be picking it up through mass transit. Protect yourself at all times everybody.

    1. On 2020-04-22 01:02:04, user Michael Kyba wrote:

      A cursory browse through Table 2 of the paper shows that the patients that would eventually comprise the HC group were the sickest upon admission, the HC+AZ patients were intermediate and the patients that would elect no HC group were the least sick. This is prior to intervention.

      This sort of sampling bias highlights the importance of double blind randomization to determine efficacy. Such an a priori correlation might be due to sicker patients opting for experimental treatments at a higher rate. In any case, it would not be wise to interpret these data as indicating that the interventions cause the worse outcomes. The underlying health state is probably responsible.

      Some examples follow, then a criticism of what the authors have written into their Results and Discussion.

      Known risk factors include age, weight and blood pressure; and signs of severe disease include kidney damage.

      Browsing through Table 2 looking for parameters with lowish p-values:<br /> Mean systolic blood pressure differences between groups showed a p-value just under 0.05 (statistically significant), with values of 136, 132, and 129 mmHg across the groups (HC, HC+AZ, no HC). More significantly, the HC group had 34% of patients with BP information in the very highest pressure bin (27.8/0.804, the denominator being the fraction of that group with information on BP), while HC+AZ had 30% and no HC had 25%.

      Creatinine (high levels indicate impaired kidney function) was even more divergent: HC had 17.7% in the highest group, HC+AZ 11.7%, no HC only 8.1%.

      Pulse Oximetry showed the largest number of patients with low blood O2 in the HC group.

      I don't like that they did not break out age and BMI into bins, but reported only means. Interesting distributions in these parameters might be buried in the means.

      The write-up of this data is quite unusual in being very abbreviated and lacking any thought to potential problems.

      The Results section of the paper does not address the issue of a priori differences in health parameters at all except for saying "There were significant differences among the three groups in baseline demographic characteristics, selected vital signs, laboratory tests, prescription drug use, and comorbidities (Table 2)". Having said this, the authors proceed as if there were no significant differences.

      In the Discussion section, the only comment to this issue is to state: "Despite propensity score adjustment for a large number of relevant confounders, we cannot rule out the possibility of selection bias or residual confounding."

      It is in a preprint archive, which means this is a pre-review manuscript, but even so, it is quite unusual for such a study to completely lack any discussion of specific and obvious limitations. Hopefully the reviewers will require the authors to analyze and discuss the divergent a priori health of the three groups.
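      As a side note for readers checking the numbers: the missing-data adjustment used above (27.8/0.804 for the HC group's highest blood-pressure bin) is simple enough to script. A minimal Python sketch, using only the values quoted in this comment:

```python
# Adjusting a reported bin percentage for missing data, as done above for Table 2.
# raw_pct: percent of the WHOLE group reported in the highest systolic BP bin
# info_frac: fraction of the group that had BP information recorded
def adjusted_share(raw_pct: float, info_frac: float) -> float:
    """Share of patients *with data* who fall in the highest bin."""
    return raw_pct / info_frac

# HC group: 27.8% of all patients in the top bin, 80.4% with BP info recorded
print(round(adjusted_share(27.8, 0.804), 1))  # 34.6, i.e. the ~34% stated above
```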

    2. On 2020-04-22 01:10:17, user Gunnar V Gunnarsson wrote:

      After reading the paper I unfortunately find the usage of data to be misleading and I think you might have drawn the wrong conclusions.

      The problem lies in the fact that once people went on ventilators they were given HC or HC+AZ. This re-categorised the patients by increasing the number of high-risk patients in the HC and HC+AZ groups, making the No HC group an invalid control group.

      Before ventilation the statistics were as follows (Table 4 in the paper):

      HC: 90 - 9 (10.0%) deaths - 69 (76.6%) recover - 12 (13.3%) onto ventilation<br /> HC+AZ: 101 - 11 (10.9%) deaths - 83 (82.2%) recover - 7 (6.9%) onto ventilation<br /> No HC: 177 - 15 (8.4%) deaths - 137 (77.4%) recover - 25 (14.1%) onto ventilation

      We see that the death rate is about the same for all groups, but HC+AZ seems to have the highest recovery rate, though it might not be statistically significant.

      Now once people hit ventilation the re-categorisation occurs. More patients were given HC and HC+AZ, which moved them from the No HC group to the HC or HC+AZ group. These groups therefore have a much higher percentage of ventilated patients, because those patients were given the drugs after they hit ventilation.

      The following data can be derived from the paper but is not presented:<br /> Once people hit ventilation we have the following results.

      HC: 19 - 18 (94.7%) deaths - 1 (5.3%) recover<br /> HC+AZ: 19 - 14 (73.7%) deaths - 5 (26.3%) recover<br /> No HC: 6 - 3 (50%) deaths - 3 (50%) recover

      If you compare these two tables, you see that 25 patients with No HC reached ventilation. Once they reached ventilation, 19 of them were given HC or HC+AZ and thereby moved from the No HC group to the other two. 79.5% of all patients reaching ventilation died, so arguably 14 patients that died were moved from the No HC group to the other two groups only once they reached the much higher-risk state.

      Here are the number of people per group that got ventilation:

      HC: 97 - 19 (19.6%) got ventilation<br /> HC+AZ: 113 - 19 (16.8%) got ventilation<br /> No HC: 158 - 6 (3.8%) got ventilation

      So in the end the No HC group had a very low percentage of patients who got ventilation, and therefore should have a significantly lower death rate, which is totally unrelated to the treatment.

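      To make the arithmetic easy to verify, here is a minimal Python sketch that recomputes the percentages from the counts quoted above (pre-ventilation counts from Table 4 of the paper; the per-group ventilation rates are the derived figures):

```python
# Recompute outcome percentages from the raw counts quoted in this comment.
def pct(part: int, whole: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100 * part / whole, 1)

# Pre-ventilation (Table 4): group -> (n, deaths, recovered, onto ventilation)
pre_vent = {
    "HC":    (90, 9, 69, 12),
    "HC+AZ": (101, 11, 83, 7),
    "No HC": (177, 15, 137, 25),
}
for group, (n, deaths, recovered, vented) in pre_vent.items():
    print(group, pct(deaths, n), pct(recovered, n), pct(vented, n))

# Ventilation rate per FINAL group assignment (after re-categorisation):
final_vent = {"HC": (97, 19), "HC+AZ": (113, 19), "No HC": (158, 6)}
for group, (n, vented) in final_vent.items():
    print(group, pct(vented, n))  # 19.6, 16.8, 3.8 -- the skew described above
```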
    3. On 2020-04-24 14:50:11, user BR wrote:

      This study compares a group of patients with "more severe disease" who needed medication ("many times as a last resort" - VA) and in whom they "expected, increased mortality", with a group with milder forms of the illness that didn't need medication. I'm not sure what the value is in making such a study.

    1. On 2021-05-28 05:26:03, user Enzo wrote:

      Other mistakes seem to weigh on the results and on the conclusions. Examples:<br /> In Figure 3, how could the mean length of stay in Niaee's control group be smaller than the one in the ivermectin group, when Niaee finds a significant reduction of stay with IVM?<br /> In Figure 2 (Mortality), how can the RR for Chaccour be 1 when there is no event? (Shouldn't it be "not calculable"?)<br /> In Figure 5 (severe adverse events), one is included even though Krolewiecki mentions that the one reported "has not been reported in association to IVM".

    2. On 2021-05-28 08:48:55, user Enzo wrote:

      Yet another huge double mistake:<br /> In Fig. 6 (viral clearance) for the Karamat study, the numbers of events are WRONG, and they are represented on the WRONG SIDE.<br /> Karamat counts the number of viral clearances at day 3 (17 IVM vs 2 control), then the *additional* number at day 7 (20 more in the IVM group vs 18 more in the control group). Fig. 6 only counts the additional clearances between day 3 and day 7, and forgets what happened up to day 3.<br /> Hence the figures in Fig. 6 for Karamat should be either "17 for Ivermectin and 2 for Control" or "37 for Ivermectin and 20 for Control".<br /> And these figures are IN FAVOR of ivermectin. Even the wrong "20" and "18" should have been represented on the "favors IVM" side.

    1. On 2021-06-02 21:08:40, user Mike wrote:

      "no pregnant or lactating individuals were included in the Phase 3 clinical trials of these vaccines despite belonging to a group at high risk for severe complications of COVID-19 infection" - Ok, so how are you concluding that it is not affecting these women when they weren't included in clinical trials?

      "We show here that the mRNA from anti-COVID BNT162b2 (Pfizer) and mRNA-1273 (Moderna) vaccines is not detected in human breast milk samples collected 4-48 hours post-vaccine" - Two concerns with this statement: 1) they were only tested up to 48 hours afterward? Why are we to conclude that if they don't show up in 48 hours they never will? When other vaccines NEVER leave the shoulder muscle (according to Dr. Bridle) that would indicate that the possibility for much slower movement to the blood exists. 2 - Are you testing for the correct substance? Are you looking for the spike protein or mRNA? Are those the same?

    1. On 2021-06-11 05:23:39, user Sock Dollager wrote:

      Did I read correctly that the patients in your observational study were not given Zinc?

      My understanding is that while AZM was discovered to have some unsuspected anti-viral properties, it was the combination with Zinc, with the Hydroxychloroquine acting as a Zinc ionophore, that has had the best results.

      You mention the French doctor Raoult, I presume it’s this one?

      https://duckduckgo.com/?q=d...

      Between his work and the pioneering Dr. Vladimir Zelenko and all these studies at<br /> https://c19study.com and this one at American Journal of Medicine https://www.amjmed.com/arti... and now your study, let’s hope more and more physicians will prescribe the Hydroxychloroquine Protocol (Hydroxychloroquine plus azithromycin [or doxycycline] plus zinc) promptly for their infected patients.

    1. On 2021-06-11 13:41:25, user Christy Blanchford wrote:

      We don't develop long-lasting immunity to the other four common coronaviruses, so why would we have long-term immunity to Covid-19? This was only 42 days out; we get reinfected with the covid common cold after 1-2 years. Manaus, Brazil showed us that despite an 80% covid infection rate that should have conferred herd immunity, 6 months later they were digging mass graves again. This paper is doing a disservice....

    2. On 2021-11-07 06:10:27, user Ina wrote:

      I haven't really understood what happens when a recovered subject takes the vaccine. I mean, there are already neutralising antibodies which will come in contact with the Spike protein, leading to destruction of the cells that express it... right? Does that happen only in the muscle, or to a more extensive degree?<br /> Those recovered from Covid are more likely to have adverse reactions after the jab, says a Harvard study... what is the explanation, and what do we know about the molecular mechanisms?<br /> I would love to know more about this and would very much appreciate your opinion.

    1. On 2021-06-15 08:35:55, user J W wrote:

      The two underlying studies are not compatible: they report different outcomes regarding the effect of school closures, the effect of obligatory masks, etc. Using them as a base therefore does not seem plausible. Next, a significant impact which might mimic seasonality is vaccination; was any filtering of this effect done? Why should a sinusoid be the right function? Why not model seasonality based on fundamentals like actual temperature, the proportion of sun hours in a day, and maybe regular school holidays as a proxy for vacations and holidays? The two studies do not cover a full year and do not even cover full switching on/off of the government interventions, thus they may attribute other causes to the impact of interventions and/or seasonality.

    1. On 2021-06-17 09:47:55, user Subhajit Biswas wrote:

      Dear Readers,

      I am pleased to inform you that the above preprint posted in medRxiv has now been accepted and published by the "Journal of Medical Microbiology". Please see link below:

      Title: Archived dengue serum samples produced false-positive results in SARS-CoV-2 lateral flow-based rapid antibody tests

      Link: https://www.microbiologyres...

      Best wishes to all; stay safe.

      Yours sincerely,<br /> Subhajit Biswas (Corresponding Author).

    1. On 2021-06-17 11:47:05, user Jay wrote:

      What are the doses and the duration of the glucocorticoid treatment in this study? I'm in the middle of a prednisone treatment and I will get the Pfizer vaccine in a few days. I have a cycle of 50mg x 3 days, 30mg x 3 days, 15mg x 3 days and 5mg x 2 months, and I want to know if it's better to get it now (15mg) or next week with 5mg but more time on the treatment.

    1. On 2021-06-18 07:48:44, user Tobi wrote:

      This is really interesting and needs to be considered further.

      However, such fine-tuning effects of artificial eliciting events upon natural infection are - at least from my experience - not unusual, since induced reactions to a specific PAMP always lead to physiological readjustments of "the whole system" and thus also affect reactions to certain other immunological stimuli (no upregulations without downregulations).

      Therefore, it's a shame that this preprint is already misused by YouTubers to generate fear of COVID vaccines due to long term effects.

    1. On 2021-06-18 22:31:20, user Colin McCulloch wrote:

      Noticed a small typo in the interpretation section. Search for "B.1.617.2 vaccine". I believe you mean "B.1.617.2 variant".

    1. On 2021-06-19 05:24:16, user David Gurwitz wrote:

      On June 18 2021, Duarte et al. published in Lancet EClinicalMedicine this peer-reviewed article, reporting on the outcome of a clinical trial conducted at a university and a community hospital in Buenos Aires, Argentina: "Telmisartan for treatment of Covid-19 patients: An open multicenter randomized clinical trial".<br /> DOI: https://doi.org/10.1016/j.e...<br /> From their Abstract, note in particular this:<br /> "Death by day 30 was reduced in the telmisartan-treated group (control 22.54%, 16/71; telmisartan 4.29%, 3/70 participants; p = 0.0023). Composite ICU, mechanical ventilation or death was reduced by telmisartan treatment at days 15 and 30. No adverse events were reported."

    1. On 2021-06-26 09:53:47, user Maurizio Rainisio wrote:

      Once more, a paper on the effect of school closures aimed at supporting the strong belief that schools are one of the major causes of the CoViD-19 epidemic, trying to prove hypotheses that are set post hoc in a framework where the very concept of hypothesis testing is meaningless.

      The way inferential statistics is used is embarrassing: the word "significant" is misused while no null hypothesis is predefined (nor could be), and, if any were implicit, it would be adapted to the situation post hoc (the parameter k is selected to maximize Student's t).

      Causality is inferred while just coincidence is measured, without any consideration for other possible concomitant events. E.g., the national referendum involving 25 million voters on September 20-21 would prove a better predictor of the increase of the epidemic curve using exactly the same method.

      Wording like "The estimated overall impact of schools reopening is quantified in around 227,724 positives" and the header "school related positives" without the benefit of doubt is not a scientifically sound way to approach the issue; statements of this kind should not be accepted by any scientific papers reviewer.

      A more careful analysis of the data, looking at changes in the Rt parameter (or growth rate) that trigger any increase in the absolute number of infections, would show that in most regions the school openings happened after clear signals of trend changes provided by a properly computed Rt.

    1. On 2021-06-30 18:55:18, user Medini A wrote:

      Hello! Thank you for flagging this typo; yes, this is meant to be March 2021. The text will be updated shortly in the next version.

      -Medini

    1. On 2021-07-05 22:52:47, user Robby-D wrote:

      Line 108 - the paper cites "five other patients" [testing positive for COVID, beyond the initial patients 0a and 0b]. I believe this should read four?

      A more thorough description of the total time of the event, what constitutes an "open air tent", and how many other people reported close contact with patients 0a and 0b would be very helpful in assessing the scenario described and the breakthrough cases that occurred. In an open-air situation especially, attendees who remained upwind and far from patients 0a and 0b should not be included in any denominator for calculating breakthrough percentages.

      Condolences to the gentleman who passed and his family and friends.

    2. On 2021-07-13 17:09:58, user intros pector wrote:

      "Fully vaccinated"? Full immunity by Covaxin is considered to be achieved only 2 weeks after the second dose, but patients 0a and 0b are reported above to have travelled a few days before that, unfortunately. Plus, sitting in a plane for ~20 hours would have resulted in virus overload for patients 0a and 0b. Providng more such details would make the paper more amenable to detailed conclusions.

    1. On 2021-07-06 06:43:05, user Fat wrote:

      Someone please correct me if I'm mistaken, but this study relates only to T and B cell antibody reactions to the spike protein. It says nothing about all the other antibody proteins that the disease might have induced to differentiate and create variants, no?

    1. On 2021-07-08 18:27:10, user Jeff Andrews M.D. wrote:

      The authors have misrepresented the use of biostatistics. In line 99 they state that APOCT have poor specificity. The authors cannot comment on specificity because they do not know the number of true negatives (line 156-7). However, the prevalence is so low that using the [total number of tests – known PCR positives] approximates the true negatives. The approximate specificity in this study would be 71768/71808 = 99.94%. This is an incredibly high specificity, not poor.<br /> In lines 106-107, the authors ask us to consider the negative impact of a false positive result, without putting it in context. Every HCW with a positive APOCT had confirmatory PCR. The authors do not state the interval between the two tests; the impact was likely over a period of two days. Moreover, the authors do not consider the alternative scenario, which would be two days of HCW exposures to other HCWs and patients in a HC setting, while waiting for the PCR results. If 39 HCWs were identified with COVID-19, immediately put on isolation (due to 15-minute APOCT), what value did that have for the healthcare system, and for all of their HC and personal contacts? And was that protective value greater than the collective harm caused by identifying 48 HCWs as positive by APOCT who were released from isolation two days later when the PCR result was known?<br /> “False detection rate” is not a terminology of biosciences and was not defined by the authors. In fact, they are presenting [1-PPV]; which seems pointless since they also present PPV in the next sentence.<br /> When prevalence is very low, 5 per 10,000 in this study, a slight difference in prevalence can greatly influence PPV. The reason is that false positives tend be fairly static and not influenced by prevalence, but true positives are directly influenced by prevalence. It is wrong to say that APOCTs have a low PPV, unless the sentence includes the prevalence. 
<br /> Because the two tests were not conducted on the same subjects, it is wrong to publish results that represent a head to head comparison. In such a case, the authors were required to use more sophisticated Bayesian statistics, in order to apply a ‘penalty’ or adjustment to account for possible differences due to the differences between the two different populations and the two different test workflows. <br /> The authors fail to describe in Methods that one test was visually-read and the other test was read by an analyzer device, and to point out that human variation in ‘reading’ lines on the visually-read test could contribute to the differences. <br /> The authors fail to acknowledge that they do not know the numbers of cases of APOCT negative and PCR-positive (false negatives) for each of the two tests; it is possible that one test has a much lower sensitivity and that would need to be considered in context with the false positives and prevalence.<br /> Finally, the authors did not take the opportunity to discuss what the reasonable prevalence level is for APOCT. In their study, prevalence was 0.05% or 5/10,000. They do not discuss whether it is reasonable to test asymptomatic HCWs at this prevalence level or lower. They do not discuss whether it is reasonable to test asymptomatic non-HCWs at this prevalence level or lower.
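      The dependence of PPV on prevalence described above can be made concrete with Bayes' rule. A minimal sketch, where the 99.94% specificity is the approximation derived above and the 90% sensitivity is an assumed figure for illustration only:

```python
def ppv(prevalence, sensitivity=0.90, specificity=0.9994):
    """Positive predictive value via Bayes' rule.

    specificity=0.9994 is the approximation derived above;
    sensitivity=0.90 is an assumed, illustrative figure.
    """
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)
```

      At the study's prevalence of 5/10,000 this gives a PPV near 43%, but at 50/10,000 it rises past 88% with the very same test: the false positives barely move while the true positives scale with prevalence.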

    1. On 2021-07-09 06:37:13, user ndk wrote:

      This is a retrospective study, as the predominant strain at the time was Alpha, in concurrence with their findings; but we currently face the radically different strain Delta, and perhaps Lambda. We can't glean much from it other than a snapshot in time.

    1. On 2021-07-14 11:58:00, user Karan Srisurapanont wrote:

      This is a very interesting research paper. I am planning to do a systematic review about the efficacy and safety of COVID-19 vaccines in solid cancer patients and I will definitely cite this article. However, I was wondering why the number of controls in Figure 3b added up to 60 instead of 50. I am looking forward to receiving your answer and would like to thank you for answering in advance.

    1. On 2021-07-15 16:14:14, user Tanavij Joob Pannoi wrote:

      I am wondering about the total number of recruited participants divided by VOC, particularly in Figure 2A, while the results from the linear mixed model are not reported elsewhere. The researchers should provide more details of the study's limitations, since the paper could be shared on social media, where many readers might misinterpret or exaggerate the results.

    1. On 2021-07-26 09:07:20, user Jörg Hennemann wrote:

      Dear authors, I do not get the point: in your raw data (Table 1) the percentage of people dying from Corona Delta is 0.7%. All other variants cause 0.9% deaths among infected people. So how can the risk of dying from Delta be higher than for other variants? Where can we see how the "adjustment for age, sex, comorbidities, health unit, and temporal trend" of the raw data works? Here in Germany people are going wild because of this study, but I cannot comprehend it. Thank you very much!

    1. On 2021-07-26 17:37:56, user Fortu Nisko wrote:

      From methods.

      We will include only studies that provide proof of transmission outcome using culturable virus and /or genetic sequencing. The inclusion of this higher-quality evidence aims to overcome the methodological shortcomings of lower quality studies. We will assess the microbiologic or genetic sequencing evidence in an effort to inform the quality of the chain of transmission evidence and adequacy of follow up of sign and symptom monitoring.

      This is reasonable and, in my view, essential.

      Also, the malady for which the infection is the cause must be very well defined. It would be false to claim that a person was pre-symptomatic if they did not present the definitive symptoms arrayed in the Severe Acute Respiratory Syndrome. Lack of symptoms specific to this particular malady would exclude the individual from the chain of transmission. In effect, the research must be based on the SARS patient and then work back from that, NOT the other way around, which depends on speculations and non-specificity.

      Likelihood is that you will end up making a rather vague analysis of influenza-like illness, the symptoms of which may be presented in the chain of transmission toward an endpoint where the patient suffered influenza-like respiratory distress. The challenge you face is distinguishing that sort of chain from the chain specific to SARS-COV-2; and that means facing the ambiguities that define a distinction, such as it may be, between SARS-COV-1 and influenza and between SARS-COV-1 and SARS-COV-2. If patients fall into a category that includes influenza-like illness, then that places a heavy limitation on your research. It is the same limitation that applies to most of the research regarding what was deemed, by mere assertion alone, a new or novel pathogen.

      This must be addressed.

    1. On 2021-07-28 00:17:17, user LJV wrote:

      How was long term Covid defined? What questions were asked? Had these individuals had Covid prior to vaccination? How long after vaccination did symptoms emerge? This report is quite vague, as it does not clearly define the parameters and definitions of the terms.

    1. On 2021-07-29 06:28:47, user Astrid Fuchs wrote:

      The household follow-up included the development of symptoms; why wasn't this evaluated?<br /> That unvaccinated cases with symptoms transmit to others would be expected, but it would enable respective mitigation measures like quarantine.<br /> The open question is whether asymptomatic vaccinated people can transmit the virus without knowing it, which is the higher risk to society. <br /> I would appreciate a split into symptomatic and asymptomatic too, to address the most important question for society, esp. after the CDC information this week that their data show a similar infection risk for vaccinated and unvaccinated, even in different settings.

    1. On 2021-12-03 04:53:25, user Emmasmom wrote:

      Can someone please tell me where Table S3 is? I have downloaded the pdf but it only contains three tables. How do I find the supplementary materials?

    1. On 2021-11-30 09:10:43, user Glenn LGG wrote:

      With the study running as far back as December, it's bound to capture a lot more unvaccinated during a time with higher risk of exposure.<br /> This is just one example of how the study does not take into account inherent risks.<br /> That makes it flawed.<br /> Furthermore the incidence of 5.2 /10k only increased to 5.8/10k in the control group during the delta wave. As this is supposedly time (duration) adjusted, what does it say about delta being far more infectious? (As claimed by the CDC)<br /> It seems to be contradicted by this finding.

    1. On 2021-12-01 22:50:20, user Depp Jones wrote:

      "This article is a preprint and has not been peer-reviewed "<br /> 32 times you can find in this article the wording "assume", 10 times assumption and 4 times "we set" and what else ever. Really, you think that will pass a peer-review? And if so apparently only possible in the pharmaceutical or medical industry.

    2. On 2021-12-02 13:00:06, user Jörg Hennemann wrote:

      I don't get the way the data were acquired. The authors keep referring to "symptomatic infections" and calculate the pandemic development from those? Including people who merely tested positive without any symptoms? As a valid base for modelling such things, the real(!) distribution of all infected people (with or without symptoms), or at least data from a representative cohort of the German population, has to be used. Those data are not available. I also do not understand the conclusion. If the contribution to R is 67-76% for the unvaccinated and 38-51% for the vaccinated, why are targeted NPIs for the unvaccinated the only solution? How about targeted NPIs for the vaccinated? From a scientific point of view this is an important question that should be answered in your discussion - especially because of the unknown number of virus-carrying vaccinated people. If I had to do the peer review, I would have substantial questions to be answered before publishing this...

    3. On 2021-12-04 11:17:55, user damator1985 wrote:

      The model is run with only one parameter set.<br /> A sensitivity analysis should be added. What is the main study result (R-value reduction) if you change each of the input parameters of your model by e.g. +/-20%? How does each input parameter influence the R-value reduction? Is the R-value reduction in the end much lower / higher than expected? Without that, it remains entirely unclear how significant the result of the R-value reduction is. But you already understood that the significance of the input parameters is not high. So please add at least 20-30% uncertainty to all the input parameters listed in Appendix B, not only the matrix S, and also in their combination!
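      A minimal one-at-a-time version of the requested analysis can be sketched as follows; the model function here is a stand-in, not the authors' actual model:

```python
def one_at_a_time_sensitivity(model, params, delta=0.20):
    """Perturb each input by +/-delta (default 20%) and record the
    change in the model output relative to the baseline run."""
    base = model(params)
    effects = {}
    for name, value in params.items():
        for sign in (+1, -1):
            perturbed = dict(params, **{name: value * (1 + sign * delta)})
            effects[(name, sign * delta)] = model(perturbed) - base
    return effects
```

      Ranking the absolute effects then shows which inputs dominate the R-value reduction; a combined sampling scheme (e.g. Latin hypercube over all parameters at once) would additionally cover interactions between parameters.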

      Please consider that, based on such studies, our policy in Germany is deciding on measures against the unvaccinated. You could bear some responsibility for these decisions.

    4. On 2021-11-30 07:55:24, user Hartwig Zehentner wrote:

      What a tremendous model to prove one's view of life. Models are great if they do what they are supposed to do. I have a completely different idea about the situation: you force unvaccinated people to do tests for daily procedures or as an entry ticket for work, even if they are asymptomatic, while on the other side even symptomatic (sneezing, cough, etc.) vaccinated people count as "negatively tested". (Example: I lately had two patients with confirmed COVID-19 despite being "fully vaccinated"; if they hadn't had severe symptoms requiring hospitalization, they both could have shown their "vaccine passport" for a tour through discotheques all night, where unvaccinated people are restricted from entering.) And since many vaccinated but infected people have only mild symptoms, they surely don't get tested, because of the green pass...<br /> So I'm very sure you can't compare the groups of vaccinated and unvaccinated with regard to the amount of testing. And against the background of vaccinated people with breakthrough infections being at least as infectious as unvaccinated people, for me this blaming of unvaccinated people is only propaganda, reminding me of Germany's worst times.<br /> Dr. med. Hartwig Zehentner DESA EDIC

    1. On 2021-12-13 17:29:17, user Nico wrote:

      Thanks so much for this research! I work with people with missing periods (hypothalamic amenorrhea) and a common concern is whether vaccination might negatively impact the hypothalamus and period recovery. It is fantastic to be able to have some real data to share, not only on impact of vaccination but also impact of covid. One suggestion - in figure 6, perhaps consider changing the color scheme (or maybe truncating the color scale at -0.2 to 0.6?) so that it is easier to distinguish between the colors in the range in which most datapoints seem to be falling.

    1. On 2021-12-16 17:59:27, user rick wrote:

      No discussion of side effects of non-pharmacological interventions. It's like recommending coronary bypass without discussion of morbidity and mortality from the procedure.

    1. On 2021-12-24 21:24:16, user Shannon Rowland wrote:

      How much shorter was the duration in the vaxed group vs unvaxxed? And those who took something other than Moderna- can I assume that they didn’t have the same benefit?

    1. On 2022-01-05 16:39:51, user Mike B wrote:

      This is obsoleted by the Omicron variant.<br /> Publication of this data may be misleading due to the immune escape of Omicron being much higher than Delta.

    1. On 2021-10-30 19:49:57, user Sanghyuk Shin wrote:

      Congrats to the authors on this monumental effort. However, the paper would be much stronger if it drew from the extensive literature on racism and its impact on mistrust of the health system among Black and other minoritized people, as summarized in https://www.healthaffairs.o...

    1. On 2021-11-01 21:20:05, user JS wrote:

      Any plans of clinical efficacy trials regarding:<br /> 1) prevention<br /> 2) transmission (effect of the index patient using the spray against infecting contacts)<br /> 3) early treatment (efficacy of antibody spray started after infection against symptomatic illness)?

    1. On 2021-11-14 09:40:17, user disqus_1lj0sBLhKD wrote:

      No P values are given.

      It would be very useful if the specific ARBs used and the various doses used were given, and to correlate this with the risk of getting COVID-19 and dying. There was a study done in Argentina:<br /> https://www.dropbox.com/s/t...<br /> using 180 mg Telmisartan (80 mg BID) which showed excellent results. Another study, using Losartan at a dose of only 50 mg per day, did not show any advantage for Losartan. See: https://www.medrxiv.org/con...<br /> The people who did the Telmisartan study chose Telmisartan because of its reputed superior binding affinity, longest half-life, high tissue concentrations, superior insurmountability, and superior activation of the PPAR gamma receptor. See page seven of the Argentina study.<br /> It would be very useful for the selection and design of future studies if any additional data could be shown that would shed light on this.

    1. On 2021-11-23 01:10:34, user Charles Warden wrote:

      Hi,

      Thank you very much for posting this preprint.

      I have a couple questions:

      1) Is it possible to add a bar for predictions made with clinical data alone (without genomic data) in Figure 3?

      2) Is it possible to give some sense of the number of samples / SNPs affected by each of the criteria in Supplemental Figure S9?

      Thanks again!

      Sincerely,<br /> Charles

    1. On 2021-11-28 23:39:37, user Silje Nes wrote:

      The study compares a group of PCR positive individuals to a group of PCR negative individuals, in order to find out what impact Covid infections have had on the use of healthcare services. The underlying assumption is that the PCR negative group is a representative selection of the general population that have not had Covid. This fails to acknowledge some important characteristics of the PCR negative group.

      To distinguish between individuals who had Covid or not, the authors look at all positive and negative tests within a given time frame. As mentioned in the paper, there was limited testing capacity in Norway during the first 3 months of the pandemic. Only a minority (typically healthcare workers and close contacts of confirmed cases) had access to PCR testing at the time of their first symptoms. The study concludes that the limited testing «affects the groups with COVID-19 and no COVID-19 to an equal extent». This is not entirely correct. Individuals without access to testing in the early months who were to develop persistent symptoms, would typically be tested several weeks or even months after first symptoms. Most individuals with Long Covid from the first months would therefore have only a negative PCR test result, and consequently end up as part of the comparison group. According to FHI’s own numbers, 220 had been admitted to intensive care by 10 May 2020, and 471 by 1 Feb 2021, implying almost half of the Covid infections took place before testing was available to the general population.

      Since we don’t know from the start what proportion of Covid infected people needed access to healthcare over an extended period of time, it is difficult to assert to what extent the outcome of the study is affected by these falsely negative individuals being part of the comparison group.

      The consequences of having persistent Covid symptoms without a formal diagnosis, in regards to use of healthcare services, is not clear. Doctors could choose to thoroughly examine the patient in order to rule out other morbidity, or tell the patient that it would pass by itself, and to wait it out, with no further examination.

      In addition to this, an unknown number of individuals will have had false negative PCR test results, and therefore be part of the «No Covid» comparison group despite having had Covid. <br /> Also significantly, the study fails to take into account the fact that many individuals would get tested because they were showing symptoms of Covid, and that this implies illness that could affect their use of healthcare services, regardless of cause. The selection in the comparison group is therefore skewed towards a part of the population who were sick.

      Thus, in actuality the groups that are compared look like this: <br /> - PCR positive – Covid infected<br /> - PCR negative – Three subcategories (unknown ratio): <br /> –– No symptoms (close contacts, general population)<br /> –– Symptoms (other disease)<br /> –– (Long) Covid infected, tested while no virus present

    1. On 2020-05-26 20:36:45, user C'est la même wrote:

      A lack of sequencing data limits the conclusions of this study. Suggestion that individuals were reinfected by the same strain is not confirmed due to lack of specificity of the serological testing. There is far greater genetic diversity of these strains compared to SARS-2. Just like influenza, subsequent infections in the few years following an infection are due to exposure to different strains, or similar strains but with significant drift in key antigenic proteins.

      Immunological memory is not dependent on high levels of circulating antibodies and hence the antibody kinetics do not tell us very much about long term immunity. The observed kinetics are similar to many other infections/vaccines and primarily reflect plasma cell kinetics, not memory-B-cell functions. So long as a small population of memory T-cells and B-cells are maintained, long term immunity will be maintained.

      I strongly suggest that a worldwide vaccination approach will be effective, even if, at worst, there is significant genetic variation that requires annual vaccinations.

    1. On 2020-05-26 21:50:22, user Sam Wheeler wrote:

      Good paper, I downloaded the pdf.

      We still don't have the answer: what if an adult has taken the BCG shot very recently. There are clinical trials that will answer this question, hopefully soon.

      In many countries, medical doctors refused to prescribe BCG vaccines to adults even before covid-19, and pharmacies don't stock the vaccine at all. In which countries can an adult easily buy a BCG vaccine, and in which countries it is nearly impossible?

    1. On 2020-05-27 01:46:56, user Dario Palhares wrote:

      I congratulate you on this preprint. Since 2014, in Bioethics, we have questioned quarantine measures as a simple excuse for the State to become absolutist; a State of Exception. Never in history has quarantine shown any effectiveness in reducing, modeling or preventing a single epidemic. I guess you've gotten interesting feedback here that will help you improve your work when published. Anyway, I would like to ask (if not beg) you to analyze data from some other European countries: Portugal, Greece, the Netherlands, Belgium, and for the USA, to split the data by state/region: NY, NYC, Florida, California.

    1. On 2020-05-27 02:53:57, user Divalent wrote:

      Are case data the date that test results were reported to the public, or the date the lab determined the test result, or the date of test sample was taken, or the date of first symptoms? (I'm trying to get a handle on what sewage detection tells us, and how it can be used. I.E., how much of the 7 day offset is due to asympt shedding, vs test-processing delays vs test result reporting delay, vs time from sympts to time of test.)

    1. On 2020-05-27 08:45:44, user Thomas Wieland wrote:

      Thanks for your comment! Unfortunately, there is no explicit behavioral measurement that could be used. However, there are some other findings which imply behavioral changes before the German "lockdown" started: surveys show an increasing awareness of SARS-CoV-2/COVID-19 in February and the first half of March (e.g. the Ipsos survey of February 2020). Moreover, the German Robert Koch Institute (RKI) documented an "abrupt" and "extremely unusual" decline of other respiratory diseases (with shorter incubation periods, such as influenza) from the beginning of March (calendar week 10). See the corresponding RKI paper (EpidBull 16/2020, pages 7-9). These findings imply more cautious behavior (staying at home when sick, physical distancing from strangers e.g. in public transport, thorough hand washing, careful coughing and sneezing, etc.). Also, hoarding started in the middle of February, which is, of course, an indicator of awareness of the Corona threat (though hoarding is not desirable or even "rational"...)

    1. On 2020-05-29 08:49:02, user David Cadrecha wrote:

      A similar study in Spain shows a 20% reduction in the number of deaths per day that social distancing started earlier.

      Looking at different countries and regions, a strong correlation between late intervention and number of fatalities is found.

      This should hold for any country and indicates that every single day of anticipation reduces deaths by roughly 20-25% (in the absence of other preventive actions).
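      Under a simple exponential-growth assumption, the claimed per-day effect corresponds to a growth rate of roughly 0.22-0.29 per day. A back-of-the-envelope sketch, not the cited study's model:

```python
import math

def deaths_averted_fraction(growth_rate, days_earlier):
    """Fraction of deaths averted by intervening `days_earlier` days
    sooner, if cumulative deaths grow as exp(growth_rate * t)."""
    return 1 - math.exp(-growth_rate * days_earlier)
```

      With growth_rate = 0.25, intervening one day earlier averts about 22% of deaths and three days earlier about 53%, consistent with the 20-25% per-day figure.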

      “LA PRÓXIMA VEZ DEBEMOS ACTUAR ANTES. Impacto de la precocidad de las intervenciones por Covid-19” (“NEXT TIME WE MUST ACT EARLIER. Impact of the timeliness of Covid-19 interventions”)

      https://t.co/TWfpDklLfo

    1. On 2020-06-01 09:16:36, user ??? wrote:

      Dear Colleague

      I am Jaehun Jung, the corresponding author of the paper.

      HIRA in Korea conducted a database update on May 15 that included 1,000 confirmed cases and over 150,000 controls. We will revise the manuscript based on a more detailed case definition and medication history.

      Preliminary analysis showed that most of the drugs presented in our study did not show any statistically significant effects. If you are using our research results in a systematic review or meta-analysis, be sure to consider this.

      Thank you

    1. On 2020-06-02 10:57:42, user Bruce Nelson wrote:

      The sample unit was the household. One person was tested per household. But SARS-CoV-2 is clustered by household, leading to possible underestimate of prevalence?

    1. On 2020-06-04 00:53:02, user Bruce Zweig wrote:

      The sentence ‘Our findings showed that only 4.22% of the overall population received 5ARI anti-androgen therapy’ should say ‘male patient population’ instead of ‘overall population.’

    1. On 2020-06-04 08:18:20, user Abderrahim Oussalah wrote:

      It could be insightful to have adjusted effect sizes for the GWAS after considering body-mass index and other potential risk factors (e.g., therapy with angiotensin-converting enzyme inhibitor / angiotensin receptor blocker) as covariates in the models?

    2. On 2020-06-05 07:12:39, user Matthias Hübenthal wrote:

      Thanks to Ellinghaus et al. for sharing these interesting results. The authors utilized rs8176747, rs41302905 and rs8176719 to predict ABO blood types. Combinations of the inferred blood types then have been used to predict case/control status employing logistic regression. Alternatively, one could base the prediction on a genetic risk score incorporating the ABO SNPs. Boxplots of the risk scores could then be used to illustrate group-wise differences. However, for completeness association results for the ABO SNPs should be reported and discussed.
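      A minimal sketch of the suggested score follows; the rsIDs are the three ABO SNPs named above, but the per-allele weights are purely hypothetical placeholders, not values from the paper:

```python
# Hypothetical per-allele log-odds weights for the three ABO SNPs.
WEIGHTS = {"rs8176747": 0.10, "rs41302905": -0.25, "rs8176719": 0.32}

def genetic_risk_score(dosages):
    """Additive score: risk-allele dosage (0, 1, or 2) times weight,
    summed over the SNPs in the score."""
    return sum(WEIGHTS[snp] * dosages.get(snp, 0) for snp in WEIGHTS)
```

      Boxplots of this score by case/control status would then illustrate the group-wise differences the comment proposes.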

    1. On 2020-06-04 09:47:03, user Malcolm Semple wrote:

      Dear Authors, Great paper, well written. You reference Docherty et al. as unpublished. This is now published in the BMJ: Docherty Annemarie B, Harrison Ewen M, Green Christopher A, Hardwick Hayley E, Pius Riinu, Norman Lisa et al. Features of 20 133 UK patients in hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: prospective observational cohort study BMJ 2020; 369:m1985

      Did you identify distinct symptom clusters as we did?

      Wishing you well

      Calum<br /> CI ISARIC4C & CO-CIN

    1. On 2020-06-05 20:23:05, user Steve S wrote:

      It is good to see this strange weekly cycle being addressed by the research community. The hypothesis that contracting the virus on the weekend because human behavior changes on these days—set by our arbitrary definition of time (a week)—sets the trend of the weekly cycles in viral metrics (cases reported and deaths) is appealing. However, it seems a tad odd to me that the explanation of the death rate being weekly is completely dependent on it having a cycle that is divisible by a week, i.e., 14 days is two weeks. If, say, the death rate peaked at 10 days instead, then you would expect interference patterns between previous weeks to create something analogous to beat frequency in sound, where there would be several irregular peaks within a week and the weeks could look different from each other. 14 days would therefore have to be a perfect coincidence, which just seems unlikely... but still possible, I guess. I'm a neuroscientist, not an epidemiologist, so forgive my ignorance, but are there examples of other infectious diseases that have weekly trends? Are the cases reported and death rates also weekly in those cases?

    1. On 2020-06-06 15:24:15, user wbgrant wrote:

      The analysis of data from European and perhaps other countries is problematic for a couple of reasons. One is that the 25OHD concentrations used may not be appropriate for those who develop COVID-19, due to age mismatch or not being winter values.<br /> Another is that life expectancy, an index for the fraction of the population that is elderly, has a stronger influence on COVID-19 rates than does 25OHD. See this preprint:<br /> Kumar V, Srivastaa A. Spurious Correlation? A review of the relationship between Vitamin D and Covid-19 infection and mortality<br /> https://www.medrxiv.org/con...<br /> I verified their findings by using more recent case and death rate data.

    1. On 2020-06-07 22:27:50, user TNT wrote:

      Why did the doctors only administer 1000 IU/d? More serum vitamin D would have had a greater immunoregulatory effect. Optimal immune regulation is achieved at 100 nmol/L, and many studies have demonstrated that 4,000 IU/d is safe. I agree with the need to identify patients’ serum levels before the trial began.

    1. On 2020-06-08 02:08:18, user Simin wrote:

      Hello from Istanbul, <br /> Not a science person, just a concerned human being. <br /> I have a question if I may. <br /> The water basin siphons mentioned in the article: are they only restroom siphons, or does the research include the siphons of the kitchen basins too? <br /> The reason for my question is to figure out whether grocery-cleaning habits may have left any virus particles in the kitchen sinks.

      Thank you for all your efforts and kind reply if possible.

      Simin

    2. On 2020-06-12 04:50:15, user Paul_Vaucher wrote:

      Dear authors,

      Thank you for this interesting article of major interest. I find the process and research question to be most relevant. I however have a few questions that remain open to understand how the study could come to the conclusion that aerosols and surfaces were not important vectors of covid-19.

      1) What is the external validity of the results for making inferences about infectiousness over the entire period people could be carriers of the disease? In this study, most participants had already been in quarantine for 5 days. Repeated sampling has shown viral load to be optimal in the upper airways 2 days before and 2 days after symptoms appear. Viral load from nasal and throat swabs drops to a rate where viral culture becomes difficult from 8 days onwards. Most of the study participants were probably beyond that point and were therefore not expected to be very infectious in the first place. If they exist, infections through secondary contact and aerosols are, however, more likely when viral loads are high. It therefore seems difficult from the collected data to infer that household infection through these vectors is unlikely at all times.<br /> 2) When comparing risks from different surface types, how do the authors justify the use of chi2 statistics with a sample smaller than 200 and all positive cells with fewer than 5 cases? In this condition, type 2 errors are very high and this test should not be used. The number of positive tests is too low to answer the question of whether different surface types are more or less potent vectors of the disease.<br /> 3) Statistical inference assumes independence between measures. This is clearly not the case, as a median of 9 samples was taken from each household. Statistical methods should therefore account for these clustering effects. However, the sample size is probably too small for this, and a purely descriptive approach without inference could be more relevant.<br /> 4) Could we have any indication of viral load from throat swabs in household cases? If their viral loads were low, we wouldn't expect contamination to happen anyway. In two of your 21 households, there was apparently not a single case with a positive PCR. This might suggest viral loads were too low for any form of infection to have occurred in these households. It seems important to document to what extent each household had at least one person who could infect the air and surfaces.<br /> 5) Likewise, to document the risk of infecting the air, were any samples of direct breathing taken from cases within each household? This seems important, as we would not theoretically expect viruses to be present in ambient aerosols if they were difficult to find in air breathed out by cases.

      This study investigates an important question. I am, however, not convinced that the method used truly answers the question as the public seems to understand it. There is indeed room for misinterpretation, and for the public to conclude that contact and air contamination never occur.

      To avoid any overinterpretation, it seems important to clarify that this study only tests the risks of air and surface contact days after people have been placed in quarantine, when we no longer suspect them to be very infectious.
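      As a sketch of the alternative to chi2 raised in point 2: an exact test avoids the small-cell approximation entirely. The two-sided Fisher exact test for a 2x2 table can be computed directly (a generic stdlib-only illustration, not the authors' analysis):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    r1 = a + b          # first row total
    c1 = a + c          # first column total
    def p_table(x):
        # Probability of a table whose top-left cell equals x.
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = p_table(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))
```

      For the table [[8, 2], [1, 5]] this gives p ≈ 0.035, matching standard implementations; unlike chi2, it remains valid when cells hold fewer than 5 counts.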

    1. On 2020-06-08 13:40:04, user bvwredux wrote:

      Exine, the outer layer of the pollen: it is incredible stuff. Also the "remnants of the tapetal cells" found in the nooks and crannies of the exine layer. Either or both of them may be potently anti-viral against the coronavirus -- that's my speculation. There is little pollen in bat caves. See https://www.ncbi.nlm.nih.go...

    1. On 2020-06-08 20:18:31, user itdoesntaddup wrote:

      I did my own empirical research along these lines for local authorities in England, finding a power law relationship between cases reported by Public Health England and population density, summarised in this chart, made before there was a change in the testing regime:

      https://datawrapper.dwcdn.n...
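      A power-law fit of that kind is usually done by ordinary least squares in log-log space. A generic sketch of the method, not necessarily the one behind the chart:

```python
import math

def power_law_fit(density, cases):
    """Fit cases ~ a * density**b by least squares on log-transformed data.

    Returns (a, b); b is the slope in log-log space, i.e. the exponent.
    """
    xs = [math.log(d) for d in density]
    ys = [math.log(c) for c in cases]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b
```

      An exponent b above 1 would indicate superlinear scaling of cases with population density, the signature the Santa Fe urban-scaling work looks for.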

      I was inspired to put it together through being a long-term observer of the output of the Santa Fe Institute (including some of the papers written by Luís Bettencourt under their aegis). I found that Geoffrey West had published a short note there on the same topic a few days later:

      https://www.santafe.edu/new...

    1. On 2020-06-10 19:55:05, user Sebastian Rosemann wrote:

      Dear authors,

      this is an interesting overview study. However, many questions arise concerning the quality of the data, as well as a systematic question.


      Systematic

      How do the authors ensure that the uniform reporting delay of ~10 days reflects the real pandemic curve, e.g. by comparing published reproduction rates against the rates used to estimate the effects of NPIs? How do the authors deal with overestimating certain NPIs when comparing their impact rates to local observations?

      For the German numbers we have the following discrepancy:<br /> The estimated reproduction number is based on reported cases using a delay of ~10 days from infection to confirmation. This does not seem appropriate, as a study from Germany and the discussion around it show.<br /> If one simply uses the reported-cases curve, one may get wrong drop rates for NPIs.<br /> This Science study first used the reported cases:

      https://science.sciencemag....

      As stated by the authors in a technical note, the drop rates are quite different if one uses the real epi-curve with exact symptom onset (where available):

      https://github.com/Priesema...

      Figure 19 shows a model-based three-change-point approach and the impact.<br /> Figure 16 shows the same model fitting the curve with reported cases.<br /> Note the drop rate of the first intervention (which was the cancellation of gatherings > 1000).

      The reproduction numbers in this study lead to a totally different conclusion, as the changes in R are not correct and the ban on gatherings > 1000 as the first NPI is not introduced correctly, which gives the closing of schools an overestimated impact.

      A closer look at the reproduction number of the Netherlands reveals the same.<br /> Drops are visible, but not at this intensity:<br /> https://www.rivm.nl/documen...
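
      To illustrate the delay issue raised here, a minimal toy sketch may help (this is not the study's model; the 10-day delay and the growth rates are purely illustrative numbers): if reported cases are just infections shifted by a reporting delay, a change point read off the reported-case curve lands correspondingly late.

      ```python
      # Toy sketch (illustrative, not the study's model): reported cases are
      # modelled as infections shifted by a fixed reporting delay, so the
      # turning point in reports lags the true turning point by `delay` days.
      import math

      delay = 10       # assumed infection-to-confirmation delay, in days
      change_day = 20  # true day on which the NPI flips the growth rate

      infections = []
      x = 100.0
      for day in range(60):
          rate = 0.15 if day < change_day else -0.05  # growth flips after the NPI
          x *= math.exp(rate)
          infections.append(x)

      # Reported cases: infections shifted by the delay (noise-free toy case).
      reports = [float("nan")] * delay + infections[:-delay]

      peak_infections = infections.index(max(infections))
      peak_reports = max(range(delay, 60), key=lambda d: reports[d])
      print(peak_infections, peak_reports)  # peak in reports lags by `delay` days
      ```

      Reading change points off the reported curve without shifting them back by the delay therefore mis-dates the NPI effect, which is the kind of discrepancy described above.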


      Data quality

      A look at the intervention dates in different countries raises questions concerning the quality of these data. Some of the findings to discuss are the following:

      Belgium:<br /> Large gatherings were effectively cancelled from around March 10th to 13th<br /> https://en.wikipedia.org/wi...

      Bulgaria:<br /> 10 out of 28 regions closed on March 4th<br /> https://www.bnr.bg/en/post/...<br /> The general closure happened on March 13th<br /> https://en.wikipedia.org/wi...

      Germany:<br /> Gatherings > 1000 were effectively cancelled from March 9th, one week before schools closed.<br /> https://en.wikipedia.org/wi...<br /> School closures were announced on March 13th but started on March 16th.

      Finland:<br /> https://www.reuters.com/art...<br /> Mainly closed since March 18th.

      This list is open-ended and includes only a few findings that should be discussed or taken into account in the estimation. I know that interventions are not always easy to categorize clearly, but this should be better reflected by uncertainties in the estimates.

    1. On 2020-06-10 21:37:52, user La-Thijs Mokers wrote:

      HCQ isn't even the active component in the anecdotal cases of successful treatment. You have to administer the HCQ together with a zinc supplement, otherwise nothing will happen for sure. You also need to get the timing right: this suggested treatment will only work during the early stage of infection, when the viral load is relatively low. Here HCQ merely functions as an ionophore for the Zn2+ ions ( https://www.ncbi.nlm.nih.go... ), so that they can easily cross the cell membrane into the cell, where they inhibit viral replication ( https://www.ncbi.nlm.nih.go... ). Nothing fancy to it if you know how to use freaking Google. Of course, misleading studies will continue to be published -- like the recent Lancet drama of Mehra et al. -- leaving out zinc and testing ridiculously high dosages of HCQ on very ill patients with a sky-high viral load. No wonder you get a negative result if your research setup is designed to fail like that.

    1. On 2020-06-11 03:51:09, user kpfleger wrote:

      I echo Helga Rein's request for data on vitamin D levels of COVID-19 patients in your data. I emailed Ben Goldacre and the OpenSafely team email address suggesting this on May 14 but have received no response. The data implicating low vitamin D levels as causally worsening the severity of COVID-19 infection is now very compelling. For a 1-page summary of the facts with links to supporting sources see: http://agingbiotech.info/vi...

      The world needs a dataset with n=10,000+ examining vitamin D levels in COVID-19 patients.

    1. On 2020-06-11 18:36:28, user Ruth Kriz wrote:

      This is consistent with my findings in other chronic infections: about 55% have PAI-1 or Factor V Leiden mutations that prevent them from up-regulating their thrombin/antithrombin complexes, or elevated lipoprotein(a) that binds tPA when inflammation triggers the clotting pathway.

    1. On 2020-06-12 15:42:05, user DFreddy wrote:

      I miss mental health as a risk factor. Every human has a mind as well as a body, and we know from piles of evidence that the mind impacts physical health too. I hope it will be included in a future analysis. It seems very elementary to do, no?

    1. On 2020-06-14 12:52:28, user Nayo57 wrote:

      The best recent seroprevalence studies, from NYC and Bergamo, yield roughly 1,500 deaths per 100k infections, or a crude IFR of 1.5%. With Germany's crude CFR of about 4.5%, the total number of infected would be around 3 times the official estimate. We have to await age-stratified data to refine this estimate.

      On the other hand, the CFR for medical staff in Germany as reported by the RKI is about 0.15%, vs. 0.2% for the 20-60-year age groups when adjusted to the gender mix of medical staff. This would put the underreported fraction of cases in the range of about 30%.
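
      The back-of-envelope arithmetic in this comment can be sketched as follows (all numbers are taken from the comment itself; the helper function is purely illustrative):

      ```python
      # Sketch of the commenter's arithmetic: if the true IFR is ~1.5% but the
      # crude deaths-per-confirmed-case ratio is ~4.5%, then true infections
      # must outnumber reported cases by the ratio of the two figures.

      def implied_underreporting(crude_cfr: float, assumed_ifr: float) -> float:
          """Ratio of true infections to reported cases under the assumed IFR."""
          return crude_cfr / assumed_ifr

      ifr = 1500 / 100_000  # ~1500 deaths per 100k infections, i.e. 1.5%
      ratio = implied_underreporting(crude_cfr=0.045, assumed_ifr=ifr)
      print(f"true infections = {ratio:.0f} x reported cases")  # -> 3 x
      ```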

    1. On 2020-05-20 09:31:25, user Reks wrote:

      Two comments and one question:<br /> 1. I think your reference 26 got mixed up? (here: "In a recent systematic review we concluded that the evidence in favour of face mask use outside of hospital was weak. 26")<br /> 2. The measures data are not entirely accurate: face masks were made mandatory in Poland on April 16th.<br /> 3. Assume that countries tended to close down schools at roughly the same epidemic stage. Your models, however sophisticated, would then not be able to tease apart the effects of school closures from any limiting factors inherent in the course of this epidemic, let's call them "natural" factors for want of a better term. Or would they? If not, should you mention that in the limitations? Can you perhaps check whether this is likely to be the case (i.e. closures or other measures tending to be introduced at similar stages across quite a few countries)?

    1. On 2020-05-20 19:25:43, user Christian Gibbs wrote:

      Please note the disclaimer at the top of the article:

      This article is a preprint and has not been peer-reviewed. It reports new medical research that has yet to be evaluated and so should not be used to guide clinical practice.

    1. On 2020-04-24 07:15:48, user Rajendra Kings Rayudoo wrote:

      TO <br /> Yoann Madec, Rebecca Grant,

      I have gone through your paper above and have two questions:<br /> 1) Can the antibodies that are transferred from one person to another have a long-term effect in fighting the antigen?

      2) Can the donor continue to produce anti-SARS antibodies after donation?

      Thanking you <br /> with regards <br /> Rajendra

    1. On 2020-04-24 09:51:12, user Rajendra Kings Rayudoo wrote:

      TO<br /> Kamalini Lokuge, Emily Banks, Stephanie Davis, Leslee Roberts, Tatum Street, Declan O'Donovan, Grazia Caleo, Kathryn Glass

      I read the above paper and am very happy to hear of the decline of corona in Australia. As you mentioned, the measures would need to be taken in a populous country like India, which is 70 times bigger than Australia; still, the mathematical models and the way of finding asymptomatic carriers are fascinating.

      I request you to please explain the most efficient methods for identifying and eliminating asymptomatic carriers.

      Thanking you <br /> with regards <br /> Rajendra

    1. On 2020-04-24 15:57:17, user Rajendra Kings Rayudoo wrote:

      To<br /> Manisha mandal , shyamapada mandal

      Everything is OK, but what about the analysis of asymptomatic and presymptomatic carriers, which are gravely dangerous for spread?

      Moreover, India is at a stage where it is entering community transmission.

      Regards<br /> Rajendra

    1. On 2020-04-25 14:04:04, user Rosemary TATE wrote:

      Hi, I don't see the STROBE guidelines checklist (for observational studies) uploaded, although you ticked yes to this:<br /> "I have followed all appropriate research reporting guidelines and uploaded the relevant EQUATOR Network research reporting checklist(s) and other pertinent material as supplementary files, if applicable."<br /> A lot of people seem to ignore these, but they are important and any good journal will require them.<br /> Can you please upload it? Many thanks.

    1. On 2020-04-25 20:17:25, user Pasquale Valente wrote:

      The study also shows extraordinarily good news. Too bad that the authors do not underline it. So, while we thank them for the work done, let us make clear the positive numbers that can be glimpsed between the lines. As far as I understand, the study is based on two surveys conducted between 21 February and 7 March, which covered 85% (2812 people) and 71.5% (2343 people) of the population of Vo' Euganeo (PD), the town of 3300 inhabitants where, on February 21, the first death from pneumonia attributed (by whom, and on what basis?) to SARS-CoV-2 infection in Italy occurred. The study does not report how the case of pneumonia was defined, nothing regarding the clinical picture, nor the anatomo-pathological findings. It refers to a news item learned from the press (a 78-year-old cardiopathic man, who went through several stays in intensive care, died on that sad day). The study seems interested in elucidating the mechanisms of transmission of the virus and in particular the dynamics of its onward transmission between symptomatic and asymptomatic subjects. The study also produces some useful data. The prevalence of SARS-CoV-2-positive cases was 2.6% (73 positive tests / 2812 tests) in the first survey and 1.2% (29 positive tests / 2343 tests) in the second survey on 7 March. How many symptomatic cases with positive tests? The table shows 43 symptomatic subjects / 2812 subjects tested, equivalent to 1.5%. In the second survey 16/2343 symptomatic cases were found, equal to 0.7%. Isn't this good news? Only 7-15 per 1000 inhabitants of Vo' Euganeo manifested fever or cough in the winter period in a town of Veneto. Meanwhile, the Schiavonia Hospital, where the man died, was first closed and then reopened as a COVID hospital.
      Maybe this is also good news: we are preparing, as best we can, for the next pandemic. The study claims to have also collected data on the progression of symptoms and hospitalization of some subjects. Well, we will look forward to seeing them in a new paper. <br /> Best Regards
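
      Recomputing the fractions quoted in this comment from the raw counts (the counts are taken from the comment itself; note that, expressed as percentages, 43/2812 and 16/2343 are 1.5% and 0.7%):

      ```python
      # Recomputing the percentages from the raw counts quoted in the comment.
      first_survey = {"positive": 73, "tested": 2812, "symptomatic": 43}
      second_survey = {"positive": 29, "tested": 2343, "symptomatic": 16}

      for name, s in [("first", first_survey), ("second", second_survey)]:
          prevalence = 100 * s["positive"] / s["tested"]
          symptomatic = 100 * s["symptomatic"] / s["tested"]
          print(f"{name}: prevalence {prevalence:.1f}%, symptomatic {symptomatic:.1f}%")
      # first: prevalence 2.6%, symptomatic 1.5%
      # second: prevalence 1.2%, symptomatic 0.7%
      ```

      The "7-15 per 1000 inhabitants" figure in the comment corresponds to these 0.7%-1.5% symptomatic fractions.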

    1. On 2020-04-26 00:39:15, user tsuyomiyakawa wrote:

      There are two major issues that make the design of this study inappropriate for examining the BCG hypothesis.

      1. The efficacy of BCG is supposed to wane over time, so most of the protective effect of BCG in aged people, if any, is supposed to be mediated largely by herd immunity. Herd immunity would occlude the discontinuity.

      2. BCG is an attenuated relative of the tuberculosis (TB) bacterium, and TB infection would exert equivalent or even stronger protective effects under the BCG hypothesis. Before the implementation of BCG policy, most of these countries were high-TB-burden countries. So, under the BCG hypothesis, aged people in those countries would be expected to be protected by their experience of TB infection in a similar way to how BCG protects. Note that Vietnam and Thailand are still high-TB-burden countries.

      Also, there are a few minor issues that I'd like to point out.

      1. In Czechia, it is interesting that some children under 10 years old, who are not covered by BCG, tested positive. In the other countries, few children, who are covered by BCG, tested positive.

      2. In Figure 1 or in the Supplementary Figures, similar panels for the other analyzed countries should also be shown.

      3. The raw data on which Fig 2 is based should be made available. The apparent positive correlation between BCG coverage and the log cases per thousand is interesting, but it is likely to be a spurious correlation. Trying to identify the factor underlying such a correlation would be important.

    1. On 2021-01-24 19:40:01, user Han-Kwang Nienhuys wrote:

      I have further analyzed the data in Fig. 2; the odds ratios (frequency ratio B.1.1.7 / other) grow exponentially, with daily growth factors between 1.06 and 1.09 between 6 weeks and 1 week before the end of the data (considering only the UK regions where the error bars in Fig. 4 were reasonably small: EE, EMid, London, NEE, SEE, SWE, WMid). For this I need to assume that a fraction of the SGTF cases are 'false positives', since most regions show a constant SGTF rate in October, before it takes off with exponential growth.

      Also notable: genomic analyses in UK SEE, Denmark, the Netherlands, and Portugal consistently show growth rates between 7 %/d and 9.4 %/d, with only Denmark showing a slowdown (from 12 %/d to 7 %/d).

      Also, one would expect the odds ratio to grow exponentially over time if there were just two competing variants, each with its own transmissibility or reproduction number. However, the other strains that make up everything other than B.1.1.7 are likely to have slightly different transmissibilities. Over time, one would expect the transmissibility to drift to higher values among those other strains as well. The fact that the odds-ratio growth rate is decreasing does not necessarily mean that B.1.1.7 is getting less infectious; rather, the mixture of other strains could be getting more infectious over time, simply because the contribution of the less infectious ones in the mix gradually decreases.
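
      This mixture argument can be illustrated with a toy model (all growth rates are invented for illustration, not fitted to the data): pit one variant against a pool of two "other" strains with different growth rates, and the log-odds growth rate of the variant declines over time even though the variant itself never changes.

      ```python
      # Toy model: a B.1.1.7-like variant vs a mixture of two "other" strains.
      # As the fitter "other" strain takes over the pool, the variant's
      # advantage over the pool shrinks, so the odds-ratio growth rate falls.
      import math

      g_variant = 0.09        # variant daily growth rate (illustrative)
      g_other = (0.00, 0.05)  # two "other" strains, one fitter than the other

      def odds(t: float) -> float:
          """Odds of the variant vs the 'other' pool, equal starting shares."""
          return math.exp(g_variant * t) / sum(math.exp(g * t) for g in g_other)

      def log_odds_rate(t: float, h: float = 0.5) -> float:
          """Instantaneous growth rate of the log-odds (central difference)."""
          return (math.log(odds(t + h)) - math.log(odds(t - h))) / (2 * h)

      print(f"early: {log_odds_rate(5):.3f}/d, late: {log_odds_rate(60):.3f}/d")
      # The late rate is smaller, although g_variant never changed.
      ```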

      Summarizing: I believe that 6 %/d is an estimate that is significantly too low.

      For graphs of my analysis, please see https://twitter.com/hk_nien... .

    1. On 2021-01-26 07:48:38, user Oliver Kumpf wrote:

      This is an interesting study. Regarding some of the analyses, I would be interested in the distribution of organ dysfunction. Were vasopressor-free days and pulmonary-support-free days equally distributed in the therapy vs. control cohorts? Which age groups profited most? Younger patients were much more likely to survive, as represented in the supplementary material. The Kaplan-Meier curves were presented without statistical analysis. Was there a statistically significant difference between pooled IL-6Ra-treated patients and controls? What is the number needed to treat? This therapy is expensive, and especially in countries with restricted resources it would be almost impossible to use such a treatment.