1 Matching Annotations
  1. May 2021
    1. Error bars would be nice. They're MIA in large swathes of COVID-related research. I've read a lot of COVID papers in the past year and this paper is typical of the field. Things you should expect to see when reading the epidemiology literature:

       1. Statistical uncertainty is normally ignored. They can and will tell politicians to adopt major policy changes on the back of a single dataset with 20 people in it. In the rare cases when they bother to include error bars at all, the bars are usually so wide as to be useless. In many other fields researchers debate P-hacking and what threshold of certainty should count as a significant finding. Many people observe that the standard of P=0.05 in e.g. psychology is too lax, because it means roughly 1 in 20 tests of a true null hypothesis will produce a significant-but-untrue finding by chance alone (the first sketch after this list makes that arithmetic concrete). Compared to those debates, epidemiology is in the stone age: any claim that can be read into any data is considered significant.

       2. Rampant confusion between models and reality. The top-rated comment on this thread observes that the paper doesn't seem to test its model predictions against reality, yet it makes factual claims about the world. No surprises there; public health papers do that all the time. No one except out-of-field skeptics actually judges epidemiological models by their predictive power. Epidemiologists admit this problem exists, but public health has become so corrupt that they argue being able to correctly predict things is not a fair way to judge a public health model [1]. Obviously they insist governments should still implement whatever policies the models say are required. It's hard to get more unscientific than culturally rejecting the idea that science is about predicting the natural world, but multiple published papers in this field have argued exactly that. A common trick is "validating" a model against other models [2].

       3. Inability to do maths. Setting up a model with reasonable assumptions is one thing, but do they actually solve the equations correctly? The Ferguson model from Imperial College, home to what we're widely assured is one of the world's top teams of epidemiologists, was written in C and riddled with race conditions and out-of-bounds reads that caused the model to totally change its predictions due to timing differences in thread scheduling, different CPUs, compilers, etc. These differences were large, e.g. a difference of 80,000 deaths predicted by May for the UK [3]. Nobody in the academic hierarchy saw any problem with this, and worse, some researchers argued that such errors didn't matter because they just ran the model a bunch of times and averaged the results. That confuses the act of predicting the behaviour of the world with the act of measuring it; see point (2). (The second sketch after this list shows how thread scheduling alone can change a numerical result.)

       4. Major logic errors. Assuming correlation implies causation is totally normal. Other fields use sophisticated approaches to try to control for confounding variables; epidemiology doesn't. Circular logic is a lot more common than normal, for some reason.
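To make point 1 concrete, here is a quick self-contained simulation; this is my own toy code, not from any paper, and every number in it is invented. It shows how often a "significant" result at P < 0.05 appears when there is no real effect at all:

```python
# Toy illustration: how often a p < 0.05 "finding" appears when every
# null hypothesis is actually true. All parameters are made up.
import random
import statistics

random.seed(0)

def one_null_study(n=20):
    """Compare two groups drawn from the SAME distribution (true effect = 0)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch-style t statistic computed by hand to keep this dependency-free.
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    t = (ma - mb) / ((va / n + vb / n) ** 0.5)
    return abs(t) > 2.02  # ~ two-tailed critical value for p < 0.05 at these df

false_positives = sum(one_null_study() for _ in range(10_000))
print(f"'significant' findings with no real effect: {false_positives / 10_000:.1%}")
# Prints roughly 5%, i.e. about 1 study in 20 is a fluke at P = 0.05.
```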
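And for point 3: you don't even need a memory-corruption bug for threading to change a model's output. Floating-point addition isn't associative, so merely accumulating the same numbers in a different order, which is exactly what thread scheduling does to a parallel reduction, shifts the result. A minimal sketch of that mechanism (again my own illustration, not covid-sim's actual code):

```python
# Summing the same values in two different orders gives two different
# totals, because floating-point addition is not associative.
import random

random.seed(1)
values = [random.uniform(-1e9, 1e9) for _ in range(100_000)]

shuffled = values[:]
random.shuffle(shuffled)  # stand-in for a different thread interleaving

print(sum(values) == sum(shuffled))       # usually False
print(abs(sum(values) - sum(shuffled)))   # a real, nonzero drift
# In a multithreaded model the accumulation order is set by the scheduler,
# so identical inputs can yield different outputs on each run -- and an
# actual data race or out-of-bounds read, as in [3], can move the answer
# arbitrarily far, not just by rounding error.
```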
None of these problems stop papers being published by supposedly reputable institutions in supposedly reputable journals. After reading or scan-reading about 50 epidemiology papers, including some older ones from 10 years ago, I concluded that not a single thing from this field can be trusted. The problems aren't specific to COVID; they're cultural, and they have been around a long time.

Life is too short to examine literally every paper making every claim, but if you take a sample and nearly all of them contain basic errors, or what is clearly actual fraud, then it seems fair to conclude the field has no real standards.

[1] "few models in healthcare could ever be validated for predictive use. This, however, does not disqualify such models from being used as aids to decision making ... Philips et al state that since a decision-analytic model is an aid to decision making at a particular point in time, there is no empirical test of predictive validity. From a similar premise, Sculpher et al argue that prediction is not an appropriate test of validity for such models" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3001435/

[2] https://github.com/ptti/ptti/blob/master/README.md

[3] https://github.com/mrc-ide/covid-sim/issues/116
    https://github.com/mrc-ide/covid-sim/issues/30
    https://github.com/mrc-ide/covid-sim/commit/581ca0d8a12cddbd...
    https://github.com/mrc-ide/covid-sim/commit/3d4e9a4ee633764c...

oldgradstudent replied (2 days ago):

It gets worse. You should look at the observational studies measuring vaccine effectiveness in Israel coming from Balicer and his group. They report the effect of the vaccine on the number of positive cases without even mentioning that vaccinated individuals are not routinely tested under Ministry of Health policy, or that the main reason people get tested at all is to shorten the isolation period after contact with COVID-19 cases, an isolation from which vaccinated individuals are exempt.
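To see how big that bias can be, here is a back-of-the-envelope sketch of what differential testing does to measured effectiveness; all rates are invented for illustration, and this is not Balicer's data:

```python
# If vaccinated people are tested less often, counting *positive tests*
# exaggerates the measured effectiveness. Every number here is made up.
true_infection_rate = 0.010     # same underlying exposure for everyone
true_vaccine_efficacy = 0.60    # vaccine really blocks 60% of infections

test_rate_unvaccinated = 0.50   # tested to shorten isolation, etc.
test_rate_vaccinated = 0.10     # exempt from isolation, rarely tested

detected_unvax = true_infection_rate * test_rate_unvaccinated
detected_vax = (true_infection_rate * (1 - true_vaccine_efficacy)
                * test_rate_vaccinated)

measured_ve = 1 - detected_vax / detected_unvax
print(f"true efficacy:       {true_vaccine_efficacy:.0%}")   # 60%
print(f"measured 'efficacy': {measured_ve:.0%}")             # 92%
```

With these made-up numbers, a vaccine that truly blocks 60% of infections appears to block 92%, purely because the vaccinated group is tested a fifth as often.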
