AI-concerned think the risk that a genetically engineered pathogen will kill more than 1% of people within a 5-year period before 2100 is 12.38%, while the AI skeptics forecast a 2% chance of that event, with 96% of the AI-concerned above the AI skeptics' median forecast
This seems like an ad hoc way of breaking up the data. What exactly is the question here, and why is this the best way to answer it?