Not, uh, those kinds of models. We know what it is about them.
I’m talking about predictive models, whose object is to use whatever data is available to map the statistical likelihood of particular future events. These days such models are roughly as numerous as air molecules, since businesses and governments are obsessed with mitigating risk. As man’s ability to travel through time has been unfortunately slow to develop and the traditional ways of obtaining knowledge of the future — visiting fortune tellers or examining the entrails of animals sacrificed to the gods — are currently out of fashion, predictive models are pretty much all we’re left with.
I don’t mean to suggest that these models are completely worthless, only to emphasize that they are by definition based on incomplete data and must always be taken with a grain of salt. Sometimes, depending on the amount of data lacking, with a whole salt mine.
Even so, we are continually seeing them cited without qualification, as if they were actual intel reports from the future. Just last week the Institute for Health Metrics and Evaluation (to whose model the White House's projections are heavily indebted) caused a minor panic when it released its then-new projection of the progression of COVID-19. That projection was fairly dire, predicting a death toll between 100,000 and 240,000 in the U.S. by the end of this thing, even with our present social-distancing measures in place. Well, fast-forward just a single week: after it was widely noted that deaths in New York City were leveling off and hospitalizations were declining even though IHME's model held that both would increase for five more days, the institute announced that it had significantly revised its projections, cutting the projected death toll by 12 percent and the projected number of necessary hospital beds by 58 percent. Which is great, but it also makes you a little suspicious of the new numbers.
Benny Peiser and Andrew Montford of the indispensable Global Warming Policy Foundation have a piece in the Wall Street Journal about this same issue. They begin with a discussion of the two principal British models of the pandemic’s progression, which present wildly different conclusions, and upon which the British government is making its decisions:
Several researchers have apparently asked to see Imperial [College]’s calculations, but Prof. Neil Ferguson, the man leading the team, has said that the computer code is 13 years old and thousands of lines of it “undocumented,” making it hard for anyone to work with, let alone take it apart to identify potential errors. He has promised that it will be published in a week or so, but in the meantime reasonable people might wonder whether something made with 13-year-old, undocumented computer code should be used to justify shutting down the economy.
Peiser and Montford’s work at the GWPF makes them uniquely qualified to comment on the unreliability of predictive models, because long before that fateful bowl of bat stew changed the world, climate scientists were dramatically announcing the headline-grabbing conclusions of opaque processes and fuzzy math. For one extremely significant example of this, check out this interview with Ross McKitrick, a Canadian economist who began applying his expertise in statistical analysis to climate change studies and was surprised by what he uncovered. Rex Murphy is the host:
At the 40:40 mark, Dr. McKitrick tells the story of how he and a Toronto mining executive named Stephen McIntyre began looking into the data used by American climatologist Michael Mann in developing his Hockey Stick Graph. That graph had displaced the general climate consensus of the time, which held that climate had always moved in waves of alternating warm and cold periods, and purported to show, through the examination of tree rings, “that at least for the past thousand years it was really just a straight cooling line and then you get to the 20th century and the temperature begins to soar very rapidly. We’re riding up the blade of the stick.”
In the clip above, McKitrick discusses the origins of his skepticism concerning Mann’s theories, which had revolutionized the field of climatology and given rise to mountains of carbon regulations. After finally accessing what appeared to be the underlying data set and trying to replicate Mann’s conclusions, says McKitrick, the hockey stick graph “really wasn’t robust, that you could get all kinds of different shapes with the same data set based on minor variations in processing. We also identified some specific technical errors in the statistical analysis. Really, our conclusion was, they can’t conclude anything about how our current era compares to the medieval period. The data and their methods just aren’t precise enough and they’re overstating the certainty of their analysis… The methods were wrong, the data is unreliable for the purpose, and I would just say that that graph is really uninformative about historical climate.”
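For readers who want to see what "different shapes from minor variations in processing" looks like in practice, here is a toy sketch in Python. It is emphatically not Mann's actual reconstruction code; the random-walk "proxies," window sizes, and function names are invented for illustration. It plays with one of the processing choices McIntyre and McKitrick actually criticized: "short-centering," i.e., normalizing each data series against only the modern end of the record rather than the whole record, before extracting the leading principal component.

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years = 50, 200

# Toy "proxies": independent random walks containing no climate signal at all.
proxies = np.cumsum(rng.normal(size=(n_series, n_years)), axis=1)

def pc1(data, center_window):
    """First principal component after centering each series on a chosen window."""
    centered = data - data[:, center_window].mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

full_pc1 = pc1(proxies, slice(None))        # center each series on its full record
short_pc1 = pc1(proxies, slice(-20, None))  # center only on the last 20 "years"

# Identical data, one small processing change -- the leading patterns differ.
print(np.corrcoef(full_pc1, short_pc1)[0, 1])
```

Running variants of this across many random seeds is, roughly speaking, what McIntyre and McKitrick did at scale: feed signal-free red noise through short-centered principal-component analysis and count how often a hockey-stick-shaped leading component falls out anyway.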