I cannot count the number of times someone has told me that they believe in “the science,” as if that were the name of some omniscient god who had delivered us final answers written in stone.
Ask a Washington dinner party full of moderately well-informed people what will happen with Iran over the next five years, and you’ll end up with a consensus that gee, that’s tough. Ask them what GDP growth will be in fall 2019, and they’ll probably converge on a hesitant “2 or 3 percent, I guess?” On the other hand, ask them what’s going to happen to the climate over the next 100 years, and what you’re likely to hear is angry certainty.
How can one be certain about outcomes in a complex system that we’re not really all that good at modeling? Anyone who’s familiar with the history of macroeconomic modeling in the 1960s and 1970s will be tempted to answer “Umm, we can’t.” Economists thought that the explosion of data and increasingly sophisticated theory were going to allow them to produce reasonably precise forecasts of what would happen in the economy. Enormous mental effort and not a few careers were invested in building out these models. And then the whole effort was basically abandoned, because the models failed to outperform mindless trend extrapolation — or as Kevin Hassett once put it, “a ruler and a pencil.”
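To make that benchmark concrete, here is a minimal sketch of the comparison the old modelers kept losing: fit a straight line to past data, extrapolate it forward, and check whether a fancier model’s out-of-sample forecasts actually beat it. Every number below is invented for illustration, and the code assumes nothing beyond NumPy.

```python
# The "ruler and pencil" benchmark: fit a straight line to past GDP
# growth, extrapolate, and compare forecast errors against a fancier
# model's predictions. All figures are made up for illustration.
import numpy as np

years = np.arange(1960, 1975)                  # estimation window
growth = np.array([2.6, 2.3, 6.1, 4.4, 5.8,    # hypothetical annual
                   6.4, 6.5, 2.5, 4.8, 3.1,    # GDP growth rates (%)
                   0.2, 3.3, 5.3, 5.6, -0.5])

# "Ruler and pencil": ordinary least squares on time alone.
slope, intercept = np.polyfit(years, growth, 1)

def trend_forecast(year):
    """Extrapolate the fitted straight line to a future year."""
    return slope * year + intercept

future_years = np.arange(1975, 1980)
actual = np.array([-0.2, 5.4, 4.6, 5.5, 3.2])        # hypothetical outcomes
model_forecast = np.array([3.0, 3.1, 3.0, 2.9, 3.0])  # some structural model

def rmse(pred, obs):
    """Root-mean-square forecast error."""
    return np.sqrt(np.mean((pred - obs) ** 2))

print("trend RMSE:", rmse(trend_forecast(future_years), actual))
print("model RMSE:", rmse(model_forecast, actual))
# If the structural model cannot beat the straight line out of sample,
# all its extra machinery is adding no predictive value.
```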
Computers are better now, but the problem was not really the computers; it was that the variables were too many, and the underlying processes not understood nearly as well as economists had hoped. Economists can’t run experiments in which they change one variable at a time. Indeed, they don’t even know what all the variables are.
This meant that they were stuck guessing from observational data of a system that was constantly changing. They could make some pretty good guesses from that data, but when you built a model based on those guesses, it didn’t work. So economists tweaked the models, and they still didn’t work. More tweaking, more not working.
Eventually it became clear that there was no way to make them work given the current state of knowledge. In some sense the “data” being modeled was not pure economic data, but rather the opinions of the tweaking economists about what was going to happen in the future. It was more efficient just to ask them what they thought was going to happen. People still use models, of course, but only the unflappable true believers place great weight on their predictive ability.
This lesson from economics is essentially what the “lukewarmists” bring to discussions about climate change. They concede that, all else equal, more carbon dioxide will cause the climate to warm. But they say that warming is likely to be mild unless you use a model that assumes large positive feedback effects. Because climate scientists, like the macroeconomists, can’t run experiments where they test one variable at a time, predictions of feedback effects involve a lot of theory and guesswork. I do not denigrate theory and guesswork; they are a vital part of advancing the sum of human knowledge. But when you’re relying on theory and guesswork, you always want to leave plenty of room for the possibility that your model’s output is (how shall I put this?) … wrong.
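To see why the assumed feedbacks dominate the answer, consider the standard textbook feedback relation, in which a no-feedback warming ΔT₀ is amplified to ΔT₀ / (1 − f) by a combined feedback factor f. A minimal sketch, with illustrative feedback factors that are not drawn from the article or from any particular model:

```python
# Standard textbook feedback amplification (not specific to any model):
#     delta_T = delta_T0 / (1 - f)
# delta_T0 is the warming from doubled CO2 with all feedbacks switched
# off, roughly 1.1 C from the Planck response alone; f sums the assumed
# feedback strengths. The f values below are illustrative, not estimates.
DELTA_T0 = 1.1  # degrees C per CO2 doubling, no feedbacks

for f in (0.0, 0.3, 0.5, 0.65, 0.75):
    print(f"feedback factor f = {f:.2f} -> {DELTA_T0 / (1 - f):.1f} C per doubling")
# Nudging f from 0.5 to 0.75 doubles the projected warming, which is
# why the predictions hinge so heavily on theory-laden feedback estimates.
```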
Naturally, proponents of climate-change models have welcomed the lukewarmists’ constructive input by carefully considering their points and by advancing counterarguments firmly couched in the scientific method.
No, of course I’m just kidding. The reaction to these mild assertions is often to brand the lukewarmists “deniers” and treat them as if what they were saying was morally and logically equivalent to suggesting that the Holocaust never happened.