Have climate models outlived their usefulness?

Setting aside their academic fascination and judging them by their contribution to climate policy, it seems we may have reached the useful limit of computer climate modelling.

The arrival of computers in the 1950s allowed climate scientists to think about modelling the climate with this new technology. The first usable computer climate models were developed in the mid-1970s. Shortly afterwards the US National Academy of Sciences used their outcomes to estimate a crucial climate parameter we still calculate today – the Equilibrium Climate Sensitivity (ECS), how much the world would warm (from ‘pre-industrial’ levels) with a doubling of CO2 – and concluded that it had a range of 1.5 – 4.5°C. Since then computer power has increased by a factor of more than a quadrillion, yet, one could argue, climate models have not much improved on that original estimate. Their range of projections has not narrowed significantly, and consequently the contribution they make to climate policy has not improved commensurately.
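
To make the definition concrete, the sketch below works through the textbook zero-dimensional energy-balance relation that underlies ECS: the equilibrium warming from a doubling of CO2 is the doubling forcing divided by a climate feedback parameter, and it is the feedback parameter that is the uncertain quantity. The logarithmic forcing approximation and the feedback values used here are illustrative assumptions only; this is not how a full climate model calculates ECS.

```python
import math

# Simplified logarithmic CO2 forcing (Myhre et al. approximation): dF = 5.35 * ln(C/C0) W/m^2
def co2_forcing(c, c0=280.0):
    return 5.35 * math.log(c / c0)

f_2x = co2_forcing(560.0)  # forcing from a doubling of CO2, roughly 3.7 W/m^2

# In a zero-dimensional energy balance, equilibrium warming = forcing / feedback parameter.
# The feedback parameter lambda (W/m^2 per degC) is the uncertain quantity; the values
# below are illustrative only, chosen to span roughly the 1979 Academy range.
for lam in (2.5, 1.2, 0.8):
    ecs = f_2x / lam
    print(f"lambda = {lam:.1f} W/m2/degC  ->  ECS ~ {ecs:.1f} degC")
```

With a doubling forcing of roughly 3.7 W/m², feedback parameters between about 0.8 and 2.5 W/m² per °C reproduce the familiar 1.5 – 4.5°C range, which is why narrowing the ECS estimate is really a question of narrowing the feedbacks.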

From the time of the IPCC’s AR1 (1990) all the way to AR4 (2007) it was 2.0 – 4.5°C. AR5 (2015) changed it only slightly, to 1.9 – 4.5°C. Significantly, the most recent IPCC assessment, AR6 (2021), took a step back, partially decoupling its ECS estimate from the computer models and relying instead on an “expert judgement” that placed more emphasis on other lines of evidence. AR6 did not consider all climate models to be equal and weighted them according to their hindcast abilities. On that basis it narrowed the ECS range to 2.5 – 4.0°C. This can hardly be regarded as real improvement.
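
The article does not describe how AR6’s weighting worked, but the general idea of skill-weighting an ensemble can be shown with a hypothetical sketch: models whose hindcasts sit far from observations receive little weight, so a “hot” model with a poor hindcast contributes little to the final estimate. Every model name, ECS value, hindcast error and the Gaussian weighting kernel below are assumptions for illustration only.

```python
import math

# Hypothetical ensemble: (model name, ECS in degC, hindcast RMSE vs observations in degC).
# All numbers are invented for illustration.
ensemble = [
    ("model-A", 2.8, 0.10),
    ("model-B", 3.2, 0.12),
    ("model-C", 4.9, 0.35),   # "hot" model with a poor hindcast
    ("model-D", 2.4, 0.15),
    ("model-E", 5.4, 0.40),   # another hot, poorly performing model
]

sigma = 0.15  # assumed skill scale: how quickly weight falls off with hindcast error

# Gaussian skill weights: models with large hindcast errors contribute little.
weights = [math.exp(-(rmse / sigma) ** 2) for _, _, rmse in ensemble]
total = sum(weights)

unweighted_mean = sum(ecs for _, ecs, _ in ensemble) / len(ensemble)
weighted_mean = sum(w * ecs for w, (_, ecs, _) in zip(weights, ensemble)) / total

print(f"Equal-weight ('model democracy') mean ECS: {unweighted_mean:.2f} degC")
print(f"Skill-weighted mean ECS:                  {weighted_mean:.2f} degC")
```

Run as written, the equal-weight mean is pulled upwards by the two hot models, while the skill-weighted mean stays close to the better-performing ones.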

At the time climate scientists were looking forward to new rounds of computer modelling, particularly the sixth phase of the Coupled Model Intercomparison Project (CMIP6), to reduce the ECS uncertainty. But it was clear very early on that the opposite was happening: uncertainty was actually increasing. CMIP6 eventually concluded that ECS was 2.0 – 5.5°C. The fact that CMIP6 has too many climate models running too hot is telling us something important. The “model democracy” of the past – whose flaws we at the GWPF, among others, were criticised for pointing out – was indeed wrong.

In summary, not many climate models concurred with reality, leading to suggestions that only those that did should be considered useful. Instead, many studies encompassed all the models, including those that did not reproduce observations, and used the median of them all, and their spread, as a way to predict future climate change. It was always an inconvenience that nature was not following the median of the models, and an irritation that “sceptics” pointed out it was unscientific not to discard models that did not fit real-world data.
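
As a purely hypothetical illustration of the difference between “model democracy” and discarding models that fail an observational test, consider the toy ensemble below; the ECS values and pass/fail flags are invented.

```python
import statistics

# Hypothetical model ECS values (degC) and whether each reproduced observed warming
# within some tolerance -- all invented for illustration.
models = [
    (2.6, True), (3.0, True), (3.3, True), (2.9, True),
    (4.8, False), (5.2, False), (4.6, False), (2.2, True),
]

all_ecs      = [ecs for ecs, _ in models]
screened_ecs = [ecs for ecs, ok in models if ok]   # discard models that failed the check

print("Model democracy: median %.1f, spread %.1f-%.1f degC"
      % (statistics.median(all_ecs), min(all_ecs), max(all_ecs)))
print("Screened subset: median %.1f, spread %.1f-%.1f degC"
      % (statistics.median(screened_ecs), min(screened_ecs), max(screened_ecs)))
```

In this toy example, screening narrows both the median and, especially, the upper end of the spread, which is why the choice of whether to screen matters for headline projections.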

A recent commentary in Nature addresses this problem by pointing out, at last, that not all computer models are equal: the hot ones should not be used to produce projections of the future direction of climate change. This is progress, albeit slow, and it nullifies a great many media reports that “things are going to get worse than we expected” – alarm that has depended on the unrealistic upper bounds of the projections.

Computer climate models are not as good as many Guardian readers think! Consider, for example, a 2018 study involving over 30 computer models and five global temperature datasets, which showed severe disagreement between them over almost 40% of the Earth. This is not the only piece of research that provides valid reasons to doubt their accuracy, especially when it comes to predicting localised climate, which is the basis for the much-publicised technique of climate event attribution. The spread of computer model outcomes is also problematic in this regard.
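
The 2018 study’s own method is not described here; purely as a toy illustration of the kind of grid-cell comparison involved, the sketch below counts the fraction of cells in which a synthetic “observed” trend falls outside the range spanned by a synthetic model ensemble. All numbers are randomly generated and the resulting percentage carries no significance.

```python
import random

random.seed(0)
n_cells = 1000

disagree = 0
for _ in range(n_cells):
    # Hypothetical per-cell warming trends (degC/decade) from a small model ensemble...
    model_trends = [random.gauss(0.2, 0.1) for _ in range(30)]
    # ...and a hypothetical observed trend for the same cell.
    observed = random.gauss(0.15, 0.12)
    if not (min(model_trends) <= observed <= max(model_trends)):
        disagree += 1

print(f"Cells where observations fall outside the model range: {100 * disagree / n_cells:.0f}%")
```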

What of the future? Are bigger computers the way? Climate modellers want to add more and more parameters to create a virtual Earth and step it forward in time, improving weather and climate predictions. But is the reliance on increasingly powerful computers and complex climate models actually an emergent flaw in climate science? In 2009 an international summit declared that with a thousand-fold increase in computer power we would achieve a quantum leap in our ability to predict the climate.

Well, we have now achieved that level of performance, and more, yet our success in predicting the climate hasn’t really improved. In fact, by some measures it has deteriorated: faster computers, more memory, more complex codes, finer grids and more weather parameters have made things worse.

Bigger is Better?

The UK Met Office's Cray XC40 computer is among the best in the world, capable of performing 14,000 trillion operations a second. It has 2 petabytes of memory and 460,000 computer cores, and runs programs with over a million lines of code. It began operations in December 2016 and, according to the Met Office, it is at the end of its useful life. Later this year it will be replaced by a machine six times faster, as part of a government-funded £1.2 billion programme.

The mantra of “bigger is better” continues. Recently the Royal Society issued a report by Dame Julia Slingo, formerly of the UK Met Office, emphasising the need for more computing power and better climate models. By 2030, it said, climate models will provide essential information for both mitigating and adapting to our changing climate. It is adamant that what we need is more detailed and precise information to “enable robust decision-making” in the future. It also wants a step change in international cooperation and investment.

This, says the Royal Society, is because of the “shortcomings imposed by the limitations of supercomputing”, and it wants a major new international facility, modelled on CERN, to push climate modelling forward – major new computing technology able to predict the weather and climate at kilometre-scale resolution, resulting in “better global climate predictions and services”. But is this scientifically feasible and, if so, would it be cost effective?

The lack of improvement in computer models as they become increasingly complex is possibly telling us that most of the small-scale details and processes are irrelevant to the outcome and perhaps not worth computing. This is an unpopular viewpoint at a time when institutions and university departments base their existence on making the case for ever-bigger computers and ever more complex models that will “improve” the results at some point in a future that has yet to arrive.

If a simple model, considered ‘unrealistic’ by the standards of today’s climate modellers and their behemoth codes, does an equivalent or better job of climate prediction than a modern, more “realistic” one, then what does that say about the progress of this field and its diminishing scientific and financial returns?

The search for a climate reality simulated by a computer model has uncovered an underlying truth about the process. The models are disintegrating into uncertainty, and no one is telling the decision-makers who base their entire policies on these forecasts. The media haven’t noticed and continue to write articles praising computer models as being more accurate than we thought!

There is as much uncertainty and “wriggle room” in climate models as there was decades ago. Setting aside their academic fascination and judging them by their contribution to climate policy, it seems we may have reached the useful limit of computer climate modelling.

Feedback: david.whitehouse@netzerowatch.com

Dr David Whitehouse

David Whitehouse has a PhD in astrophysics and has carried out research at Jodrell Bank and the Mullard Space Science Laboratory. He is a former BBC Science Correspondent and BBC News Science Editor.
