An essay on the state of climate change science
(1) Is the science of climate change ‘settled’?
The scientific uncertainties associated with climate prediction are the basis of most of the arguments about the significance of climate change(25), and of much of the polarized public opinion on the political aspects of the matter. Perhaps the most fundamental of these uncertainties can be illustrated by a simple ‘thought experiment’ as follows.
Imagine a plume of smoke rising from a cigarette into some sort of flue. The stream of smoke is smooth enough for a start, but suddenly breaks into random turbulent eddies whose behaviour is inherently unpredictable.
We can in principle make closely spaced measurements all over the turbulent plume at some particular initial time, and then at regular steps forward into the future. We can then attempt prediction with a numerical model that uses the initial measurements as a starting point and calculates the conditions at the end of each time step at all of the so-called ‘grid points’ corresponding to the positions of the measurements.
After the first time step, the model uses as its starting point the conditions predicted for the end of the previous step. The predictions may match the observations for a while, but very soon random fluctuations smaller than the distance between the measurements (they are called ‘sub-grid-scale eddies’ in the vernacular of numerical modellers) grow in size and — as far as the model is concerned — appear out of nowhere and swamp the eddies we thought we knew something about. While we can probably say that the overall column of smoke will continue to rise, we can make that rather limited statement only because the eddies are restricted or ‘contained’ by a boundary (the flue), and cannot grow to a size any bigger than the limit set by the boundary.
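The stepping procedure described above can be sketched in a few lines of Python. The ‘model’ here is a toy chaotic map, not a real fluid simulation, and all the numbers are hypothetical; the point is only that each step feeds on the previous step’s predictions, so a tiny unresolved (sub-grid-scale) difference in the starting conditions eventually swamps the forecast.

```python
def step(state, r=3.9):
    # Logistic map as a hypothetical stand-in for one model time step:
    # it is chaotic, so small differences between runs grow rapidly.
    return [r * x * (1.0 - x) for x in state]

def forecast(initial, n_steps):
    # Step forward in time: each step's input is the previous step's output.
    state = list(initial)
    history = [state]
    for _ in range(n_steps):
        state = step(state)
        history.append(state)
    return history

# 'truth' starts from the real initial state; 'model' misses a tiny
# sub-grid-scale detail (a difference of 0.0001 in the initial value).
truth = forecast([0.4000], 40)
model = forecast([0.4001], 40)

# Early in the run the two trajectories agree closely; later the gap
# grows until the forecast bears no useful relation to the truth.
early_gap = abs(truth[5][0] - model[5][0])
largest_gap = max(abs(a[0] - b[0]) for a, b in zip(truth, model))
```

The same qualitative behaviour is what limits weather forecasts to a week or two, however fine the measurement grid.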
Predicting the actual value of the average rate of rise of the overall plume is still difficult. Depending on the shape of the flue, it may require the use of one or more ‘tuneable parameters’ in the forecasting process. A tuneable parameter is a piece of input information whose actual value is chosen on no basis other than to ensure that theoretical simulation matches observation. Normally it would be used to define something about the average state of the turbulent medium between the grid points of the forecasting model.
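A minimal sketch of what ‘tuning’ means in practice, with every number hypothetical: the mixing coefficient `k` below has no value derived from first principles. It is simply chosen, by scanning candidates, so that the simulated rise rate matches the observed one.

```python
OBSERVED_RISE_RATE = 0.42   # hypothetical measured rise rate of the plume

def simulated_rise_rate(k, buoyancy=1.4):
    # Toy stand-in for a model whose output depends on a sub-grid mixing
    # parameter k that the grid itself cannot resolve.
    return buoyancy / (1.0 + k)

# Tune k on no basis other than agreement with the observation.
candidates = [i / 100.0 for i in range(1, 500)]
k_tuned = min(candidates,
              key=lambda k: abs(simulated_rise_rate(k) - OBSERVED_RISE_RATE))
```

The obvious risk, and the reason tuneable parameters matter to the argument, is that a value chosen this way guarantees agreement with present observations while saying nothing about whether the model will remain right under changed conditions.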
The climate system is much like the smoke but is vastly more complicated. The atmosphere and the ocean are two interacting turbulent media with turbulent processes going on inside them, and there are all sorts and shapes of physical boundary (of the ocean in particular) that ‘contain’ the eddies in a way that may or may not allow prediction of average conditions over areas less than the size of the earth. In principle at least we may be able to make a reasonable forecast of such things as the future global-average temperature and global-average rainfall by using a numerical model and a number of tuneable parameters obtained from observations of present conditions. (The ‘in principle’ here is based on the fact that the overall size of the earth sets an upper limit on the scale of possible eddies). Forecasting smaller-scale averages becomes more and more problematic as the scale decreases. As a first guess based on the smoke plume analogy, one might be able to forecast averages over areas the size of ocean basins (imagine them as ‘containers’ limiting the maximum possible eddy size) but one cannot really expect to make skilful prediction for areas much smaller than that.
This qualitative conclusion is borne out by the 100-year forecasts of global and regional rainfall produced by the various numerical climate models from around the world(1). While the predicted global averages are reasonably consistent (not necessarily correct, but at least to some degree consistent with each other), the predictions for continental Australia for instance, where the overall average of measured rainfall is currently about 450 mm per year, range from less than 200 mm per year to greater than 1000 mm per year. From this it would seem that long-term predictions of regional rainfall are probably little better than guesswork.
The World Meteorological Organization of the United Nations took its first steps towards establishing the World Climate Program in the early nineteen-seventies. Among other things it held an international workshop in Stockholm to define the main scientific problems to be solved before reliable climate forecasting could be possible(2). The workshop defined quite a number, but focused on the two that it regarded as the most important and fundamental.
The first concerned an inability to simulate the amount and character of clouds in the atmosphere. Clouds are important because they govern the balance between solar heating and infrared cooling of the planet, and thereby are a major control of Earth’s temperature. The second concerned an inability to forecast the behaviour of oceans. Oceans are important because they are the main reservoirs of heat in the climate system. They have internal, more-or-less random, fluctuations on all sorts of time-scales ranging from years through to centuries. These fluctuations cause changes in ocean surface temperature that in turn affect Earth’s overall climate.
Many of the problems of simulating the behaviour of clouds and oceans are still there (along with lots of other problems of lesser moment) and for many of the same reasons as were appreciated at the time(26,27). Perhaps the most significant is that climate models do their calculations at each point of an imaginary grid of points spread evenly around the world at various heights in the atmosphere and depths in the ocean. The calculations are done every hour or so of model time as the model steps forward into its theoretical future. Problems arise because practical constraints on the size of computers ensure that the horizontal distance between model grid-points may be as much as a degree or two of latitude or longitude — that is to say, a distance of many tens of kilometres.
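A back-of-envelope check of the spacing quoted above: one degree of latitude is roughly 111 km everywhere, while a degree of longitude shrinks with the cosine of the latitude. The sketch below uses that standard approximation; the two-degree grid and the 30° latitude are illustrative choices only.

```python
import math

KM_PER_DEG_LAT = 111.0  # approximate: Earth's circumference / 360 degrees

def grid_spacing_km(deg, latitude_deg=0.0):
    # North-south spacing is fixed; east-west spacing narrows toward the poles.
    ns = deg * KM_PER_DEG_LAT
    ew = deg * KM_PER_DEG_LAT * math.cos(math.radians(latitude_deg))
    return ns, ew

ns, ew = grid_spacing_km(2.0, latitude_deg=30.0)  # a two-degree grid
# ns comes to about 222 km, ew about 192 km at 30 degrees of latitude,
# both vastly larger than an individual cloud.
```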
That sort of distance is much larger than the size of a typical piece of cloud. As a consequence, simulation of clouds requires a fair amount of inspired guesswork (for which read ‘parameterization’ as mentioned above with regard to the smoke plume analogy) as to what might be a suitable average of whatever is going on between the grid-points of the model. Even if experimental observations suggest that the models get the averages roughly right for a short-term forecast, there is no guarantee they will get them right for atmospheric conditions several decades into the future. Among other reasons, small errors in the numerical modelling of complex processes have a nasty habit of accumulating with time.
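The accumulation of small errors can be illustrated with purely hypothetical numbers: suppose each hourly model step carries a systematic error of just 0.1 per cent. Over a day the effect is barely noticeable; compounded over a year of hourly steps it is enormous.

```python
PER_STEP_FACTOR = 1.001      # hypothetical: each step is off by 0.1 per cent
STEPS_PER_YEAR = 365 * 24    # one model time step per hour

def accumulated_factor(n_steps):
    # The error is applied anew at every step, so it compounds.
    factor = 1.0
    for _ in range(n_steps):
        factor *= PER_STEP_FACTOR
    return factor

one_day = accumulated_factor(24)             # roughly 1.024
one_year = accumulated_factor(STEPS_PER_YEAR)  # thousands of times too large
```

Real models are of course constructed so that errors partly cancel rather than compound so crudely, but the sketch shows why a scheme verified over days cannot be assumed reliable over decades.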