
The IAC Report And IPCC ‘Expert Judgement’ Of Confidence

Back in March, I put up a post, Phil Jones and the ‘expert judgement’ of the IPCC. This raised questions about one of the most important tables in the Summary for Policymakers (SPM) of Working Group I (WGI) in the IPCC’s Fourth Assessment Report (AR4). This table assigns levels of ‘likelihood’ to evidence of observed trends in extreme weather events, to the possibility that there is a human contribution to these trends, and to predictions that the trends will continue during the 21st century.

According to this table, the authors of the IPCC report have greater confidence in the predictions than either the observations they are derived from or the hypotheses that they are based on, which seems to turn logic on its head. So when I started ploughing through the InterAcademy Council’s Review of the Processes and Procedures of the IPCC I was interested to see that the same table turns up on page 31 in a very critical chapter headed IPCC’s Evaluation of Evidence and Treatment of Uncertainty.


So far as I can see, the IAC have not addressed the precise point that I was making, but it is quite clear that they are very concerned about the way in which the last assessment report represented confidence and uncertainty, and that they have chosen to highlight this table as an example of the influence that expert judgement of confidence in research findings can have.

Before going any further, let’s bear a couple of points in mind. The phenomena listed in the table (droughts, heatwaves, sea level rise, tropical storms such as hurricanes and cyclones, and heavy precipitation leading to floods) are the stuff of which anthropogenic climate change nightmares are made. The media, politicians, and environmental activists have used evidence that these are probably increasing in frequency and severity, and are likely to continue to do so, as the main plank in their justification of action on climate change. The credibility of their assertions depends on one authority only: the IPCC reports.

Here is what the IAC report has to say in the introduction to the chapter on The IPCC’s Evaluation of Evidence and the Treatment of Uncertainty.

The evolving nature of climate science, the long timescales involved, and the difficulties of predicting human impacts on and responses to climate change mean that many of the results presented in IPCC reports have inherently uncertain components. To inform policy decisions properly, it is important for uncertainties to be characterized and communicated clearly and coherently.

This chapter … explores whether uncertainty is characterized appropriately, given the nature of IPCC assessments, and whether the scales used to characterize confidence in results are appropriate, given the nature of the conclusions. At the end of the chapter, the Committee summarizes its conclusions and recommendations for improving the presentation of evidence and treatment of uncertainty in IPCC assessment reports.

My emphasis
Page 27

The IAC, therefore, recognise the very significant role that uncertainty must play in view of our incomplete understanding of the climate system, and also that improvements are required in the way that this is represented in IPCC reports. As they accept that improvements are necessary, that implies that there are problems with present practice.

Under the heading Uncertainty Guidance in the Fourth Assessment Report, the IAC have this to say:

IPCC authors are tasked to review and synthesize available literature rather than to conduct original research. This limits the authors’ abilities to formally characterize uncertainty in the assessment reports. As a result, IPCC authors must rely on their subjective assessments of the available literature to construct a best estimate and associated confidence intervals.
My emphasis
Page 27

It would be difficult to overestimate the influence that the expert judgement of authors has on the credibility of climate science as it is presented to policymakers by the IPCC. Most policymakers will not be able to make their own decisions on whether scientific findings are well supported by evidence. They must rely entirely on the levels of confidence assigned by the authors.

Later, the report makes clear just how ubiquitous these expert judgements are:

In addition to characterizing uncertainty using confidence intervals and probability distributions, Working Group I used a combination of the confidence and likelihood scales to characterize the certainty of their conclusions. Virtually every statement in the Summary for Policy Makers is characterized using the terms employed by one of these scales. Table 3.4 illustrates the use of the likelihood scale, including the likelihood of a trend in extreme weather events in the late 20th century, the likelihood of a human contribution to that trend, and the likelihood of future trends in the 21st century, based on the SRES scenarios.
My emphasis
Page 30

Among the conclusions and recommendations at the end of the chapter is this:

The IPCC uncertainty guidance urges authors to provide a traceable account of how authors determined what ratings to use to describe the level of scientific understanding (Table 3.1) and the likelihood that a particular outcome will occur (Table 3.3). However, it is unclear exactly whose judgments are reflected in the ratings that appear in the Fourth Assessment Report or how the judgments were determined. How, exactly, a consensus was reached regarding subjective probability distributions needs to be documented. The uncertainty guidance for the Third Assessment Report required authors to indicate the basis for assigning a probability to an outcome or event (Moss and Schneider, 2000), and this requirement is consistent with the guidance for the Fourth Assessment Report.

Recommendation: Chapter Lead Authors should provide a traceable account of how they arrived at their ratings for level of scientific understanding and likelihood that an outcome will occur.
Emphasis in the original
Page 37

Back in March, when I posted about the apparent illogicality of the confidence assessment in the extreme weather events table used in the WGI SPM, it would have been very interesting to know whose expert judgement had been relied on and how they had reached their conclusions. It seems likely that scientists who were implicated in the Climategate scandal were involved, as I said in that post, but there was no way of knowing.

It would seem that anyone reading the immensely influential Fourth Assessment Report must apply blind faith to what they are told about the reliability of the science that underpins its findings and recommendations. That faith must also extend to assuming that those whose expert judgement they are relying on are not only competent, but that they are free of any preconceptions and prejudices relating to the science they are assessing, and that they are undertaking this task quite objectively. Anyone who has read the Climategate emails has no reason to be confident that these conditions have been met. Apparently there are no records of who made these judgements, or of how they were arrived at, so there is no way that they can be either validated or challenged other than by repeating the whole process. That is obviously not going to happen.


The IPCC uncertainty guidance provides a good starting point for characterizing uncertainty in the assessment reports. However, the guidance was not consistently followed in the fourth assessment, leading to unnecessary errors. For example, authors reported high confidence in statements for which there is little evidence, such as the widely-quoted statement that agricultural yields in Africa might decline by up to 50 percent by 2020. Moreover, the guidance was often applied to statements that are so vague they cannot be falsified. In these cases the impression was often left, quite incorrectly, that a substantive finding was being presented.
My emphasis
Page 36

The report here refers specifically to a failing in the WGII report, but the criticism may apply equally to the extreme weather table in the WGI SPM. This is tantalising, because throughout the various revelations of Climategate, and the subsequent furore about the various errors found in AR4, those who seek to defend the IPCC have consistently claimed that the mistakes made by the IPCC have been peripheral and irrelevant to the core findings on the physical basis for concern about climate change.

I wish that I could find someone who is better qualified than I am, and has more authority, who would be prepared to examine the claims made in the extreme weather table. The science that it summarises is at the core of so many claims about the perils of AGW, and has been so influential in the formulation of public policy over the last few years. It is very important that the levels of confidence that it assigns are well founded and correct, and that they can be shown to be so.

A golden opportunity to make a detailed examination of the very strange logic that this table seems to apply has been missed, or ignored, in the course of the IAC’s review.

Harmless Sky, 22 September 2010