This post is motivated by an essay by Kerry Emanuel published at the Climate Change National Forum, entitled Tail Risk vs. Alarmism, which is in part motivated by my previous post AAAS: What we know.
Excerpts:
In assessing the event risk component of climate change, we have, I would argue, a strong professional obligation to estimate and portray the entire probability distribution to the best of our ability. This means talking not just about the most probable middle of the distribution, but also the lower probability high-end risk tail, because the outcome function is very high there.
Do we not have a professional obligation to talk about the whole probability distribution, given the tough consequences at the tail of the distribution? I think we do, in spite of the fact that we open ourselves to the accusation of alarmism and thereby risk reducing our credibility. A case could be made that we should keep quiet about tail risk and preserve our credibility as a hedge against the possibility that someday the ability to speak with credibility will be absolutely critical to avoid disaster.
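As a purely illustrative sketch of Emanuel's argument (the lognormal pdf, the cubic damage function, and the 5C tail cutoff below are my own hypothetical choices, not estimates of any climate quantity): when the outcome function rises steeply, a tail carrying a modest share of the probability can carry a much larger share of the expected loss.

```python
import numpy as np

# Hypothetical numbers throughout: the lognormal pdf, cubic damage function,
# and 5C tail cutoff are illustrative choices, not climate estimates.
x = np.linspace(0.1, 12.0, 2000)   # e.g., warming in degrees C
mu, sigma = np.log(3.0), 0.4       # pdf loosely centered near 3
pdf = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))
damage = x ** 3                    # steeply rising outcome (damage) function

dx = x[1] - x[0]
expected_loss = np.sum(pdf * damage) * dx
tail_loss = np.sum((pdf * damage)[x > 5.0]) * dx
print(f"probability mass beyond 5C:        {np.sum(pdf[x > 5.0]) * dx:.1%}")
print(f"share of expected loss beyond 5C:  {tail_loss / expected_loss:.1%}")
```

With these particular choices, roughly a tenth of the probability mass accounts for nearly half the expected loss, which is exactly why Emanuel thinks the tail must be talked about.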
Uncertainty monster simplification
In my paper Climate Science and the Uncertainty Monster, I described 5 ways of coping with the monster. Monster Simplification is particularly relevant here: Monster simplifiers attempt to transform the monster by subjectively quantifying or simplifying the assessment of uncertainty.
The uncertainty monster paper distinguished between statistical uncertainty and scenario uncertainty:
Statistical uncertainty is the aspect of uncertainty that is described in statistical terms. An example of statistical uncertainty is measurement uncertainty, which can be due to sampling error or inaccuracy or imprecision in measurements.
Scenario uncertainty implies that it is not possible to formulate the probability of occurrence of one particular outcome. A scenario is a plausible but unverifiable description of how the system and/or its driving forces may develop over time. Scenarios may be regarded as a range of discrete possibilities with no a priori allocation of likelihood.
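To make the distinction concrete, here is a minimal sketch (the class names and fields are mine, not from the paper): statistical uncertainty comes packaged with a distribution you can compute with, whereas scenario uncertainty is a set of discrete storylines that deliberately carries no likelihoods.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StatisticalEstimate:
    """Uncertainty expressible in statistical terms, e.g. measurement error."""
    mean: float
    std: float  # sampling error / imprecision, describable by a distribution

@dataclass
class Scenario:
    """A plausible but unverifiable description of how the system may develop."""
    description: str
    likelihood: Optional[float] = None  # deliberately None: no a priori likelihood

# Statistical uncertainty: a measured quantity with quantifiable error
# (hypothetical numbers).
station_temperature = StatisticalEstimate(mean=14.2, std=0.3)

# Scenario uncertainty: a range of discrete possibilities with no pdf over them.
futures = [Scenario("low emissions pathway"), Scenario("high emissions pathway")]
```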
Given our uncertainty and ignorance surrounding climate sensitivity, I have discussed the folly of attempting probabilistic estimates of climate sensitivity, i.e. of creating a pdf (see this previous post Probabilistic estimates of climate sensitivity). In my opinion, the most significant point in the IPCC AR5 WG1 report is its acknowledgment that it cannot create a meaningful pdf of climate sensitivity with a central tendency; hence it provides only ranges with confidence levels (and avoids identifying a best estimate of 3C, as the AR4 did). The strategy used in the AR5 is appropriate in the context of scenario uncertainty: it identifies bounds for sensitivity and presents an assessment of likelihood (values less than 1C are extremely unlikely, and values greater than 6C are very unlikely).
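In code form, the AR5-style statement amounts to bounds plus calibrated likelihood language, with the best estimate and the pdf deliberately withheld (the field names below are my own; the numbers are the AR5 WG1 figures cited above, plus its likely range of 1.5-4.5C):

```python
# Sketch of an AR5-style assessment: ranges with likelihood qualifiers, no pdf.
ecs_assessment = {
    "likely_range_C": (1.5, 4.5),         # AR5 likely range for climate sensitivity
    "extremely_unlikely": "less than 1C",
    "very_unlikely": "greater than 6C",
    "best_estimate_C": None,              # withheld in AR5 (unlike AR4's 3C)
    "pdf": None,                          # no meaningful central-tendency pdf
}
```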
So I strongly disagree with this statement by Emanuel:
we have a strong professional obligation to estimate and portray the entire probability distribution to the best of our ability.
In my opinion, we have a strong professional obligation NOT to simplify the uncertainty by portraying it as a pdf when the situation is characterized by substantial uncertainty that is not statistical in nature. This issue is discussed in a practical way with regard to climate science in a paper by Risbey and Kandlikar (2007); see especially Table 5:
Climate sensitivity is definitely not characterized by #1 (a full pdf); rather, it is characterized by #2 or #4. The lower bound is arguably well defined; the upper bound is not. The problem at the upper bound is what concerns Emanuel. I am arguing that the way to address it is NOT by invoking the fat tail of a mythical probability distribution that extends out to infinity.
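One way to see why an unbounded fat tail is treacherous: if the assumed tail is fat enough, the expected value does not even exist, so any tail-weighted loss estimate reflects the chosen distribution rather than any evidence. A minimal sketch, using a Pareto tail purely for illustration (not a claim about the actual shape of climate sensitivity uncertainty):

```python
# Mean of a Pareto(alpha, x_min) distribution: finite only for alpha > 1.
# For alpha <= 1 the integral of x * p(x) diverges, so the "expected" value
# implied by the assumed tail is infinite -- a property of the chosen pdf,
# not of the climate system.
def pareto_mean(alpha: float, x_min: float = 1.0) -> float:
    if alpha <= 1.0:
        return float("inf")
    return alpha * x_min / (alpha - 1.0)

for alpha in (3.0, 2.0, 1.5, 1.1, 1.01):
    print(f"alpha = {alpha}: mean = {pareto_mean(alpha):.1f}")
```

As the hypothetical tail parameter approaches 1, the mean blows up without bound; the conclusion is driven entirely by an assumption that no observation can verify.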