John Christy’s testimony to the Senate Environment & Public Works Committee can be found here [christy testimony 2012].
The main summary points:
1. It is popular again to claim that extreme events, such as the current central U.S. drought, are evidence of human-caused climate change. Actually, the Earth is very large, the weather is very dynamic, and extreme events will continue to occur somewhere, every year, naturally. The recent “extremes” were exceeded in previous decades.
2. The average warming rate of 34 CMIP5 IPCC models is greater than observations, suggesting models are too sensitive to CO2. Policy based on observations, where year-to-year variations cause the most harm, will likely be far more effective than policies based on speculative model output, no matter what the future climate does.
3. New discoveries explain part of the warming found in traditional surface temperature datasets. This partial warming is unrelated to the accumulation of heat due to the extra greenhouse gases, but related to human development around the thermometer stations. This means traditional surface datasets are limited as proxies for greenhouse warming.
4. Widely publicized consensus reports by “thousands” of scientists are misrepresentative of climate science, containing overstated confidence in their assertions of high climate sensitivity. They rarely represent the range of scientific opinion that attends our relatively murky field of climate research. Funding resources are recommended for “Red Teams” of credentialed, independent investigators, who already study low climate sensitivity and the role of natural variability. Policymakers need to be aware of the full range of scientific views, especially when it appears that one-sided-science is the basis for promoting significant increases to the cost of energy for the citizens.
5. Atmospheric CO2 is food for plants which means it is food for people and animals. More CO2 generally means more food for all. Today, affordable carbon-based energy is a key component for lifting people out of crippling poverty. Rising CO2 emissions are, therefore, one indication of poverty-reduction which gives hope for those now living in a marginal existence without basic needs brought by electrification, transportation and industry. Additionally, modern, carbon-based energy reduces the need for deforestation and alleviates other environmental problems such as water and air pollution. Until affordable energy is developed from non-carbon sources, the world will continue to use carbon as the main energy source as it does today.
Points 1 and 2 (particularly) contain some new and important analyses.
#1 Extreme events
Regarding extreme events: Christy presents some analyses of extreme heat and cold events that I haven’t seen previously. He also provides analyses of snowfall, drought, and wildfires. From the narrative:
Recently it has become popular to try and attribute certain extreme events to human causation. The Earth however, is very large, the weather is very dynamic, especially at local scales, so that extreme events of one type or another will occur somewhere on the planet in every year. Since there are innumerable ways to define an extreme event (i.e. record high/low temperatures, number of days of a certain quantity, precipitation total over 1, 2, 10 … days, snowfall amounts, etc.) this essentially assures us that there will be numerous “extreme events” in every year because every year has unique weather patterns. The following assesses some of the recent “extreme events” and demonstrates why they are poor proxies for making claims about human causation.
From the broad perspective, where we consider all the extremes above, we should see a warning – that the climate system has always had within itself the capability of causing devastating events and these will certainly continue with or without human influence on the climate. Thus, societies should plan for infrastructure projects to withstand the worst that we already know has occurred, and to recognize, in such a dynamical system, that even worse events should be expected. In other words, the set of the measured extreme events of the small climate history we have, since about 1880, does not represent the full range of extreme events that the climate system (i.e. Mother Nature) can actually generate. The most recent 130 years is simply our current era’s small sample of the long history of climate.
There will certainly be events in this coming century that exceed the magnitude of extremes measured in the past 130 years in many locations. To put it another way, a large percentage of the worst extremes over the period 1880 to 2100 will occur after 2011 simply by statistical probability without any appeal to human forcing at all. Records are made to be broken. Going further, one would assume that about 10 percent of the record extremes that occur over a thousand-year period ending in 2100 should occur in the 21st century. Are we prepared to deal with events even worse than we’ve seen so far? Spending which is directed to creating resiliency to these sure-to-come extremes, particularly drought/flood extremes, seems rather prudent to me – since there are no human means to make them go away regardless of what some regulators might believe.
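The "about 10 percent" figure above follows from exchangeability: for a series with no trend, the all-time record is equally likely to fall in any year, so the last 100 years of a 1000-year window should hold roughly 10 percent of the records. A minimal simulation sketch (all parameters here are illustrative, not from the testimony):

```python
import random

# Toy check of the statistical claim above: for an i.i.d. (exchangeable)
# series of 1000 "years", the all-time maximum is equally likely to fall
# in any year, so it lands in the final century about 10% of the time.
random.seed(42)
trials = 2_000
hits = 0
for _ in range(trials):
    series = [random.gauss(0.0, 1.0) for _ in range(1000)]
    argmax = max(range(1000), key=series.__getitem__)
    if argmax >= 900:          # record falls in the last 100 years
        hits += 1

print(round(hits / trials, 3))   # typically close to 0.10
```

No appeal to any forcing is needed to get this result; it is purely a property of sampling a long record in a short window.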
#2 CMIP5 IPCC climate model simulations
Ok, here is the blockbuster:
In Figure 2.1 below, I display the results from 34 of the latest climate model simulations of global temperature that will be used in the upcoming IPCC AR5 assessment on climate change (KNMI Climate Explorer). All of the data are given a reference of 1979-1983, i.e. the same starting line. Along with these individual model runs I show their average (thick black line) and the results from observations (symbols). The two satellite-based results (circles, UAH and RSS) have been proportionally adjusted so they represent surface variations for an apples-to-apples comparison. The evidence indicates the models on average are over-warming the planet by quite a bit, implying there should be little confidence that the models can answer the question asked by policymakers. Basing policy on the circles (i.e. real data) seems more prudent than basing policy on the thick line of model output. Policies based on the circles would include adaptation to extreme events that will happen because they’ve happened before (noted above and below) and since the underlying trend is relatively small.
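The "same starting line" step Christy describes amounts to subtracting from each series its own mean over the 1979-1983 reference window, so that offsets between datasets are removed and only trend differences remain. A minimal sketch with invented temperature values:

```python
# Sketch of re-baselining time series to a common 1979-1983 reference,
# as described in the excerpt above. All temperature values are invented
# for illustration; real series would come from the model archives.

def to_anomalies(years, values, ref_start=1979, ref_end=1983):
    """Return values minus the series' own mean over the reference years."""
    ref = [v for y, v in zip(years, values) if ref_start <= y <= ref_end]
    baseline = sum(ref) / len(ref)
    return [v - baseline for v in values]

years = list(range(1979, 1990))
model_run = [14.2, 14.3, 14.1, 14.4, 14.5, 14.6, 14.7, 14.6, 14.8, 14.9, 15.0]
obs       = [13.9, 14.0, 13.8, 14.0, 14.1, 14.1, 14.2, 14.1, 14.2, 14.3, 14.3]

model_anom = to_anomalies(years, model_run)
obs_anom   = to_anomalies(years, obs)

# After re-baselining, both series average zero over 1979-1983, so any
# later divergence reflects differing trends rather than absolute offsets.
print(round(sum(model_anom[:5]) / 5, 6))   # essentially zero by construction
```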
First, I’m trying to figure out exactly how this figure was created. I went to the CMIP5 web page. It seems that Christy has somehow spliced the historical simulations (1850 to at least 2005) with the projections (in this case the RCP4.5 scenario). Given this apparent splicing, I am not sure why all these curves look so smooth. Until I get a clarification from Christy, I advise not reading too much into the curves beyond, say, 2005-2010. Also, I am not sure whether these runs include the simulations with a coupled carbon cycle.
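If the curves were built the way the splicing hypothesis suggests, the operation is conceptually a simple concatenation at the scenario start year (2006 for CMIP5). A hypothetical sketch with invented values:

```python
# Hypothetical sketch of splicing a CMIP5 historical run (through 2005)
# onto its RCP4.5 continuation (2006 onward) for the same model.
# Values are invented; real runs would come from an archive such as the
# KNMI Climate Explorer mentioned above.

historical = {2003: 14.5, 2004: 14.6, 2005: 14.6}   # year -> temperature
rcp45      = {2006: 14.7, 2007: 14.7, 2008: 14.8}

def splice(hist, scenario, switch_year=2006):
    """Take historical values before switch_year, scenario values after."""
    out = {y: v for y, v in hist.items() if y < switch_year}
    out.update({y: v for y, v in scenario.items() if y >= switch_year})
    return dict(sorted(out.items()))

series = splice(historical, rcp45)
print(sorted(series))   # [2003, 2004, 2005, 2006, 2007, 2008]
```

A splice like this would not by itself smooth the curves, which is part of why the smoothness of the plotted runs is puzzling.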
I have seen results from a few of the individual climate model simulations from CMIP5, but not the synthesis of all the models. Assuming that Christy’s figure (at least up to 2005) has been put together correctly, we see the models are overall biased high, with a greater spread than we saw in the CMIP3/AR4 (this was discussed on a recent thread). Note, all the CMIP3/AR4 models produced results that pretty much matched the observations (see Fig 9.5). Given that the CMIP5 simulations use better models and forcing data, how do we explain the larger bias and spread in CMIP5?
In my uncertainty monster paper, I attributed this agreement in CMIP3 to circular reasoning that included selecting forcing data by each modeling group to produce good agreement with the observed time series. Hegerl et al. heartily objected to our analysis, although the emails seem to support my argument.
Based on what I have heard, their explanation for the larger spread and high biases seems to be related to how the aerosol indirect effect is included (and whether it is included at all).
I look forward to pondering the entire historical simulations back to 1850. The CMIP5 has a much better experimental design than CMIP3, and there is less flexibility in selecting the forcings to make your simulations agree with the observations. Here is what I think is going on with the high bias since about 1985 shown in Christy’s figure. The models are too sensitive to CO2 forcing, because of a hyperactive water vapor feedback (this hyperactive feedback arises from approximations used from weather models that cause error accumulation in longer climate simulations). The aerosol indirect effect (negative) can counter this hyperactive positive water vapor feedback by being too strong. I recall a paper by Rotstayn (about 10 years old, can’t find it easily) that said including the aerosol indirect effect without including fully interactive aerosol (with sinks) would produce an aerosol indirect effect that is too large. So in principle, two wrongs (a hyperactive positive water vapor feedback and a too-strong negative aerosol indirect effect) can make a right (i.e. agreement with observations).
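The compensating-errors argument can be made concrete with a zero-dimensional energy-balance picture, where the equilibrium warming is roughly dT = F_net / lambda (net forcing over the feedback parameter). All numbers below are invented purely to illustrate the cancellation; they are not estimates of the real quantities:

```python
# Toy illustration of "two wrongs make a right": warming ~ F_net / lam,
# with F_net the net forcing (W/m^2) and lam the feedback parameter
# (W/m^2 per K). Numbers are invented for illustration only.

def warming(ghg_forcing, aerosol_forcing, feedback_param):
    return (ghg_forcing + aerosol_forcing) / feedback_param

# "Truth": moderate feedback, modest aerosol offset.
t_true = warming(ghg_forcing=2.0, aerosol_forcing=-0.5, feedback_param=1.8)

# Biased model: hyperactive water vapor feedback (lower lam -> more
# sensitive) compensated by an overly strong negative aerosol indirect
# effect -> a similar 20th-century warming for the wrong reasons.
t_biased = warming(ghg_forcing=2.0, aerosol_forcing=-0.9, feedback_param=1.3)

print(round(t_true, 2), round(t_biased, 2))   # 0.83 0.85
```

The two runs match over the historical period even though their sensitivities differ, which is exactly why such agreement says little about future projections.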
The heavy reliance by IPCC on climate model simulations seems less justified with the CMIP5 simulations than it did with the CMIP3 simulations (where I was one of the people that was fooled by the strong agreement between the 20th century simulations and the observations of global temperature anomalies in AR4).
The psychological effect on decision makers of this disagreement and larger spread among the models will be interesting. I hope this improved characterization of the model uncertainty will lead to greater support for observationally based studies to determine attribution and sensitivity, and more focus on assembling and cleaning historical records and developing new paleo proxies.
And with regard to extreme events, we need to see more of the type of regional analyses that Christy has done. Christy’s analysis reinforces the point that many types of weather extremes were more extreme in the 1930s and 1950s.