Climategate: Plausibility And The Blogosphere In The Post-Normal Age

At the end of January 2010 two distinguished scientific institutions shared headlines with Tony Blair over accusations of the dishonest and possibly illegal manipulation of information. Our ‘Himalayan glaciers melting by 2035’, from the Intergovernmental Panel on Climate Change, is matched by his ‘dodgy dossier’ of Saddam’s fictitious subversions. We had the violations of the Freedom of Information Act at the University of East Anglia; he had the extraordinary 70-year gag rule on the David Kelly suicide file. There was ‘the debate is over’ on one side, and ‘WMD beyond doubt’ on the other. The parallels are significant and troubling, for on both sides they involve a betrayal of public trust.

Politics will doubtless survive, for it is not a fiduciary institution; but for science the dangers are real. Climategate is particularly significant because it cannot be blamed on the well-known malign influences from outside science, be they greedy corporations or an unscrupulous State. This scandal, and the resulting crisis, was created by people within science who can be presumed to have been acting with the best of intentions. In the event of a serious discrediting of the global-warming claims, public outrage would therefore be directed at the community of science itself, and (from within that community) at its leaders who were either ignorant or complicit until the scandal was blown open. If we are to understand Climategate, and move towards a restoration of trust, we should consider the structural features of the situation that fostered and nurtured the damaging practices. I believe that the ideas of Post-Normal Science (as developed by Silvio Funtowicz and myself) can help our understanding.

There are deep problems in the management of uncertainty in science in the policy domain that will not be resolved by more elaborate quantification. In the gap between science and policy, the languages, their conventions and their implications are effectively incommensurable. It takes determination and skill for a scientist committed to social responsibility to avoid becoming a ‘stealth advocate’ (in the terms of Roger Pielke Jr.). When the policy domain seems unwilling or unable to recognise plain and urgent truths about a problem, the contradictions between scientific probity and campaigning zeal become acute. This is a perennial problem for all policy-relevant science, and it seems to have arisen on a significant scale in the case of climate science. The management of uncertainty and quality in such increasingly common situations is now an urgent task for the governance of science.

We can begin to see what went seriously wrong when we examine what the leading practitioners of this ‘evangelical science’ of global warming (thanks to Angela Wilkinson) took to be the plain and urgent truth in their case. This was not merely that there are signs of exceptional disturbance in the ecosphere due to human influence, nor even that the climate might well be changing more rapidly now than for a very long time. Rather, they propounded, as a proven fact, Anthropogenic Carbon-based Global Warming. There is little room for uncertainty in this thesis; it effectively requires hockey-stick behaviour in all indicators of global temperature, so that the recent rise can be attributed entirely to industrialisation. Its iconic image is the steadily rising graph of CO2 concentrations over the past fifty years, measured at the Mauna Loa volcano in Hawaii (with the implicit assumption that CO2 had always previously been at or below that starting level). Since CO2 has long been known to be a greenhouse gas, with scientific theories quantifying its effects, the scientific case for this dangerous trend could seem to be overwhelmingly simple, direct, and conclusive.

In retrospect, we can ask why this particular, really rather extreme view of the prospect became the official one. It seems that several causes conspired. First, the early opposition to any claim of climate change was only partly scientific; the tactics of the opposing special interests were such as to induce the proponents to adopt a simple, forcefully argued position. Then, once the position was adopted, its proponents became invested in it, and attached to it, in all sorts of ways, institutional and personal. And I suspect that a simplified, even simplistic, claim was more comfortable for these scientists than one where complexity and uncertainty were acknowledged. It is not merely a case of the politicians and public needing a simple, unequivocal message. As Thomas Kuhn described it, ‘normal science’, which (as he said) nearly all scientists do all the time, is puzzle-solving within an unquestioned framework or ‘paradigm’. Issues of uncertainty and quality are not prominent in ‘normal’ scientific training, and so they are less easily conceived and managed by its practitioners.

Now, as Kuhn saw, this ‘normal’ science has been enormously successful in enabling our unprecedented understanding and control of the world around us. But his analysis related to the sciences of the laboratory, and by extension the technologies that could reproduce stable and controllable external conditions for their working. Where the systems under study are complicated, complex or poorly understood, that ‘textbook’ style of investigation becomes less, sometimes much less, effective. The near-meltdown of the world’s financial system can be blamed partly on naïvely reductionist economics and misapplied simplistic statistics. The temptation among ‘normal’ scientists is to work as if their material were as simple as it is in the lab. If nothing else, that is the path to a steady stream of publications, on which a scientific career now so critically depends. The most obvious effect of this style is the proliferation of computer simulations, which give the appearance of solved puzzles even when neither data nor theory provide much support for the precision of their numerical outputs. Under such circumstances, a refined appreciation of uncertainty in results is inhibited, and even awareness of the quality of workmanship can atrophy.

In the course of the development of climate-change science, all sorts of loose ends were left unresolved and sometimes unattended. Even the most fundamental quantitative parameter of all, the forcing factor relating the increase in mean temperature to a doubling of CO2, lies somewhere between 1 and 3 degrees and is thus uncertain to within a factor of 3. The precision (at about 2%) in the statements of the ‘safe limits’ of CO2 concentration, which depend on calculations with this factor, is not easily justified. Also, the predictive power of the global temperature models has been shown to depend more on the ‘story line’ than on anything else, the end-of-century increase in temperature ranging variously from a modest one degree to a catastrophic six. And the ‘hockey stick’ picture of the past, so crucial for the strict version of the climate-change story, has run into increasingly severe problems. As an example, it relied totally on a small set of deeply uncertain tree-ring data for the Medieval period to refute the historical evidence of a warming then; but it needed to discard that sort of data for recent decades, as they showed a sudden cooling from the 1960s onwards! In the publication, the recent data from other sources were skilfully blended in so that the change was not obvious; that was the notorious ‘Nature trick’ of the CRU e-mails.
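To see why a factor-of-3 uncertainty sits so uneasily with a 2%-precise concentration limit, a rough back-of-the-envelope calculation suffices, using the standard logarithmic relation between CO2 concentration and warming. (This is an illustrative sketch only; the pre-industrial baseline of 280 ppm and the 2-degree target are my own assumptions, not figures from the argument above.)

\[
\Delta T \;=\; S \,\log_2\!\frac{C}{C_0}
\qquad\Longrightarrow\qquad
C^{*} \;=\; C_0 \cdot 2^{\,\Delta T_{\max}/S}
\]
\[
S = 3\,^{\circ}\mathrm{C}:\quad C^{*} = 280 \times 2^{2/3} \approx 444\ \mathrm{ppm};
\qquad
S = 1\,^{\circ}\mathrm{C}:\quad C^{*} = 280 \times 2^{2} = 1120\ \mathrm{ppm}.
\]

On these assumptions the implied ‘safe limit’ spans a factor of about two and a half; against that spread, quoting the limit to within 2% conveys a precision that the underlying calculation cannot support.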

Even worse, for the warming case to have political effect, a mere global average rise in temperature was not compelling enough. So that people could appreciate the dangers, there needed to be predictions of future climate – or even weather – in the various regions of the world. Given the gross uncertainties in even the aggregated models, regional forecasts are really beyond the limits of science. And yet they have been provided, with various degrees of precision. Those announced by the IPCC have become the most explosive.

As all these anomalies and unsolved puzzles emerged, the neat, compelling picture became troubled and even confused. In Kuhn’s analysis, this would be the start of a ‘pre-revolutionary’ phase of normal science. But the political cause had been taken up by powerful advocates, like Al Gore. We found ourselves in another crusading ‘War’, like those on (non-alcoholic) Drugs and ‘Terror’. This new War, on Carbon, was equally simplistic, and equally prone to corruption and failure. Global warming science became the core element of this major worldwide campaign to save the planet. Any weakening of the scientific case would have amounted to a betrayal of the good cause, as well as a disruption of the growing research effort. All critics, even those who were full members of the scientific peer community, had to be derided and dismissed. As we learned from the CRU e-mails, they were not considered to be entitled to the normal courtesies of scientific sharing and debate. Requests for information were stalled, and as one witty blogger has put it, ‘peer review’ was replaced by ‘pal review’.

Even now, the catalogue of unscientific practices revealed in the mainstream media is very small in comparison to what is available on the blogosphere. Details of shoddy science and dirty tricks abound. By the end, the committed inner core were confessing to each other that global temperatures were falling, but it was far too late to change course. The final stage of corruption, cover-up, had taken hold. For the core scientists and the leaders of the scientific communities, as well as for nearly all the liberal media, ‘the debate was over’. Denying Climate Change received the same stigma as denying the Holocaust. Even the trenchant criticisms of the most egregious errors in the IPCC reports were kept ‘confidential’. And then came the e-mails.

We can understand the root cause of Climategate as a case of scientists constrained to attempt to do normal science in a post-normal situation. But climate change had never been a really ‘normal’ science, because the policy implications were always present and strong, even overwhelming. Indeed, if we look at the definition of ‘post-normal science’, we see how well it fits: facts uncertain, values in dispute, stakes high, and decisions urgent. In needing to treat Planet Earth like a textbook exercise, the climate scientists were forced to break the rules of scientific etiquette and ethics, and to play scientific power-politics in a way that inevitably became corrupt. The combination of non-critical ‘normal science’ with anti-critical ‘evangelical science’ was lethal. As in other ‘gate’ scandals, one incident served to pull a thread on a tissue of protective plausibilities and concealments, and eventually led to an unravelling. What was in the e-mails could be largely explained in terms of embattled scientists fighting off malicious interference; but the materials ready and waiting on the blogosphere provided a background, and that is what converted a very minor scandal into a catastrophe.

Consideration of those protective plausibilities can help to explain how the illusions could persist for so long until their sudden collapse. The scientists were all reputable, they published in leading peer-reviewed journals, and their case was itself highly plausible and worthy in a general way. Individual criticisms were, for the public and perhaps even for the broader scientific community, kept isolated and hence muffled and lacking in systematic significance. And who could have imagined that at its core so much of the science was unsound? The plausibility of the whole exercise was, as it were, bootstrapped. I myself was alerted to weaknesses in the case by some caveats in The Hot Topic, the book by Sir David King and Gabrielle Walker; and I had heard of the hockey-stick affair. But even I was carried along by the bootstrapped plausibility, until the scandal broke. (I have benefited from the joint project on plausibility in science of colleagues in Oxford and at Arizona State University.)

Part of the historic significance of Climategate is that the scandal was so effectively and quickly exposed. Within a mere two months of the first reports in the mainstream media, the key East Anglia scientists and the Intergovernmental Panel on Climate Change were discredited. Even if only a fraction of their scientific claims were eventually refuted, their credibility as trustworthy scientists was lost. To explain how it all happened so quickly and decisively, we have the confluence of two developments, one social and the other technical. For the former, there is a lesson of Post-Normal Science, that we call the Extended Peer Community. In traditional ‘normal’ science, the peer community, performing the functions of quality-assurance and governance, is strictly confined to the researchers who share the paradigm. In the case of ‘professional consultancy’, the clients and/or sponsors also participate in governance. We have argued that in the case of Post-Normal Science, the ‘extended peer community’, including all those affected by the policy being implemented, must be fully involved. Its particular contribution will depend on the nature of the core scientific problem, and also on the phase of investigation. Detailed technical work is a task for experts, but quality-control on even that work can be done by those with much broader expertise. And on issues like the definition of the problem itself, the selection of personnel, and crucially the ownership of the results, the extended peer community has full rights of participation. This principle is effectively acknowledged in many jurisdictions, and for many policy-related problems. The theory of Post-Normal Science goes beyond the official consensus in recognising ‘extended facts’, which might include local knowledge and values, as well as unofficially obtained information.

The task of creating and involving the extended peer community (generally known as ‘participation’) has been recognised as difficult, with its own contradictions and pitfalls. It has grown haphazardly, with isolated successes and failures. Hitherto, critics of scientific matters have been relegated to a sort of samizdat world, exchanging private letters or writing books that can easily be ignored (as not being peer-reviewed) by the ruling establishment. This has generally been the fate of even the most distinguished and responsible climate-change critics, up to now. A well-known expert in uncertainty management, Jeroen van der Sluijs, explicitly condemned the ‘overselling of certainty’ and predicted the impending destruction of trust; but he received no more attention than did Nassim Nicholas Taleb in warning of the ‘fat tails’ in the probability distributions of securities that led to the Credit Crunch. A prominent climate scientist, Mike Hulme, provided a profound analysis in Why We Disagree About Climate Change, in terms of complexity and uncertainty. But since legitimate disagreement was deemed nonexistent, he too was ignored.

To have a political effect, the ‘extended peers’ of science have traditionally needed to operate largely by means of activist pressure-groups using the media to create public alarm. In this case, since the global warmers had captured the moral high ground, criticism remained scattered and ineffective, except on the blogosphere. The position of Green activists is especially difficult, even tragic; they have been ‘extended peers’ who were co-opted into the ruling paradigm, which in retrospect can be seen as a decoy or diversion from the real, complex issues of sustainability, as shown by Mike Hulme. Now they must do some very serious re-thinking about their position and their role.
