One theme of this blog has been the failure of the predictions made by expert climate scientists, together with the failure to acknowledge or investigate this failure.
Last night we had another very interesting example of expert predictions failing.
With all the results now in, we know that the Conservatives have 331 seats, and Labour 232.
How does this compare with the various predictions made just before the vote?
| | Con | Lab |
| --- | --- | --- |
| Final Result | 331 | 232 |
| Bookies (oddschecker) | 287 | 267 |
| Nate Silver (538) | 278 | 267 |
| Guardian | 273 | 273 |
| British Election Study | 274 | 278 |
I’ve listed here some of the predictions made yesterday, in decreasing order of accuracy. The “Bookies” row comes from Oddschecker, which lists the odds provided by 20 or so bookies in a neat table (currently showing, for example, the options for the next Labour leader). You’ll have to take my word for it that I copied down their most likely outcome correctly.
Nate Silver’s prediction is still online; he is sometimes regarded as a guru of great wisdom, despite having got the 2010 UK election spectacularly wrong (he predicted about 100 Lib Dem seats). The final projection from the Guardian was a dead heat between Labour and the Conservatives. The British Election Study is a group of, um, expert UK academics. Their final forecast is here.
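Incidentally, the “decreasing order of accuracy” claim is easy to check. Here is a minimal sketch in Python, scoring each forecast by its total absolute seat error against the final result (my own choice of metric, not one used by any of the forecasters):

```python
# Score each forecast by total absolute seat error against the final result.
# The metric is my own choice, purely for ranking purposes.

result = {"Con": 331, "Lab": 232}

forecasts = {
    "Bookies (oddschecker)":  {"Con": 287, "Lab": 267},
    "Nate Silver (538)":      {"Con": 278, "Lab": 267},
    "Guardian":               {"Con": 273, "Lab": 273},
    "British Election Study": {"Con": 274, "Lab": 278},
}

def total_error(forecast):
    # Sum of absolute seat errors over the two main parties.
    return sum(abs(forecast[p] - result[p]) for p in result)

for name, f in sorted(forecasts.items(), key=lambda kv: total_error(kv[1])):
    print(f"{name}: off by {total_error(f)} seats in total")
```

This reproduces the ordering in the table: the bookies were off by 79 seats in total, Nate Silver by 88, the Guardian by 99 and the British Election Study by 103.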
The first thing to note, of course, is that everyone got it badly wrong, greatly underestimating Conservative support. Reasons for this include:
(a) the “Closet Conservative” factor – there is a tendency for people not to own up to supporting the Conservative party, and
(b) incorrect sampling by the pollsters – perhaps quiet conservatives stay at home, don’t answer the phone much and aren’t as eager as some others to express their opinions.
However, I thought that the pollsters were well aware of these factors – particularly since the 1992 election, when something very similar happened – and had compensated for them.
But what I found most interesting is that of all the predictions, the worst was that given by the team of expert university academics. Roger Pielke wrote a post about their predictions back in March, when their average prediction was similar to that in the table above, suggesting a small lead for Labour. There was a consensus – in fact not a 97% consensus, but a 100% consensus – among the experts that the Conservatives would get fewer than 300 seats. But the consensus was wrong.
Why does a team of experts perform worse than the bookies, who presumably base their odds mainly on the money placed, i.e. on public opinion?! One possible explanation for this apparent contradiction is suggested by the work of Jose Duarte and others on the effects of the well-known left-wing bias in academia: it may be that the researchers are inadvertently building their own political bias into the assumptions they make in their model, and that this is influencing their results.
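To see why the bookies’ odds amount to a snapshot of public opinion, here is a minimal sketch (the odds figures are hypothetical, for illustration only) of how decimal odds translate into implied probabilities once the bookmaker’s margin is stripped out:

```python
# Hypothetical decimal odds for "most seats" shortly before the vote
# (illustrative numbers, not the actual quoted odds).
odds = {"Con": 1.3, "Lab": 3.5}

# The implied probability of an outcome is the inverse of its decimal odds.
raw = {party: 1 / o for party, o in odds.items()}

# The raw probabilities sum to more than 1; the excess ("overround")
# is the bookmaker's built-in margin, so we rescale to remove it.
overround = sum(raw.values())
implied = {party: p / overround for party, p in raw.items()}

for party, p in implied.items():
    print(f"{party}: implied probability {p:.0%}")
```

Because the odds shorten or drift as money is placed, the implied probabilities track where the betting public thinks the result is heading.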
Other possible explanations for the surprise election results and the apparent failure of the expert predictions are as follows:
* This is just a short-term fluctuation – a hiatus, or pause, in the Labour vote – that the models cannot be expected to predict correctly. The experts have much more confidence in their projection for the 2100 election. (HT David)
* The raw data from the election results is not reliable, and needs to be adjusted by the experts. After suitable UHI and homogeneity adjustments have been applied, the results are in line with the expert predictions, and Ed Miliband is declared the new Prime Minister.
* More funding and bigger computers are urgently needed, so that we can get more accurate predictions.
* The missing Labour voters are hiding at the bottom of the oceans.
Finally, Feynman’s rule applies again: “Science is the belief in the ignorance of experts.”