Temperature trends (in °C/century, encoded as colors) over the whole recorded history of the roughly 5,000 stations included in HadCRUT3. To be discussed below.
As Shawn has also noticed, the worst defect is associated with the 863rd (out of 5,113) station, in Jeddah, Saudi Arabia, which hasn’t submitted any data at all. For many stations, some months (and sometimes whole years) are missing, so you get -99 instead. This shouldn’t be confused with numbers like -78.9: believe me, stations in Antarctica have recorded average monthly temperatures as low as -78.9 °C. That’s not just a minimum experienced for an hour: it’s the monthly average.
I wanted to know what the actual temperature trends recorded at all the stations are – i.e. what the statistical distribution of these slopes looks like. Shawn had the good idea of avoiding the computation of temperature anomalies (i.e. the subtraction of the seasonally varying “normal temperature”): one may calculate the trends for each of the 12 months separately.
To a very satisfactory accuracy, the temperature trend for the anomalies, which combine all the months, is just the average of those 12 monthly trends. In all these calculations, you must carefully omit all the missing data – indicated by the value -99. But first, let me assure you that the stations are mostly “old enough”:
As you can see, a large majority of the 5,000 weather stations are 40–110 years old (if you define a station’s age as endYear minus startYear). The average age is 77 years – partly because a nonzero number of stations have more than 250 years of data. So it’s not true that the “bizarre” trends arise mainly from a very small number of short-lived, young stations.
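For concreteness, here is a minimal Python sketch of the procedure described above – fit a least-squares slope to each month’s series while skipping the -99 sentinel, then average the 12 slopes. The function and variable names are mine, not taken from any actual script:

```python
import numpy as np

MISSING = -99.0  # sentinel used for missing monthly values


def monthly_trend(years, temps):
    """Least-squares slope in °C/century for one month's series,
    ignoring the -99 missing-data sentinel."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    mask = temps != MISSING
    if mask.sum() < 2:
        return np.nan
    # slope of a degree-1 fit is in °C/year; rescale to °C/century
    slope = np.polyfit(years[mask], temps[mask], 1)[0]
    return 100.0 * slope


def station_trend(years, monthly_series):
    """Average of the 12 per-month trends, which approximates the
    trend of the station's anomalies to a very good accuracy."""
    trends = [monthly_trend(years, monthly_series[m]) for m in range(12)]
    return np.nanmean(trends)
```

A station whose temperatures rise by exactly 0.01 °C/year would yield a trend of 1.0 °C/century from either function, with or without a few -99 gaps in the record.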
Following Shawn’s idea, I computed the 12 histograms for the overall historical warming trends corresponding to the 12 months. They look like this:
Click to zoom in.
You may be irritated that the first histogram looks much broader than e.g. the fourth one and you may start to wonder why that is. In the end, you will realize that it’s just an illusion – the visual difference arises because the scale on the y-axis differs, and it differs because a single “central bin” in the middle can reach a much higher maximum than two central bins that share the peak. 😉
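You can convince yourself of this binning artifact with a quick simulated example – a Python sketch using made-up Gaussian data, not the station data:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 100_000)  # symmetric bell-shaped sample

# Two binnings with the same bin width (2 units). In the first, one bin
# is centered on the peak; in the second, a bin edge sits exactly at the
# peak, splitting it between two bins.
centered_edges = np.arange(-7.0, 8.0, 2.0)  # bin (-1, 1) straddles the peak
split_edges = np.arange(-6.0, 7.0, 2.0)     # edge at 0 splits the peak

peak_centered = np.histogram(data, centered_edges)[0].max()
peak_split = np.histogram(data, split_edges)[0].max()

# peak_centered comes out roughly 40% taller than peak_split, even
# though the underlying distribution is identical in both cases.
```

The identical data thus produce visibly different-looking histograms, which is exactly the y-axis illusion described above.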
This insight is easily verified if you actually sketch a basic table for these 12 histograms:
The columns indicate the month, starting from January; the number of stations that yielded a legitimate trend for that month; the average trend over those stations for the given month, in °C/century; and the standard deviation – the width of the histogram.
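Assuming the per-station, per-month trends are stored in a single array (my assumed layout, not necessarily the original one), such a table can be reproduced in a few lines of Python:

```python
import numpy as np


def monthly_summary(trends):
    """trends: array of shape (n_stations, 12), with NaN wherever a
    station had no legitimate trend for that month. Returns one row per
    month: (month, number of stations, mean trend, standard deviation),
    with trends in °C/century."""
    rows = []
    for m in range(12):
        col = trends[:, m]
        valid = col[~np.isnan(col)]
        rows.append((m + 1, valid.size, valid.mean(), valid.std()))
    return rows
```

Each row then corresponds to one of the 12 histograms: its sample size, its center, and its width.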
You may actually see that September (closely followed by October) saw the slowest warming trend in these 5,000 stations – about 0.5 °C per century – while February (closely followed by March) had the fastest trend, 1.1 °C per century or so. The monthly trends are somewhat random numbers in the ballpark of 0.7 °C per century, but the trend as a function of the month looks more like a continuous, sine-like curve than like white noise.
At any rate, it’s untrue that the 0.7 °C of warming in the last century is a “universal” number. In fact, for each month you get a different figure, and the maximum one is more than 2 times larger than the minimum one. The warming trends depend hugely both on the place and on the month.
The standard deviations of the temperature trend (evaluated for a fixed month of the year but over the statistical ensemble of all the legitimate weather stations) go from 2.14 °C per century in September to 2.64 °C per century in February – the same winners and losers! The difference is much smaller than the huge “apparent” difference between the widths of the histograms that I have explained away. You may say that the temperatures in February tend to oscillate much more than those in September because there’s a lot of potential ice – or missing ice – on the dominant Northern Hemisphere. The ice-albedo feedback and other ice-related effects amplify the noise – as well as the (largely spurious) “trends”.
Finally, you may combine all the monthly trends in a huge melting pot. You will obtain this beautiful Gauss-Lorentz hybrid bell curve:
It’s a histogram containing 58,579 monthly/local trends – trends faster than a certain large bound were omitted, but you can see that it was a small fraction anyway. The curve may be imagined to be a normal distribution with an average trend of 0.76 °C per century – note that many stations are just 40 years old or so, which is why they may see a slightly faster warming. However, this number is far from universal over the globe. In fact, the Gaussian has a standard deviation of 2.36 °C per century.
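A sketch of how the pooled statistics may be computed – here the cutoff bound=20 is my arbitrary stand-in for the unspecified “certain large bound”:

```python
import numpy as np


def pooled_trend_stats(trends, bound=20.0):
    """Pool all finite station-month trends (°C/century), drop any with
    |trend| > bound, and return (count, mean, std) of the pooled sample.
    The default bound is a hypothetical placeholder, not the original one."""
    flat = np.asarray(trends, dtype=float).ravel()
    flat = flat[np.isfinite(flat)]
    flat = flat[np.abs(flat) <= bound]
    return flat.size, flat.mean(), flat.std()
```

Applied to the full station-month array, this would yield the quoted count of 58,579 trends with mean 0.76 and standard deviation 2.36 °C per century.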
The “error of the measurement” of the warming trend is 3 times larger than the result!
If you ask a simple question – how many of the 58,579 trends determined by a month and a place (a weather station) are negative, i.e. cooling trends – you will see that it is 17,774, i.e. 30.3 percent of them. Even if you compute the average trend over all months for each station, you will get very similar results. After all, the trends for a given station don’t depend on the month too much. It will still be true that roughly 30% of the weather stations recorded a cooling trend in the monthly anomalies over their whole record.
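The quoted fraction is easy to double-check, and it’s instructive to compare it with what a pure Gaussian with the quoted mean and width would predict. A pure Gaussian would actually put somewhat more probability below zero, which would be consistent with the fat “Lorentz” tails inflating the standard deviation relative to the core of the distribution. A Python sanity check (mine, not part of the original analysis):

```python
from math import erf, sqrt

cooling, total = 17_774, 58_579
share = 100.0 * cooling / total  # observed share of cooling trends, ~30.3%


def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))


# Share of negative trends predicted by a pure Gaussian with the quoted
# mean 0.76 and standard deviation 2.36 °C/century: roughly 37%.
gaussian_share = 100.0 * normal_cdf(0.0, 0.76, 2.36)
```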
Finally, I will repeat the same Voronoi graph we saw at the beginning (where I have used sharper colors because I redefined the color function from “x” to “tanh(x/2)”):
Ctrl/click to zoom in (new tab).
The areas are assigned to their nearest weather station – that’s what the term “Voronoi graph” means. And the color is chosen according to a temperature color scheme in which the quantity determining the color is the overall warming (+, red) or cooling (-, blue) trend ever recorded at the given weather station.
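The tanh(x/2) recoloring mentioned above can be sketched as follows – a hypothetical helper of mine, assuming a color scale that maps the interval (-1, 1) to blue-white-red:

```python
import numpy as np


def trend_color_value(trend):
    """Squash a trend in °C/century into (-1, 1) for a blue-to-red scale.
    tanh(x/2) keeps moderate trends visibly distinct (sharper colors)
    while saturating the few extreme outliers."""
    return np.tanh(np.asarray(trend, dtype=float) / 2.0)
```

A zero trend maps to white (0), a typical ±1 °C/century trend to a clearly visible ±0.46, and extreme trends saturate near ±1.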
It’s not hard to see that the number of places with a mostly blue color is substantial. The cooling stations are partly clustered, although there’s still a lot of noise – especially at weather stations that are very young, or short-lived and already closed.
As far as I remember, this is the first time I could quantitatively calculate the actual local variability of the global warming rate. Just as I expected, it is huge – and comparable to some of my rougher estimates. Even though the global average yields an overall positive temperature trend – a warming – it is far from true that this warming trend appears everywhere.
In this sense, the warming recorded by the HadCRUT3 data is not global. Despite the fact that the average station records 77 years of temperature history, 30% of the stations still manage to end up with a cooling trend. The warming at a given place is 0.76 ± 2.36 °C per century.
If the rate of warming in the coming 77 years or so were analogous to the previous 77 years, a given place would still have a 30% probability of cooling down – judging by the linear regression – in those future 77 years! However, it’s also conceivable that the noise is so substantial and the sensitivity so low that once the weather stations add 100 years to their records, 70% of them will actually show a cooling trend.
Even if you imagine that the warming rate in the future will be 2 times faster than it was in the last 77 years (on average), it would still be true that in the next 40 years or so, i.e. by 2050, almost one third of the places on the globe will experience a cooling relative to 2010 or 2011! So forget about the Age of Stupid doomsday scenario around 2055: it’s more likely than not that more than 25% of places will actually be cooler in 2055 than in 2010.
Isn’t it remarkable? There is nothing “global” about the warming we have seen in the recent century or so.
The warming vs cooling depends on the place (as well as the month, as I mentioned), and the warming places only have a 2-to-1 majority, while the cooling places are a sizable minority. Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because an exactly zero result is infinitely unlikely. But the actual change of the global mean temperature over the last 77 years is so tiny that the place-dependent noise still safely beats the “global warming trend”, yielding a sign of the temperature trend that is ambiguous and depends on the place.
Imagine, just for the sake of the argument, that any change of the temperature (calculated as a trend from linear regression) is bad for every place on the globe. It’s not true, but just imagine it. Then it would be a good idea to reduce the temperature change between now and, e.g., the year 2087.
Now, all places on the planet will pay billions for special projects to help cool the globe. However, 30% of the places will find out in 2087 that they have actually made the problem worse, because they will have gotten a cooling – and they will have helped to make that cooling even worse! 😉
Because of this subtlety, it would be obvious nonsense to try to cool the globe down even if the global warming mattered, because it’s extremely far from certain that cooling is what a given place would need to regulate its temperature. The regional “noise” is far larger than the trend of the global average, so every single place on Earth can neglect the changes of the global mean temperature if it wants to know the future change of its local temperature.
The temperature changes either fail to be global or they fail to be warming. There is no global warming – this term is just another name for a pile of feces.
And that’s the memo.