With speculation over whether the predicted 2014 El Nino Pacific Ocean warming event will actually take place, Columbia University climate scientist Anthony Barnston explains why the past history of the El Nino Southern Oscillation system is not much help in making predictions this time.
Lately, many of us are wondering if a 2014-15 El Nino is going to materialize, and if so, how strong it might become and how long it will last. It might cross some folks’ minds that the answer to these questions can be found by collecting past ENSO cases that are similar and seeing what happened next. Such an approach is known as analog forecasting, and on some level it makes intuitive sense.
In this post, I’ll discuss why the analog approach to forecasting often delivers disappointing results. Basically, it doesn’t work well because there are usually very few, if any, past cases on record that mimic the current situation sufficiently closely. The scarcity of analogs is important because dissimilarities between the past and the present, even if seemingly minor, amplify quickly so that the two cases end up going their separate ways.
The current situation is interesting because it seems we have been teetering on the brink of El Nino, as our best dynamical and statistical models keep delaying the onset yet continue to predict the event starting in fairly short order. Which raises the question: have there been other years that have behaved similarly to 2014? Before we check, let’s talk for a minute about how we find good analogs for the current situation.
The set of criteria by which the closest analogs are selected is a contested issue in forecasting. One can select years based on time series, maps, or many variables across different periods of time. There are also many different ways to measure similarity, and one has to select the appropriate level of closeness to past cases—in other words, decide how close is close enough.
One main criticism of analog forecasting is the subjectivity of making such choices, which can lead to different answers. Here, I will use one method based on similarities of sea surface temperature (SST) in the Nino3.4 region (1). Figure 1 shows six other years during the 1950-2013 period that have behaved similarly to this year in terms of SST, and also shows what happened in the seven months following September.
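The selection step described above can be sketched in a few lines of code. This is a minimal illustration, not the actual method behind Figure 1: it ranks past years by the root-mean-square difference between their Nino3.4 SST anomaly trajectories and the current year's, and keeps the closest ones. The function name and the anomaly values are made up for the example.

```python
import numpy as np

def closest_analogs(current, history, k=6):
    """Rank past years by similarity of their Nino3.4 SST anomaly
    trajectories to the current year's, using root-mean-square
    difference, and return the k closest years.

    current : 1-D array of monthly anomalies for the current year (deg C)
    history : dict mapping year -> anomaly sequence of the same length
    """
    rms = {yr: np.sqrt(np.mean((np.asarray(traj) - current) ** 2))
           for yr, traj in history.items()}
    return sorted(rms, key=rms.get)[:k]

# Illustrative (made-up) monthly anomaly trajectories, deg C
current = np.array([-0.3, -0.2, 0.0, 0.2, 0.3, 0.4])
history = {
    1986: [-0.4, -0.3, -0.1, 0.1, 0.3, 0.5],
    1990: [ 0.2,  0.3,  0.3, 0.2, 0.2, 0.1],
    2012: [-0.6, -0.5, -0.2, 0.0, 0.1, 0.2],
}
print(closest_analogs(current, history, k=2))  # → [1986, 2012]
```

The real choices that make this subjective are hidden in the details: which months to compare, which distance measure to use, and how many analogs to keep.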
Figure 1. Monthly average SST anomaly in the Nino3.4 region over the last 15 months for this year (thick black line), and the same for 6 other years selected as the closest analogs for this year. After September, the last month for which we have observations for this year, the ENSO behavior of the chosen analog cases is shown for the following seven months, providing a basis for a possible analog forecast for the current year. Image credit: IRI, Columbia University and NOAA Climate Program Office.
In checking out the analog forecast possibilities in Fig. 1, it is clear that the outcomes are diverse. Out of the six selected cases, three indicate ENSO-neutral for the coming northern winter season, while the other three show El Nino (at least 0.5˚C anomaly)—and all three attain moderate strength (at least 1˚C anomaly) for at least one 3-month period during the late fall or winter (2). For the coming January, the six analogs range from -0.3 to 1.4˚C, revealing considerable uncertainty in the forecast (3).
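The spread among the analogs can be summarized just as simply. In the sketch below, only the endpoint values (-0.3 and 1.4˚C) and the 0.5˚C El Nino threshold come from the post; the four middle anomaly values are illustrative placeholders, not the actual analog outcomes.

```python
# Hypothetical January Nino3.4 anomalies for six analogs (deg C);
# only the endpoints -0.3 and 1.4 are stated in the post.
jan_anoms = [-0.3, 0.1, 0.3, 0.6, 1.0, 1.4]

def enso_state(anom, thresh=0.5):
    """Classify an anomaly using the 0.5 deg C threshold from the post."""
    if anom >= thresh:
        return "El Nino"
    if anom <= -thresh:
        return "La Nina"
    return "neutral"

spread = max(jan_anoms) - min(jan_anoms)
states = [enso_state(a) for a in jan_anoms]
print(f"range: {min(jan_anoms)} to {max(jan_anoms)} "
      f"(spread {spread:.1f} deg C)")
print(states)
```

With these placeholder values, half the analogs land in the neutral category and half cross the El Nino threshold, mirroring the three-versus-three split described above, and the 1.7˚C spread is what makes the analog forecast so uncertain.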
Although this uncertainty in outcomes is somewhat smaller than what we would have if we selected years completely randomly from the history, it is larger than that from our most advanced dynamical and statistical models. This is one reason analog forecasting systems have been largely abandoned over the last two decades as more modern prediction systems have proven to provide better accuracy.
The large spread among the six analog cases selected for a current ENSO forecast is not unusual, and it would be nearly as large even if we came up with a more sophisticated analog ENSO forecast system (4). The big problem in analog forecasting is the lack of close enough analogs in the pool of candidates.