A new UN report relies on discredited research – and on academics who conceal vital information.
Last Halloween, the Washington Post ran a dramatic headline: “Startling new research finds large buildup of heat in the oceans, suggesting a faster rate of global warming.”
This story was huge news worldwide. Fortune magazine quoted Laure Resplandy, the Princeton University oceanographer who was the research paper’s lead author. “The planet warmed more than we thought,” she said. “It was hidden from us just because we didn’t sample it right.”
In fact, the problem wasn’t hiding in the ocean, but in the paper’s own mathematical calculations. Within days Nic Lewis, a UK private citizen and math whiz, had published the first of four detailed critiques of the paper’s statistical methodology (see here, here, here, and here).
We’re told that research published in prestigious scientific journals is reliable, and that peer review is meaningful. Yet 19 days after those Halloween headlines, the journal announced the authors had acknowledged a number of errors.
Two weeks ago, presumably after months of attempting to rescue the paper, the journal threw in the towel and retracted it wholesale.
What happened in between? The Intergovernmental Panel on Climate Change (IPCC) released a 1,200-page report about oceans. Chapter 5 of that report cites this now-retracted research (see pages 5-27 and 5-183 here).
In fairness, this single citation may just be a typo. There’s a good chance the IPCC meant to cite a different 2018 paper, in which Resplandy was also the lead author.
But the matter doesn’t end there. The UK-based Global Warming Policy Foundation (GWPF) is now pointing out that a crucial conclusion of the IPCC’s report relies heavily on a second paper, titled “How fast are the oceans warming?”
Written by Lijing Cheng and colleagues John Abraham, Zeke Hausfather, and Kevin Trenberth, it was published in January 2019 in Science. The journal labels it a ‘Perspective’ because, rather than being a research paper, it’s more of an argument.
In three places, the Halloween research is cited to support its conclusions. Nowhere do Cheng and his colleagues acknowledge that the statistical methodology of the Halloween research had already been torn to shreds, or that the paper’s authors had already conceded it was flawed.