Science is facing a “reproducibility crisis” where more than two-thirds of researchers have tried and failed to reproduce another scientist’s experiments, research suggests.
This is frustrating clinicians and drug developers who want solid foundations of pre-clinical research to build upon.
From his lab at the University of Virginia’s Centre for Open Science, immunologist Dr Tim Errington runs The Reproducibility Project, which attempted to repeat the findings reported in five landmark cancer studies.
“The idea here is to take a bunch of experiments and to try and do the exact same thing to see if we can get the same results.”
You could be forgiven for thinking that should be easy. Experiments are supposed to be replicable.
The authors should have done it themselves before publication, and all you have to do is read the methods section in the paper and follow the instructions.
Sadly nothing, it seems, could be further from the truth.
After years of painstaking work (the project was launched in 2011), the team was able to confirm the findings of only two of the original studies.
Two more proved inconclusive, and in the fifth case the team failed to replicate the result at all.
“It’s worrying because replication is supposed to be a hallmark of scientific integrity,” says Dr Errington.
Concern over the reliability of the results published in scientific literature has been growing for some time.
According to a survey published in the journal Nature last summer, more than 70% of researchers have tried and failed to reproduce another scientist’s experiments.
Marcus Munafo is one of them. Now professor of biological psychology at Bristol University, he almost gave up on a career in science when, as a PhD student, he failed to reproduce a textbook study on anxiety.
“I had a crisis of confidence. I thought maybe it’s me, maybe I didn’t run my study well, maybe I’m not cut out to be a scientist.”
The problem, it turned out, was not with Marcus Munafo’s science, but with the way the scientific literature had been “tidied up” to present a much clearer, more robust outcome.
“What we see in the published literature is a highly curated version of what’s actually happened,” he says.
“The trouble is that gives you a rose-tinted view of the evidence because the results that get published tend to be the most interesting, the most exciting, novel, eye-catching, unexpected results.
“What I think of as high-risk, high-return results.”
The reproducibility difficulties are not about fraud, according to Dame Ottoline Leyser, director of the Sainsbury Laboratory at the University of Cambridge.
That would be relatively easy to stamp out. Instead, she says: “It’s about a culture that promotes impact over substance, flashy findings over the dull, confirmatory work that most of science is about.”