
Alarming cracks are starting to penetrate deep into the scientific edifice. They threaten the status of science and its value to society. And they cannot be blamed on the usual suspects — inadequate funding, misconduct, political interference, an illiterate public. Their cause is bias, and the threat they pose goes to the heart of research.

Dan Sarewitz has a column in Nature entitled “Beware the creeping cracks of bias,” with the subtitle “Evidence is mounting that research is riddled with systematic errors. Left unchecked, this could erode public trust.” Some excerpts:

Bias is an inescapable element of research, especially in fields such as biomedicine that strive to isolate cause–effect relations in complex systems in which relevant variables and phenomena can never be fully identified or characterized. Yet if biases were random, then multiple studies ought to converge on truth. Evidence is mounting that biases are not random.

Early signs of trouble were appearing by the mid-1990s, when researchers began to document systematic positive bias in clinical trials funded by the pharmaceutical industry. Initially these biases seemed easy to address, and in some ways they offered psychological comfort. The problem, after all, was not with science, but with the poison of the profit motive. It could be countered with strict requirements to disclose conflicts of interest and to report all clinical trials.

Yet closer examination showed that the trouble ran deeper. Science’s internal controls on bias were failing, and bias and error were trending in the same direction — towards the pervasive over-selection and over-reporting of false positive results.

How can we explain such pervasive bias? Like a magnetic field that pulls iron filings into alignment, a powerful cultural belief is aligning multiple sources of scientific bias in the same direction. The belief is that progress in science means the continual production of positive findings. All involved benefit from positive results, and from the appearance of progress. Scientists are rewarded both intellectually and professionally, science administrators are empowered and the public desire for a better world is answered. The lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated — but the necessary cultural change is incredibly difficult to achieve.

Researchers seek to reduce bias through tightly controlled experimental investigations. In doing so, however, they are also moving farther away from the real-world complexity in which scientific results must be applied to solve problems.

Scientists rightly extol the capacity of research to self-correct. But the lesson coming from biomedicine is that this self-correction depends not just on competition between researchers, but also on the close ties between science and its application that allow society to push back against biased and useless results.

It would therefore be naive to believe that systematic error is a problem for biomedicine alone. It is likely to be prevalent in any field that seeks to predict the behaviour of complex systems — economics, ecology, environmental science, epidemiology and so on. The cracks will be there, they are just harder to spot because it is harder to test research results through direct technological applications (such as drugs) and straightforward indicators of desired outcomes (such as reduced morbidity and mortality).

Nothing will corrode public trust more than a creeping awareness that scientists are unable to live up to the standards that they have set for themselves. Useful steps to deal with this threat may range from reducing the hype from universities and journals about specific projects, to strengthening collaborations between those involved in fundamental research and those who will put the results to use in the real world. There are no easy solutions. The first step is to face up to the problem — before the cracks undermine the very foundations of science.

Brian Martin’s book

Brian Martin has an online book entitled The bias of science (published in 1979).  From the jacket blurb:

How do values enter into science? And what values are they? In The bias of science, applied mathematician Brian Martin traces the issues involved in these questions from the details of scientific research work to the structure of the scientific community and of scientific knowledge. The bias of science starts out as a case study of two scientific research papers, which are about the pollution of the upper atmosphere by Concorde-type aircraft. The writers of these papers are shown to ‘push their arguments’ in various ways, such as through their technical assumptions. Dr Martin argues that the particular orientations of the authors of the papers can best be explained in terms of ‘presuppositions’ about what the scientists are trying to prove. Evidence that the existence of such presuppositions is a common and expected feature of science leads to analyses of other scientific papers, to surveys of the sociology and epistemology of science and the psychology of scientists, and to a comparison of communication of scientific ideas in scientific papers and newspapers.

The idea of presuppositions is then used in a more general sense to look at the structural biases underlying science in general – scientific research, the scientific community, and scientific knowledge. Martin looks critically at political and economic influences on scientific research, at the selective usefulness of scientific work to different groups in society, at the use of science to justify political decisions, and at the fundamental biases in scientific knowledge itself.

Finally, to highlight both the presuppositions underlying current science and the values of the author, the case for self-managed science – a science participated in by all the community in a self-managed, non-hierarchical society – is argued.

The bias of science is unique in basing a critique of science on a detailed analysis of a particular research area in the physical sciences. Its aim is to show how an analysis of science can be followed through, rather than authoritatively preach. The bias of science is also one of the very few comprehensive critiques of science by a young practising research scientist.

I’ve skimmed this book; there is some very provocative and insightful material here.  In perusing Martin’s other publications, I have flagged some for future posts.

JC comments: Recognition of bias and its sources is the first step.  As Sarewitz states, “A biased scientific result is no different from a useless one.”