“The first principle is that you must not fool yourself—and you are the easiest person to fool.”

This quote from Richard Feynman’s 1974 Caltech commencement address is a welcome earworm repeating in my head whenever I’m thinking about probabilities and decision-making.
At the time of this writing, at least as it pertains to science, most of us are only thinking about SARS-CoV-2 and COVID-19. What policies will do more good than harm to individuals and society in the short- and long-term? You are only fooling yourself if you don’t think there’s a high degree of uncertainty about the best path forward.
In these times, alternative and opposing opinions on the problems and solutions surrounding the pandemic need to be heard, not silenced. Yet, popular platforms may not be allowing this to happen. YouTube’s CEO, for example, at one point reportedly said, “anything that goes against WHO recommendations” on the COVID-19 pandemic would be “a violation of our policy,” and those videos would be removed. Twitter also updated its policy, broadening its definition of harmful content to include “content that goes directly against guidance from authoritative sources of global and local public health information.” Facebook, for its part, updated its site to say it was “removing COVID-19 content and accounts from recommendations, unless posted by a credible health organization.”
I completely understand these positions, and, on balance, they probably do more good than harm; however, they may come at a cost down the line (or even right now). If history is any guide, censoring opinions that contradict institutional authorities or the conventional wisdom often doesn’t end well. Then again, as the German philosopher Hegel put it, the only thing we learn from history is that we learn nothing from history. While COVID-19 presents us with a particularly thorny case of decision-making based on scientific uncertainty, this issue is perennial in science. (If you want to read a good article arguing for debate and alternative viewpoints specific to the case of COVID-19, check out this one co-authored by Vinay Prasad and Jeffrey Flier in STAT, the former of whom I am scheduled to interview in the coming months.)
We are our own worst enemies when it comes to identifying any shortcomings in our hypotheses. We are victims of confirmation bias, groupthink, anchoring, and a slew of other cognitive biases. The worst part is that we are often unaware of our biases, which is why we’re the easiest people to fool. As painful as it seems, considering problems and solutions from a perspective that contradicts our own is one of the best ways to enhance our decision-making. But thinking this way, deliberately and methodically, is a practice, and though it’s really hard, it is necessary in order to sharpen our cognitive swords.
In the early 19th century, the Prussian army adopted war games to train its officers. One group of officers developed a battle plan, and another group assumed the role of the opposition, trying to thwart it. Using a tabletop game called Kriegsspiel (literally “wargame” in German), resembling the popular board game Risk, blue game pieces stood in for the home team—the Prussian army—since most Prussian soldiers wore blue uniforms. Red blocks represented the enemy forces—the red team—and the name has stuck ever since.
Today, red teaming refers to a person or team that helps an organization—the blue team—improve, by taking an adversarial or alternative point of view. In the military, it’s war-gaming and real-life simulations, with the red team as the opposition forces. In computer security, the red team assumes the role of hackers, trying to penetrate the blue team’s digital infrastructure. In intelligence, red teams test the validity of an organization’s approach by considering the possibility of alternative hypotheses and performing alternative analyses. A good red team exposes ways in which we may be fooling ourselves.
“In science we need to form parties, as it were, for and against any theory that is being subjected to serious scrutiny,” wrote the philosopher of science Karl Popper in 1972. “For we need to have a rational scientific discussion, and discussion does not always lead to a clear-cut resolution.” Seeking evidence that contradicts our opinion is a sine qua non in science.
Popper pointed out that a scientist’s theory is an attempted solution in which she has invested great hopes. A scientist is often biased in favor of her theory. If she’s a genuine scientist, however, it’s her duty to try to falsify her theory. But she will inevitably defend it against falsification. It’s human nature. Popper actually found this desirable, since it helps distinguish genuine falsifications from illusory ones. A good blue team keeps the red team honest.
Generally, the more we can introduce and consider opposing views in our thinking, the more we can rely on the knowledge we’re trying to build. But—and this is a very, very big BUT—not all opposing views are equal. Recognizing the difference between scientific claims (worthy of debate, though often still incorrect over time) and pseudoscientific claims (not worthy of debate, as the very foundations on which they sit are not drawn from the discipline of science or the scientific method) is crucial, and a failure to do so makes the following exercise futile. At no time is this distinction between “good” science worthy of debate and “junk” science worthy of the skip/delete button simultaneously more important, and more difficult, to appreciate than it is today, when the barrier to propagating both signal (i.e., good science) and noise (i.e., bad science, or pseudoscience) is essentially non-existent. How do we differentiate between reasonable and baseless views? This is trickier than it seems, because if we’re not vigilant, we may simply dismiss opposing views as quackery just because they happen to contradict our opinion.