Fear is a strong motivator, having evolved over millennia as we protected ourselves against predators. It supports self-preservation by making us risk-averse and cautious. But such a deep, visceral, evolved emotion does not always serve our long-term objective of thriving; it leads to maximin outcomes, and it is often mismatched to the actual threats to our self-preservation. As our environments change around us, we can fear things we shouldn’t and fail to fear things we should; we overthink everything and tend toward a “precautionary principle” approach.
I think such fear helps explain why regulation persists even when it is maladaptive to technological change, so I was happy to read Adam Thierer’s new Mercatus working paper, Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle. Adam lays out a framework for analyzing fear-based attitudes toward technology and technological change that is informed by economics, sociology, psychology, and rhetoric. He tackles the question of why, and how, participants in public policy debates use appeals to fear to sway opinion toward anticipatory regulation and forms of censorship:
While cyberspace has its fair share of troubles and troublemakers, there is no evidence that the Internet is leading to greater problems for society than previous technologies did. That has not stopped some from suggesting there are reasons to be particularly fearful of the Internet and new digital technologies. There are various individual and institutional factors at work that perpetuate fear-based reasoning and tactics.
He analyzes the use of “appeal to fear” and “appeal to force” logic in the construction of arguments for regulation and censorship, focusing on two pairs of case studies: online child safety and violent media, and online privacy and cybersecurity. In deconstructing these arguments he identifies four ways a feared harm can be a myth: it may be empirically unfounded and lacking evidence; other variables may matter more than the feared one in affecting behavior; not all individuals react to the feared variable the same way; and approaches other than regulation may exist that can mitigate its consequences (pp. 5-6).
Adam introduces the phenomenon of the “technopanic”, which is “… a moral panic centered on societal fears about a particular contemporary technology” (p. 7). Because culture often evolves more slowly than technology, these panics can arise while we are still adapting culturally to a new technology; they can demonize the technology and lead to calls to “do something”, typically some form of control-based anticipatory regulation or censorship. A crucial part of manipulating individual attitudes to tap into fear and create advocacy for and acceptance of such regulation is what Adam calls “threat inflation”:
Thus, fear appeals are facilitated by the use of threat inflation. Specifically, threat inflation involves the use of fear-inducing rhetoric to inflate artificially the potential harm a new development or technology poses to certain classes of the population, especially children, or to society or the economy at large. These rhetorical flourishes are empirically false or at least greatly blown out of proportion relative to the risk in question. (p. 9)
Allowing threat inflation and technopanics to drive policy outcomes is socially corrosive and wasteful; it diverts resources away from dealing with actual risks and toward inflated ones, and it creates an environment of suspicion and social control, particularly censorship and information control. After analyzing six factors that create conditions favorable for the development of threat inflation and technopanics regarding Internet technology (nostalgia, special interests, etc., well worth reading in detail), he proposes two categories of policy response that we should pursue instead of prohibition and anticipatory regulation: resiliency and adaptation. We build resiliency to threats through education, transparency, labeling, and the like, and we adapt to living with risk through experimentation, trial-and-error, experience, and social norms. The two are complementary; information-sharing about best practices can shape social norms and change people’s behavior without regulation. For example, I don’t sign my credit cards; instead I write “CHECK ID” in the signature line and present a photo ID when using them. Store clerks and other shoppers who witness this identity-protecting behavior may replicate it, and such diffusion has changed behavior over time (remember back in the 1990s when they used to write your phone number on the receipt? Yikes! But that behavior has gone extinct).
We cannot eliminate risk through resilience and adaptation, but we can’t eliminate it through regulation either. Better to have strong, flexible, adaptable institutions and practices that enable us to continue thriving in unknown and changing conditions, while we enjoy the substantial benefits of technological creativity. I heartily recommend Adam’s paper to you all as a good and thought-provoking read; he also summarizes it in this recent Forbes column.