“Behavioural Government”: Implications for Regulator Behavior

September 6, 2018

The Behavioural Insights Team (BIT) in the UK published a new report titled Behavioural Government. Although we usually talk about “nudging” to correct the irrational choices of individual citizens, this report focuses instead on the behavioral biases of policymakers themselves. The report examines various forms of behavioral bias and proposes a set of strategies to mitigate them.

Established by the UK government in 2010, the BIT (also known as the Nudge Unit) applies behavioral insights to public policy to help individuals make better choices for themselves. Similar ideas have been implemented in many other governments, including the United States. However, a major assumption behind these initiatives is that policymakers have a better grasp of individuals’ optimal choices than the individuals themselves do. Recent research demonstrates a willingness to challenge this assumption, “cast[ing] serious doubts on the capacity of the government to accomplish systematic improvements of seemingly imperfect individual decisions.” The BIT report shares a similar idea, acknowledging that policymakers do not always have a superior understanding of human behavior, but “are themselves influenced by the same heuristics and biases that they try to address in others.”

Although the BIT report primarily focuses on the UK government, this commentary explores the idea in the U.S. context, with an emphasis on regulators’ behavior. It compares private and public decision makers and illustrates the importance of applying behavioral insights to public decision making.

Private vs. public decision makers

Although public decision makers such as regulators, legislators, and judges are often considered more experienced and knowledgeable than the rest of us, there is no concrete evidence that these experts are immune to psychological limitations. Moreover, public decision makers often must make decisions and judgments under uncertain, time-pressured conditions, which likely increases their tendency to rely on heuristics—cognitive shortcuts or rules of thumb—that can lead to systematic errors.

In the long run, public decision makers may be more susceptible to errors than private decision makers, because they miss out on the feedback that participants in private markets can use to reduce their biases. As Howard Beales, Senior Scholar at the GW Regulatory Studies Center (GW RSC), explains in his paper, market interactions between consumers and sellers allow both to learn from their experiences and mistakes and respond accordingly, reducing the influence of irrational choices over time. Public decision makers, by contrast, do not receive this same kind of immediate, observable feedback. Regulators, for example, promulgate rules based on ex-ante cost-benefit estimates and assumptions but rarely conduct retrospective reviews of existing rules to examine the actual (and unintended) impacts of regulatory actions. Although they sometimes receive feedback from stakeholders and legislators, that feedback is not necessarily unbiased and may not reflect the true regulatory reality.

Regulators’ behavioral biases

A limited but growing body of scholarship has examined regulators’ susceptibility to the same behavioral biases that affect private decision makers. In particular, studies have found evidence that U.S. regulators are subject to overconfidence, myopia, and the availability heuristic.

Many have pointed out that regulators tend to be overconfident in their own ability to comprehend the market and design policy interventions. As a result, they are likely to mistakenly believe that they understand all the causes and consequences of a regulatory action, which can produce unintended consequences. For example, auto safety regulations (e.g., seatbelt requirements) led drivers to take more risks, resulting in more pedestrian deaths and nonfatal accidents and offsetting some or all of the regulations’ intended benefits. This is known as the Peltzman effect.

Moreover, regulators can be myopic. Just as consumers tend to value present savings more than future savings, regulators tend to focus on the subject matter they are familiar with or responsible for while overlooking other aspects of related issues. The Corporate Average Fuel Economy standards are an example. One study finds that 87% of the estimated benefits of the standards for light-duty vehicles derive from correcting assumed consumer irrationality about fuel economy choices. The authors criticize regulators for promulgating the rule with a single-minded focus on fuel efficiency while ignoring other motor-vehicle attributes affected by the regulation (e.g., reduced interior space) and the resulting loss in consumer welfare.

Regulatory actions can also be heavily influenced by salient events or crises. An empirical study shows that, in promulgating regulations under the Superfund statute, the Environmental Protection Agency overestimated the risk levels in a number of Superfund cases. The researchers argue that the agency’s risk perception was affected by the public pressure and media attention triggered by the chemical waste dumped into Love Canal, New York, between 1943 and 1952. This tendency for regulatory agendas and debates to be driven by extreme events reflects the availability heuristic: people’s tendency to assess the frequency or probability of an event based on how easily its occurrences can be brought to mind, rather than on its actual probability based on empirical data.

There are many other biases and heuristics that may apply to regulators, as indicated by the BIT report and other studies. In fact, as Cass Sunstein acknowledges, “for every bias identified for individuals, there is an accompanying bias in the public sphere.”

Regulators’ choice architecture

One aspect often overlooked in research on public decision makers’ behavioral biases is the environment in which they make decisions, or what Richard Thaler and Cass Sunstein call “choice architecture.” They argue that decision makers make choices “in an environment where many features, noticed and unnoticed, can influence their decisions.” The institutional settings and incentives inherent in regulators’ choice architecture can amplify (or sometimes mitigate) their susceptibility to behavioral biases.

For example, regulators work within agencies that each have a set mission. An agency’s legislative mandate directs its employees to focus on the policy concerns of its dominant interest (e.g., protecting public health), which makes regulators more likely to myopically pursue certain policy choices while overlooking others. Regulators also face incentives to seek support from public interest groups and to avoid judicial challenges, which might increase their vulnerability to social outcry following salient events or crises, even when scientific support is inadequate.

On the other hand, regulators are usually required to follow specified administrative procedures in rulemaking. They often must develop economic impact analyses to support their regulations and provide notice and seek public comment before issuing a final rule, which may lessen the influence of the availability heuristic because they cannot easily take regulatory action merely in response to public pressure.

In an upcoming GW RSC working paper, Susan Dudley, Director of GW RSC, and I explore in more detail the interactions between regulators’ institutional incentives and behavioral irrationality.

Who should be nudged?

Recognizing that government officials may be subject to behavioral biases, the BIT report proposes a series of strategies to mitigate them, including requiring transparency about the evidence base used to make policy decisions (to mitigate confirmation bias); creating routes for diverse views to be fed in before, during, and after group discussions (to mitigate group reinforcement); and adjusting forecasts to account for evidence from similar past projects (to mitigate optimism bias). In general, these strategies are intended to change policymakers’ choice architecture, strengthening their internal incentives to make rational choices. Unlike mandatory requirements, these strategies are themselves nudges, which Thaler and Sunstein define as interventions that alter “people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.”
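
The last of these strategies, sometimes called reference class forecasting, can be made concrete with a minimal sketch. The Python below is a hypothetical illustration, not anything from the BIT report: the function name, the figures, and the assumption that past overruns are summarized by a simple average of actual-to-estimated cost ratios are all illustrative choices.

```python
# Minimal sketch of reference class forecasting: adjust an optimistic
# ex-ante estimate using evidence from similar past projects.
# All names and figures below are hypothetical illustrations.

def adjust_estimate(internal_estimate, past_ratios):
    """Scale an agency's internal estimate by the average ratio of
    actual to estimated outcomes observed in a reference class of
    comparable past projects."""
    if not past_ratios:
        return internal_estimate  # no reference class evidence to apply
    mean_ratio = sum(past_ratios) / len(past_ratios)
    return internal_estimate * mean_ratio

# Hypothetical example: a rule's compliance cost is projected at $100M,
# but five comparable rules came in 10-40% above their ex-ante estimates.
projected_cost = 100.0                   # $ millions
ratios = [1.10, 1.25, 1.40, 1.15, 1.30]  # actual cost / estimated cost
print(f"Adjusted estimate: ${adjust_estimate(projected_cost, ratios):.1f}M")
# Prints: Adjusted estimate: $124.0M
```

The same scaling logic could apply to benefit estimates or project timelines; the substantive work lies in choosing a genuinely comparable reference class of past projects rather than in the arithmetic itself.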

Interestingly, the idea of nudges, when applied to public policy, initially referred to the government taking regulatory actions to encourage individuals to make optimal choices for themselves. Now we are beginning to see wider acceptance of the idea that the people who nudge need to be nudged as well. In contrast to the extensive research on individuals’ behavioral errors and de-biasing strategies, research applying behavioral insights to regulators’ decision making and regulatory outcomes remains limited. Questions such as whether a mere nudge is sufficient to alter a regulatory choice architecture, and which nudging strategies are feasible and effective in practice, deserve more attention from scholars and practitioners.