Improving Regulatory Science: A Case Study of the National Ambient Air Quality Standards

by Susan E. Dudley, Director, and Marcus Peacock, Visiting Scholar

June 30, 2017

Download the working paper (PDF)


This paper explores the motivations and institutional incentives of participants involved in the development of regulation aimed at reducing health risks, with a goal of understanding and identifying solutions to what the Bipartisan Policy Center has characterized as “a tendency to frame regulatory issues as debates solely about science, regardless of the actual subject in dispute, [that] is at the root of the stalemate and acrimony all too present in the regulatory system today.” We focus our analysis with a case study of the procedures for developing National Ambient Air Quality Standards under the Clean Air Act, and attempt to identify procedural approaches that bring greater diversity (in data, expertise, experience, and accountability) into the decision process.

Regulatory Science and Policy

Regulations intended to address public health and environmental risks depend heavily on scientific information. These regulations are often the subject of heated debate, involving accusations of “politicized science,” “advocacy science,” and “junk science.”[3] While it is legitimate to want to protect the integrity of scientific findings, more often than not, these policy debates center on issues that science must inform, but cannot decide.

No one is immune to the temptation to spin science to advance a pre-determined policy goal. However, masquerading policy preferences as "science" can be extremely harmful. At its worst, it leads scientists and policymakers to work, wittingly or unwittingly, in an unholy alliance that supports harmful political preferences in the name of "science." Perhaps the most notorious example in the United States is the extent to which some scientists in the 19th century declared certain human races inherently "inferior." This "evidence" was, in turn, used by politicians to justify, and defend, race-based slavery.[4] Fortunately, the costs of "politicized science" in the United States today are less severe than mass human enslavement, but they can still have significant adverse effects on public policies as well as diminish the integrity of scientific advice.

While there is extensive media coverage of "politicized science" related to public disagreements regarding regulatory issues that have a strong scientific component, such as genetically-modified organisms or climate change, the examination of how science may be politicized inside federal regulatory decision-making processes has been largely limited to academia and the scientific community.[5] In particular, while attempts by advocates of policies to improperly shape science have been widely presented in the media, in everything from mainstream news reports[6] to the AMC series Mad Men,[7] there has been much less examination of the role of scientists improperly attempting to shape policy decisions. Yet the latter problem can be just as serious. As former Assistant Administrator of the US Environmental Protection Agency, Milton Russell, has noted, while government scientists need to be protected from "influence over what they find and report," "policy-makers must be protected from policy analysts or scientists telling them what they should decide, but open to information about what the consequences of alternative decisions are likely to be."[8]

This paper examines two types of politicized science that can infect policymaking inside regulatory agencies. The first is when scientists, intentionally or unintentionally, insert, but do not disclose, their own policy preferences in the scientific advice they provide government decision-makers. Such "hidden policy judgments" are a form of "advocacy science."[9] The second is when scientists and/or policymakers conflate scientific information and nonscientific judgments to make a policy choice, but then present that decision as being solely based on science. It is this tendency to "camouflag[e] controversial policy decisions as science" that Wagner called a "science charade,"[10] and it can be particularly pernicious. For instance, a 2009 Bipartisan Policy Center (BPC) report, Improving the Use of Science in Regulatory Policy, concluded that "a tendency to frame regulatory issues as debates solely about science, regardless of the actual subject in dispute, is at the root of the stalemate and acrimony all too present in the regulatory system today."[11] Both of these problems, hidden policy judgments and the science charade, can be the result of officials falling prey to the "is-ought fallacy": incorrectly mixing up positive information about what "is" with normative advice about what "ought to be."

This paper focuses on the problems of hidden policy judgments and the science charade inside federal regulatory agencies. It examines why these are problems, the institutional incentives that contribute to them, and possible remedies. After describing what we mean by hidden policy judgments and the science charade, and describing the “is-ought fallacy,” we illustrate these problems by examining the incentives and behavior of participants in the development of national ambient air quality standards (NAAQS) under the Clean Air Act.[12] The paper concludes with ten recommendations for changing those incentives.

[3]     See, for example, Jason Scott Johnston, ed., Institutions and Incentives in Regulatory Science, Lexington Books (2012).

[4]     See, for instance, the work of anthropologist Henry Hotze on behalf of the Confederate States of America in Lonnie A. Burnett, Henry Hotze: Confederate Propagandist, University of Alabama Press: Tuscaloosa, AL (2008).

[5]     See, for instance, Jake C. Rice, “Food for Thought: Advocacy science and fisheries decision-making,” ICES Journal of Marine Science, 68(10) (2011), pp. 2007-2012.

[6]     See, for instance, a discussion of how politicians from both major parties attempt to spin science in Sheryl Gay Stolberg, “Obama Puts His Own Spin on Mix of Science with Politics,” The New York Times, March 9, 2009.

[7]     See, for instance, the discussion of the manipulation of the public regarding the health effects of tobacco on behalf of tobacco companies in “Smoke Gets in Your Eyes.” Mad Men: Season One. Writ. Matthew Weiner. Dir. Alan Taylor. AMC, 2007.

[8]     Milton Russell, “Lessons from NAPAP,” Ecological Applications, 2(2), 1992, p. 108.

[9]     “Advocacy science” is an elusive term and can, for instance, include the activity of scientists seeking more federal funding for research. For the purposes of this paper the term is defined as when a policy preference is presented in the form of scientific advice. For a discussion of advocacy science see Deborah Runkle, Mark S. Frankel ed., “Advocacy in Science: Summary of a Workshop convened by the American Association for the Advancement of Science,” 1 May 2012, pp. 2-3.

[10]     Wendy E. Wagner, "The Science Charade in Toxic Risk Regulation," Columbia Law Review, 95(7) (November 1995), p. 1614.

[11]     Bipartisan Policy Center, Improving the Use of Science in Regulatory Policy, Washington, DC: Bipartisan Policy Center (2009), p. 10.