Improving the Use of Science in Rulemaking


Regulatory Science and Policy: A Case Study of the National Ambient Air Quality Standards

September 9, 2015

By Susan E. Dudley
Effective environmental policy depends on reliable scientific information and transparent policy choices; it is challenged not only when science is politicized, but also when policy is “scientized.” This paper suggests that current practices scientize policy and threaten not only regulatory outcomes, but the credibility of the scientific process. Using a case study of the procedures by which the Environmental Protection Agency sets National Ambient Air Quality Standards under the Clean Air Act, it illustrates some of the perverse incentives involved in developing regulations, and offers possible mechanisms to improve those incentives and resulting policy.


Improving Weight of Evidence Approaches to Chemical Evaluations

December 16, 2014

By Randall Lutter et al.
Federal and other regulatory agencies often use or claim to use a weight of evidence (WoE) approach in chemical evaluation. Their approaches to the use of WoE, however, differ significantly, rely heavily on subjective professional judgment, and merit improvement. We review uses of WoE approaches in key articles in the peer-reviewed scientific literature, and find significant variations. We find that a hypothesis-based WoE approach, developed by Lorenz Rhomberg et al., can provide a stronger scientific basis for chemical assessment while improving transparency and preserving the appropriate scope of professional judgment. Their approach, while still evolving, relies on the explicit specification of the hypothesized basis for using the information at hand to infer the ability of an agent to cause human health impacts or, more broadly, affect other endpoints of concern. We describe and endorse such a hypothesis-based WoE approach to chemical evaluation.


Improving Causal Inferences in Risk Analysis

August 24, 2012

By Tony Cox
Recent headlines and scientific articles projecting significant human health benefits from changes in exposures too often depend on unvalidated subjective expert judgments and modeling assumptions, especially about the causal interpretation of statistical associations. Some of these assessments are demonstrably biased toward false positives and inflated effect estimates. More objective, data-driven methods of causal analysis are available to risk analysts. These can help reduce bias and increase the credibility and realism of health effects risk assessments and causal claims.