Are Chemical Risk Assessment and Benefit-Cost Analysis Compatible?

by Brian Mannix, Research Professor

January 06, 2016

Executive Order 12866 requires benefit-cost analyses for all economically significant regulations; in many cases these economic analyses rely upon risk assessment for critical inputs.  Usually this is not a problem; in principle, risk assessment and benefit-cost analysis are perfectly compatible.  For example, FEMA requires risk assessment and benefit-cost analysis on projects that mitigate the damage from natural hazards – floods, storms, and earthquakes.  The agency uses benefit-cost modules that have probabilistic risk assessments built in, complete with damage functions that are the analogue of dose-response curves in chemical risk assessments.  FEMA has used these modules successfully for 25 years to ensure that hazard mitigation funds are spent cost-effectively.  In many other fields, too, benefit-cost analysis and risk assessment work hand-in-glove to inform decisions.

But benefit-cost analysis and chemical risk assessment have not had such a happy history together.  The problem can be traced to some specific practices that historically have characterized chemical risk assessments, and that are widely accepted within that community.

Valuing Risk Reduction in Benefit-Cost Analysis

To understand the conflict, consider the framework that economists use to assign a value to mortality risk reductions in a benefit-cost analysis.  The Value of Statistical Life (VSL) represents the public’s revealed “willingness to pay” for risk reduction (or to be compensated for a risk increase).  This approximates the magnitude of what, in price theory, economists call a substitution effect: if risk reduction gets too expensive, people buy less of it, and more of other desirable things.  There is another metric that associates a dollar value with a statistical life; it is generally called the health-wealth effect, and it is an example of what economists call an income effect: if consumers’ incomes go down, they will buy less of everything, including risk reduction.  In the economy as a whole, if we spend a billion dollars’ worth of resources on anything (say, a space launch), then that’s a billion dollars less that will be spent on everything else in the consumers’ market basket – including risk reduction.

There is a large literature on both of these effects, and a range of dollar estimates for the magnitude of each.  Without getting into the details of these estimates, the interesting thing to note is that the two tend not to be very far apart.  For most consumer goods, the income effect is likely to be only a tiny fraction of the substitution effect; but this is not true of mortality risk reduction, where the two effects are within an order of magnitude of each other.  So, for example, some studies estimate the VSL at approximately $9 million per statistical life, while other studies (using different methodologies) put the health-wealth effect at about $25 million per statistical life.[1]  This is interesting, because it means that risk reduction is a rather important component of the consumer’s market basket.  It is not a separate item that we buy, but it is a significant component of many of the things we do spend a lot of money on: houses, cars, food, education, and health care.
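One way to see that the two effects are “within an order of magnitude” is to express each as lives per dollar, as in footnote [1].  The sketch below uses only the illustrative figures cited above ($9 million VSL, $25 million health-wealth effect) and a hypothetical $1 billion of spending; none of the numbers is a new estimate:

```python
# Illustrative figures from the text; the $1 billion of spending is hypothetical.
VSL = 9e6             # substitution effect: ~$9M willingness to pay per statistical life
HEALTH_WEALTH = 25e6  # income effect: ~$25M of forgone income per statistical death

spending = 1e9  # a hypothetical $1 billion diverted to risk reduction

# Expressed as quantities per dollar (the economists' convention in footnote [1]):
lives_demanded = spending / VSL                   # ~111 lives' worth of risk reduction demanded
lives_lost_to_income = spending / HEALTH_WEALTH   # 40 statistical deaths via the income effect

ratio = lives_demanded / lives_lost_to_income     # ~2.8 -- well within an order of magnitude
```

On these figures the substitution effect is larger than the income effect, but by less than a factor of three, which is what makes the income effect impossible to ignore.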

Overstating Risk May Kill People

This observation – that the two effects are not very far apart – has interesting consequences.  It means that if any particular risk is exaggerated, causing us to spend too much money in an effort to reduce it, we will quickly find that other risks are growing because of the income effect on consumers.  If we overestimate a particular risk by a little bit, then the error will cause a net reduction in consumer welfare – as is true of pretty much any error, in any direction, in a typical policy optimization.  But if we overestimate a particular mortality risk by a factor of ten or more, then the income effect will not only diminish the expected benefits; it will come to dominate the outcome.  That is, a risk reduction policy will end up killing more people than it saves!
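The flip from life-saving to life-taking can be sketched with the same two figures.  The $90 million regulation below is hypothetical, chosen only to make the arithmetic round; the tenfold overstatement is the scenario just described:

```python
VSL = 9e6             # ~$9M per statistical life (substitution effect)
HEALTH_WEALTH = 25e6  # ~$25M of lost income per statistical death (income effect)

cost = 90e6  # hypothetical regulation justified by an exaggerated risk estimate

apparent_lives_saved = cost / VSL   # 10.0 -- the rule looks exactly worth its cost
overstatement = 10                  # risk exaggerated by a single factor of ten
actual_lives_saved = apparent_lives_saved / overstatement  # 1.0

induced_deaths = cost / HEALTH_WEALTH            # 3.6, from the income effect alone
net_lives = actual_lives_saved - induced_deaths  # -2.6: more deaths than lives saved
```

With the risk overstated tenfold, the rule saves one statistical life while the income effect takes about three and a half.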

Chemical Risk Assessment Practices Overstate Risks

And herein lies a problem, because chemical risk assessors have been known to deploy factors of ten rather casually.  “Safety factors” (factors of ten) have been used to handle uncertainty in chemical risk assessments – for example, to account for the uncertainty that arises when extrapolating from an animal model to a human risk.  In any given risk assessment, two or three factors of ten might be multiplied together.  The rationale is that, in the presence of uncertainty, it is better to err in the direction of safety.  In some contexts, such a risk assessment approach might make sense.  But in the context of a benefit-cost analysis, used in support of a regulatory decision, it is a tragic mistake.  Even one such “safety factor” can be enough to change a regulation intended to reduce risk into one that increases it. 
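How big an exaggeration flips the sign?  A minimal sketch, again assuming the $9 million and $25 million figures from earlier: a rule priced right at the margin saves cost/(f × VSL) true lives when the risk is overstated by a factor f, while the spending induces cost/health-wealth deaths, so the break-even factor is just the ratio of the two dollar values:

```python
VSL = 9e6             # ~$9M per statistical life (substitution effect)
HEALTH_WEALTH = 25e6  # ~$25M of lost income per statistical death (income effect)

# Setting true lives saved, cost/(f*VSL), equal to induced deaths,
# cost/HEALTH_WEALTH, gives the break-even overstatement factor f:
break_even = HEALTH_WEALTH / VSL   # ~2.78

one_safety_factor = 10             # a single "safety factor" already exceeds it
stacked_factors = 10 * 10 * 10     # two or three compound multiplicatively
```

On these numbers any overstatement beyond roughly threefold makes a marginally justified rule a net killer, so one factor of ten is already far past the line, and stacked factors overshoot it by orders of magnitude.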

Moreover, it is not just the explicit safety factors that may inadvertently tilt a regulatory decision in a counterproductive direction.  Whenever a predictive model relies on a long chain of inferences, there will be many opportunities to introduce a bias in the prediction of risk.  Sometimes risk assessors may simply be designing their model to minimize false negatives, while paying a bit less attention to false positives.  At every step, people may think they are just being cautious – and erring on the side of safety.  But the cumulative result is just the opposite of what they intended.  The exaggeration of chemical risks actually kills people.  So economists tend to be especially wary of chemical risk assessment models and the results that they produce.  This will likely continue until chemical risk assessment methods are redesigned to be unbiased, and to be compatible with the techniques of benefit-cost analysis. 


[1] Note that, conventionally, economists express the substitution and income effects in terms of a quantity change per dollar of cost – i.e., the inverse of how we generally express the VSL and the health-wealth effect.  So a “large” dollar value per statistical life would be termed a “small” income effect.  The point of this paragraph is that, for mortality risk reduction, the income effect is smaller (i.e., larger in dollar value) than the substitution effect, but not by more than an order of magnitude.