Decision Analysis, Muddling-Through, and Machine Learning for Managing Large-Scale Uncertain Risks

Adapting Policy Analysis for Uncertain Futures

By Louis Anthony Cox, Jr.

March 08, 2019


Download the working paper (PDF)


When is a comprehensive decision analysis the most effective way to choose what to do next, and when might less knowledge-intensive processes, especially learning as one goes, be more useful? Managing large-scale, geographically distributed, long-term risks poses formidable challenges for any theory of effective social decision-making. Such risks arise from diverse underlying causes, ranging from poverty to underinvestment in protection against natural hazards to failures of critical infrastructure networks and vulnerable sociotechnical, economic, and financial systems. Different affected organizations, populations, communities, individuals, and thought leaders can perceive risks, opportunities, and desirable responses to them very differently. Participants may have different and rapidly evolving local information, goals, and priorities; perceive different opportunities and urgencies for action at any time; and be differently aware of how their actions affect one another through side effects and externalities.

Six decades ago, political economist Charles Lindblom argued that theories of "rational-comprehensive decision-making," such as decision analysis and statistical decision theory, were utterly impracticable for guiding policies in such realistically complex situations. He instead proposed incremental learning and improvement, or "muddling through," as both a positive and a normative theory of bureaucratic decision-making. But sparse, delayed, uncertain, and incomplete feedback undermines the effectiveness of collective learning while muddling through, even if all participants' incentives are aligned; it is no panacea.

We consider how recent insights from machine learning, especially deep multi-agent reinforcement learning, can be used to formalize several aspects of muddling through. These insights suggest principles for improving human organizational decision-making. Deep learning principles adapted for human use can not only help participants at different levels of government or control hierarchies manage some large-scale distributed risks better; they also show how rational-comprehensive decision analysis and incremental learning and improvement can be reconciled and synthesized, making it unnecessary to choose between them.
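To make the contrast concrete, muddling through can be caricatured in reinforcement-learning terms as incremental value updating under noisy feedback, rather than one-shot comprehensive optimization. The sketch below is a generic illustration, not the paper's model: an epsilon-greedy learner repeatedly tries one of several hypothetical policy options, observes a noisy payoff for only the option tried, and nudges its running estimate toward what it observed. The option names, payoffs, and parameters are all illustrative assumptions.

```python
import random

def muddle_through(true_means, n_steps=5000, epsilon=0.1, seed=0):
    """Caricature of 'muddling through' as epsilon-greedy incremental learning.

    true_means: hypothetical (unknown-to-the-learner) mean payoff of each
    policy option. Feedback is sparse (one option per step) and noisy.
    """
    rng = random.Random(seed)
    n = len(true_means)
    estimates = [0.0] * n   # running value estimates
    counts = [0] * n        # times each option was tried
    for _ in range(n_steps):
        # Mostly exploit the current best estimate; occasionally explore.
        if rng.random() < epsilon:
            a = rng.randrange(n)
        else:
            a = max(range(n), key=lambda i: estimates[i])
        # Noisy feedback on the chosen option only.
        reward = true_means[a] + rng.gauss(0, 1)
        counts[a] += 1
        # Incremental update: small step toward the observed outcome.
        estimates[a] += (reward - estimates[a]) / counts[a]
    return estimates, counts

# Three hypothetical policy options; option 2 has the best true payoff.
est, cnt = muddle_through([0.2, 0.5, 1.0])
```

With enough steps and honest feedback the learner concentrates effort on the best option; with sparser, more delayed, or more biased feedback than assumed here, convergence slows or fails, which is exactly the limitation of muddling through that the abstract notes.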
