Abstract
To offer some insight into the question of how Artificial Intelligence should be regulated, this essay looks at experience with regulating past novel technologies—commercial flight, biotechnology, and the internet. These case studies help inform some preliminary lessons that may be applicable to other emerging technologies, including AI.
Introduction
Ever since ChatGPT burst onto the scene in November 2022, the potential risks, as well as the promise, of generative AI have commanded the attention of policy officials and the public. Within weeks of its launch, a Guardian editorial warned that “AI’s potential for harm should not be underestimated,” and called for regulation to keep people safe. In March 2023, the Future of Life Institute issued an open letter, signed by some tech industry CEOs, academics, and others, warning that “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” and calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
The European Union was the first jurisdiction to take comprehensive regulatory action. In March 2024, it passed the EU Artificial Intelligence Act, which establishes EU-wide rules on data quality, transparency, human oversight, and accountability. The Act classifies AI systems into four risk categories, each facing different restrictions, ranging from outright prohibition for systems deemed to present “unacceptable” risk to no regulation for those considered “minimal” risk.
At the end of October 2023, President Biden issued Executive Order (E.O.) 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” At 36 pages of fine print, it is among the longest executive orders ever issued. Among other things, it directs agencies to issue regulations within specified time frames and requires developers of the most powerful AI systems to share safety test results and other critical information with the U.S. government. In addition to addressing privacy concerns, it calls for policies to advance equity and civil rights, stand up for consumers, patients, and students, and support workers and collective bargaining. President-elect Trump has promised to revoke the order when he takes office in January 2025.
How should regulators think about AI? Generative AI certainly presents risks, ranging from deepfakes, plagiarism, and falsehoods presented as convincing facts (ABA 2023) to a technological singularity scenario in which machine intelligence surpasses that of humans. Are these risks so novel and unique that they require an entirely new regulatory framework, or can existing principles, practices, and regulatory authorities address at least some of these concerns?
To offer some insight into this question, this essay looks at experience with regulating past novel technologies—commercial flight, biotechnology, and the internet. These case studies help inform some preliminary lessons that may be applicable to other emerging technologies, including AI.