Considerations for Artificial Intelligence in Government

Summary of panel discussion at July 2025 conference co-hosted by RSC and Norm Ai
July 29, 2025

In brief ...

At a conference co-hosted by the Regulatory Studies Center and Norm Ai, speakers discussed challenges and opportunities for artificial intelligence to streamline regulatory compliance burdens and improve stakeholder experiences with government services. This commentary summarizes the panel, “AI’s Role in Regulation Post-Chevron,” featuring Dan Berkovitz, Cary Coglianese, and Troy Paredes.

On July 8, 2025, the George Washington Regulatory Studies Center and Norm Ai hosted an event titled Can AI Streamline Regulation and Reduce Compliance Burdens? The event convened academics, practitioners, and current and former government officials to discuss the role of artificial intelligence (AI) technologies in government, and the role of government in regulating the use of AI. The first panel featured moderator Troy Paredes, former commissioner of the Securities and Exchange Commission (SEC), and panelists Cary Coglianese, professor of law and director of the Penn Program on Regulation at the University of Pennsylvania, and Dan Berkovitz, former commissioner of the Commodity Futures Trading Commission (CFTC) and former SEC general counsel. The panelists discussed a number of topics, including the importance of the words we use to talk about AI; the role for AI in government; the opportunities and challenges of using AI in government; and considerations for expanding AI’s role in government. This commentary summarizes the discussion on each of these themes.

How We Talk About AI

At the beginning of the panel, Coglianese noted that there is no single type of AI. Generative AI—such as ChatGPT—is distinct from more “traditional” AI models that are trained to perform a specific task, like analyzing medical imaging to detect abnormalities. Other types of technologies, such as natural language processing and machine learning, also fall under the broader umbrella of AI. And these types of models are themselves distinct from various technologies that use AI, like autonomous vehicles. Altogether, Coglianese noted that this heterogeneity necessitates flexible regulatory approaches and, in some cases, different rules for different uses of AI.

Coglianese also noted that “guardrails” may not be the most appropriate metaphor for AI regulations; instead, he recommended the concept of “leashes.” Guardrails are fixed barriers along well-defined pathways. As such, guardrails are an imperfect metaphor, according to Coglianese, because we do not yet know where AI technology is going. Policies that establish strict guardrails may restrict AI before we have uncovered its many uses and benefits. Leashes, on the other hand, when used to constrain other non-human forms of intelligence—such as dogs—allow for flexibility to explore new territories without following a prescribed path, all with a human at the other end of the leash to supervise and correct. When applied to AI, regulatory equivalents to leashes allow for broader creativity and exploration, Coglianese explained, without the risk of overly restricting new technology before exploring what it can accomplish.

Role of AI in Government

During the panel, Coglianese posed two key questions: “What is the role for AI in government?” And “Compared to what?” Rather than assess AI in a vacuum, Coglianese encouraged the audience to evaluate AI against the status quo. Government decisionmaking is notoriously slow and sometimes biased, he said. Regarding speed of decisionmaking, Coglianese described a letter he recently received from an incarcerated person who had read about Coglianese’s work on AI and the courts and wanted to share that they would accept the use of AI to resolve their case, just to get a decision made in a timely manner. Beyond that example, Coglianese suggested that the public more generally may come to accept, and even demand, governmental decision-makers’ use of AI tools as AI becomes widely used throughout the private sector. AI adoption across private industry may well set the standard, and the public may hold the government to that standard to improve efficiency and responsiveness.

AI might also improve equity in outcomes in addition to efficiency, Coglianese noted. He discussed Social Security Disability Insurance benefit adjudications. He reported on research showing that there can be “wide variation in administrative law judges’ decisionmaking, even within the same office” that leads to inconsistent outcomes for potential program beneficiaries. Coglianese suggested that a “transparent, well-calibrated, well-validated, and well-monitored algorithm” could reduce the effects of human cognitive biases and favoritism in various types of governmental decisionmaking processes and could yield more equitable outcomes for members of the public.

Challenges to Using AI in Government

One key consideration that some expressed during the discussion is that efficiency is not the sole objective of government. During his time as a CFTC commissioner, Berkovitz worked to issue regulations implementing the Dodd-Frank Act. Many individuals who would be affected by those rules wanted to talk to someone in the government about the rulemaking and were not satisfied to simply send a letter. Berkovitz pointed out that the First Amendment guarantees the freedom to petition the government; it remains essential for the government to have someone who listens.

Paredes noted that the government is required to engage in “reasoned decisionmaking” to comport with the Administrative Procedure Act, quite apart from its being good practice. Based on his time as a commissioner at the SEC, he indicated that even when there is rigorous consideration and analysis, people may still reach very different conclusions, in part because people often make different tradeoffs. An AI model could help provide the kinds of inputs, such as assessing tremendous amounts of information and performing complex scenario analysis, that could inform the regulatory process and policymakers’ judgments.

Panelists also raised more practical questions about the challenges of implementing AI in government. Knowing that data and training are essential for the development of AI models, Paredes raised the question of what happens when policy preferences and values shift and governmental goals and priorities change. Berkovitz also wondered if the results of an AI model would need to be published in the administrative record. In Berkovitz’s experience, commissioners reach decisions by discussing the issues with staff and drawing conclusions based on that input. Those conversations are not typically part of the administrative record. If commissioners were to begin using AI to reach those types of decisions, must the commissioner’s AI queries be published? Could parties who do not like the results of a rulemaking challenge it based on which parties funded the AI model and had input into its training? These questions illustrate how involving AI in an agency’s decisionmaking process could affect public perceptions of the integrity of the process.

Current Uses and Considerations for Expanding Use of AI in Government

The government’s use of AI has expanded greatly over the past several years, according to Coglianese. In 2020, the Administrative Conference of the United States published a report that identified 157 uses of AI in 64 agencies. At the event, Coglianese described a database he is developing that includes over 3,000 uses of AI across the federal government—not including individual employees’ nonsystematic use of new large language model tools.

As the government’s use of AI continues to grow, Coglianese and Berkovitz both acknowledged the current climate of distrust in institutions and experts. In the current environment, Berkovitz noted that the public may not trust experts to implement AI systems in government. Coglianese highlighted the importance of taking a “trust but verify” approach to the government’s use of AI: while these tools can be incredibly powerful and offer many benefits, they must be paired with real knowledge, careful oversight, and sound human judgment.

Paredes noted that government must balance promoting innovation and rewarding AI development and deployment against managing the risks AI poses. At the same time, maintaining the status quo and failing to adapt is also a risk. Paredes suggested that as AI technology continues to develop, government cannot simply sit back and wait to implement AI.