In brief...
A federal moratorium on state-level policies regulating artificial intelligence was pitched as a way to ensure consistency, prevent legal fragmentation, and allow Congress time to develop a comprehensive national framework; its swift, bipartisan collapse in the Senate instead revealed deeper tensions over who should set the rules for AI governance in the United States.
For a brief moment earlier this year, it appeared likely that Washington would halt state-level efforts to regulate artificial intelligence (AI). The rationale was straightforward: ensure consistency, prevent legal fragmentation, and allow Congress time to develop a comprehensive national framework. The proposal was welcomed by segments of the tech industry and gained momentum quickly. But just as swiftly, it was withdrawn. While the proposal’s stated aim was to harmonize legal standards and support coordinated policymaking, its abrupt collapse revealed deeper tensions within the evolving landscape of AI governance in the United States.
A 10-year moratorium on state AI laws was initially a central feature of the One Big Beautiful Bill Act, a legislative package covering a wide range of policy areas. The House approved the 10-year version in May 2025, while a five-year Senate compromise surfaced later but failed to gain traction. On July 1, the Senate voted 99–1 to strike the provision. The reversal was notable not only for its speed but also for its bipartisan nature. The collapse of the moratorium proposal signaled growing discomfort with the idea of centralizing authority over AI regulation, especially when state-level action has played a significant role in addressing early risks associated with emerging technologies.
So, what happened? And why does this matter beyond the fate of a single clause in a sprawling spending bill?
At the heart of the debate was a larger question: who should shape the rules for a rapidly evolving technology like AI? Supporters of the moratorium, including the U.S. Chamber of Commerce and the National Small Business Association, argued that national consistency was key, both for companies seeking predictable compliance standards and for courts trying to apply a coherent body of law. Critics pushed back, among them 17 Republican governors, a coalition of 40 state attorneys general who urged Congress to let states protect their residents from AI risks, and civil society organizations such as the Electronic Privacy Information Center. They warned that national consistency should not come at the cost of meaningful safeguards, arguing that state and local governments often provide the first guardrails against emerging risks, from facial recognition in schools to algorithms used in hiring or housing. Because these tools are deployed in specific communities, local officials are often the first to hear complaints, see patterns of harm, and respond, a dynamic documented in Virginia Eubanks’ “Automating Inequality,” which shows how municipal technology decisions can profoundly shape access to welfare, housing, and child services. Critics stressed that a federal moratorium would override these kinds of state and local protections, removing some of the only guardrails currently in place.
Several key state-level laws have already shaped the national conversation around AI and data privacy. Illinois’s Biometric Information Privacy Act, passed in 2008, requires explicit consent for the collection of biometric data and has served as the basis for landmark legal actions, including a $650 million settlement with Facebook over facial-recognition tagging carried out without user permission and American Civil Liberties Union v. Clearview AI, a suit over the scraping of billions of photos from the internet to build a facial recognition database sold to law enforcement. In California, the Consumer Privacy Act, passed in 2018, established foundational rights for individuals regarding data access, deletion, and opt-out options, reforms that have influenced business practices across the country. More recently, in May 2024, Colorado enacted the Colorado Artificial Intelligence Act, which requires risk assessments and transparency for high-risk AI applications in areas such as employment, healthcare, and housing. While none of these laws is without limitation, each has helped raise public awareness, shape business practices, and provide policymakers with early models for regulating emerging technologies.
Together, these examples highlight why state action has become such a flashpoint in the federal debate: they demonstrate both the capacity of states to pioneer protections and the tension that arises when those protections collide with calls for uniform national rules. One major concern raised by opponents of the moratorium was that a federal freeze, imposed before any national framework was in place, would strip states of the ability to act independently to protect their residents. For example, Tennessee’s Ensuring Likeness, Voice, and Image Security Act, enacted in March 2024, was the first state law to protect music artists from unauthorized AI-generated voice cloning, a measure that might never have materialized under such a moratorium. In that sense, the conversation was not simply about AI but about federalism itself: how to balance national oversight with local authority in a policy domain characterized by both rapid innovation and high societal impact.
Why the Senate Rejected the Moratorium
Despite months of lobbying, the moratorium collapsed in the Senate with a near-unanimous 99–1 vote. Many viewed the provision as industry-driven, with groups like the Chamber of Commerce backing it in the name of regulatory consistency, while opponents argued it would strip states of tools to address AI harms. Lawmakers noted that states were already taking action in areas ranging from privacy to AI-generated voice cloning, and they were reluctant to undercut those efforts. Even Sen. Marsha Blackburn (R-TN), who had initially supported the moratorium, reversed course, warning it could weaken protections for children, creators, and conservative groups. She emphasized that federally preemptive legislation, such as the proposed Kids Online Safety Act and a national privacy framework, would need to be in place before states could justifiably be blocked from regulating on their own.
Rather than resolving these tensions, the moratorium debate reframed them. The focus is no longer solely on whether states should have a role in AI regulation, but on how to design multi-level governance structures that are adaptable, coordinated, and evidence-informed. State governments play a dual role: they are both laboratories of democracy, as seen in biometric privacy and AI-voice cloning laws, and obstacles to national consistency. Businesses have long argued that California’s Consumer Privacy Act created heavy compliance burdens, and recent reporting shows how conflicting state consent standards and a patchwork of privacy laws have made it harder to navigate nationwide obligations. Indeed, critics argue that California’s market power allows it to set de facto national standards, effectively binding companies and consumers in other states without accountability to those voters, a dynamic that fuels calls for Congress to provide uniform federal rules.
Meanwhile, the pace of AI development continues to accelerate. Startups are deploying health diagnostic tools supported by international institutions such as the International Finance Corporation, while large technology companies are investing heavily in generative AI applications for medicine, law, education, and finance, including systems described as “ChatGPT for doctors.” In such a dynamic environment, regulatory systems must remain adaptable, with space for experimentation and guardrails that can evolve as risks and uses emerge. Rather than pausing local innovation in favor of central planning, policymakers may benefit from learning from the diversity of state-level responses already underway.
Although the Senate’s rejection of the moratorium was decisive, the broader discussion is ongoing. Recent commentary in The Hill suggests that similar proposals could reemerge; Neil Chilson of the Abundance Institute, for example, has observed that the moratorium could return, this time paired with a broader federal AI governance framework. Whether such efforts are framed around efficiency, innovation, or consumer protection, the underlying questions around regulatory authority are likely to persist.
Ultimately, the debate reflects a growing recognition that AI regulation, like the technology itself, requires models of governance that are responsive, distributed, and capable of evolving in tandem with the systems they aim to guide.