In brief ...
At a recent conference co-hosted by the Regulatory Studies Center and Norm Ai, experts from the legal community discussed how AI is transforming business and the regulatory challenges that come with it. The speakers emphasized a harm-based, technology-neutral framework and warned against a state-by-state approach that could hinder innovation in the industry.
On July 8, the Regulatory Studies Center co-hosted a conference with Norm Ai on the use of artificial intelligence (AI) technologies in the regulatory process. The final panel of the day featured a fireside chat in which Norm Ai CEO John Nay, PhD, interviewed U.S. Chamber of Commerce Senior Vice President of Technology Engagement Jordan Crenshaw. The discussion focused on how AI is transforming business and on the importance of establishing governance structures that address the potential harms of AI while ensuring that the industry and adopters of AI are not overburdened in ways that make innovation and adoption difficult.
The discussion began with Dr. Nay posing the question: How are companies using AI technology, and how is it benefiting small businesses? Crenshaw highlighted that new technologies are being used by both large and small companies. Large companies have been both developing general-purpose foundation models (e.g., OpenAI, Anthropic) and deploying vertical AI technologies (e.g., Norm Ai, Aidoc) that improve efficiency in fields such as radiology or assist with legal compliance tasks. Smaller companies have used the technology to expand their capabilities without hiring additional staff, fulfilling tasks such as design, coding, and customer outreach. Crenshaw noted that these technologies have been particularly valuable following the recent introduction of new tariffs: small businesses have used them to restructure their supply chains and lower tariff exposure by sourcing products from countries of origin subject to lower tariff rates.
The speakers noted that the adoption of AI, particularly by small businesses, is becoming increasingly widespread. Mr. Crenshaw cited a 2024 survey from the U.S. Chamber of Commerce, which found that 40% of small businesses use AI regularly, nearly double the share from the prior year, while 77% reported that they plan to adopt such emerging technologies. A majority of survey respondents agreed that adopting such tools helped their businesses build stronger relationships with customers and access higher-quality talent, among other performance improvements.
The speakers discussed how the changing regulatory landscape may affect U.S. businesses' capacity both to adopt AI and to innovate. Multiple states now have privacy or AI-specific rules, creating a patchwork regulatory framework. Mr. Crenshaw noted that such a patchwork can disproportionately harm small businesses, which may lack the capital to implement compliance systems across all states. He suggested two possible solutions. First, a national privacy standard with clear harm-based guardrails would ensure that states do not need to establish their own laws, which may be uneven and difficult to comply with. Second, state rules could include a carveout for small businesses, so that firms facing steeper relative compliance costs are not deterred from introducing or using new technologies across states.
Mr. Crenshaw also offered several high-level ideas for establishing business-friendly regulatory frameworks. Echoing earlier speakers, Nay and Crenshaw noted that a harm- or outcomes-based approach could be technology-neutral.
Crenshaw underscored the importance of not defining entire sectors as inherently high risk, contrasting the use of AI in employment decisions with its use in assigning rides on ridesharing platforms such as Uber. Relying exclusively on an AI algorithm to hire individuals may impose significant costs on applicants, whether hired or not, if the algorithm uses biased data or relies too heavily on keyword matching; it would therefore make sense to limit its use in that setting. By contrast, outlawing the use of AI in ridesharing apps would mean dispatching individual drivers to riders manually, at a high cost to the business. Crenshaw argued that the juxtaposition of these two applications underscores the need to target regulations based on potential costs and benefits, rather than regulating broad categories of the technology itself. He emphasized that this approach lends itself well to using existing laws around fraud, discrimination, and child protection, where possible, to ensure consumer safety; where there are genuine AI-specific gaps (e.g., deepfakes), new laws should be established.
Crenshaw closed by predicting that states will act quickly on new state-level laws. With the failure of the federal moratorium, states are likely to rush to pass their own measures, increasing the risk of a patchwork framework of AI regulations.