On January 30, the General Services Administration (GSA) hosted a public meeting on mass and fake comments. The meeting was the first of two public meetings convened as part of GSA’s recent initiative to modernize Electronic Rulemaking Management. The initiative is intended to better integrate data and information technology across federal regulatory information systems; promote public access, accountability, and transparency; and reduce duplication and increase efficiency.
Panelists from agencies, universities, and the private sector discussed the problems and impact of mass and fake comments in rulemaking. Through the meeting, GSA initiated a dialogue with the public to inform its strategy for dealing with mass and fake comments. While GSA’s initiative is a welcome step, Thursday’s discussions raise several fundamental questions for us to consider.
What are mass and fake comments?
Although the GSA meeting covered both mass and fake comments, the two are distinct. As GW Regulatory Studies Center Senior Scholar and Political Science Professor Steve Balla explained in his remarks and in a recent paper, mass comments generally refer to “collections of identical and near-duplicate comments sponsored by organizations and submitted by group members and supporters.” Although the majority of proposed rules receive only a modest number of public comments, mass comment campaigns are not rare. In one extreme case, the Federal Communications Commission’s 2017 proposal on net neutrality received nearly 22 million comments through mass comment campaigns, generating massive media and public attention.
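To make the “identical and near-duplicate” notion concrete, here is a minimal sketch of how near-duplicate comments might be flagged with plain text similarity. This is a hypothetical illustration only, not any agency’s actual method: the function names, the 0.9 threshold, and the example texts are all assumptions for the sketch.

```python
import difflib

def near_duplicate_ratio(a: str, b: str) -> float:
    """Similarity between two comment texts, from 0.0 to 1.0 (identical)."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_campaign_comments(comments, template, threshold=0.9):
    """Return the comments whose text closely matches a campaign template."""
    return [c for c in comments if near_duplicate_ratio(c, template) >= threshold]

# Hypothetical example: two near-duplicates of a campaign template and
# one substantive, unrelated comment.
template = "I oppose the proposed rule because it harms net neutrality."
comments = [
    "I oppose the proposed rule because it harms net neutrality.",
    "I OPPOSE the proposed rule because it harms net neutrality!",
    "The economic analysis underestimates small-business compliance costs.",
]
print(flag_campaign_comments(comments, template))  # flags the first two
```

Real campaign detection would be more sophisticated (and would also need to distinguish sponsored duplicates from independently similar comments), but even this toy version shows why near-duplicates are easy to group while bot-generated variations, discussed below, are harder.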
Relative to mass comments, less is known about the occurrence and nature of fake comments. Tobias Schroeder, Director of the eRulemaking Program, mentioned that there is only a “working definition” of fake comments, which generally covers comments submitted with fraudulent attribution or identity. Mass comments, which may duplicate one another in content but are submitted by real individuals or organizations, do not fall under this definition. One special type of fake comment that emerged from the panel discussions is comments generated by automated programs, or bots. Sanith Wijesinghe, Innovation Area Lead at the MITRE Corporation, called these “deep fake comments,” since many are indistinguishable from human-written comments.
Do mass comment campaigns affect rulemaking?
While recent mass comment campaigns have attracted substantial attention, our knowledge about their actual impact on agency rulemaking remains scattered. Oliver Sherouse, Regulatory Economist at the Office of Advocacy within the Small Business Administration, pointed out that mass comment campaigns may drown out comments from small businesses with limited resources and capacity. Reeve Bull, Research Director of the Administrative Conference of the United States, warned that mass comment campaigns, despite the large number of comments they usually generate, tend to express unrepresentative views of the public.
The concerns about the negative impact of mass comment campaigns are valid, but there is little empirical evidence showing the frequency or magnitude of their harm to regulatory outcomes. Balla offered one of the few available empirical studies. After reviewing 1,000 mass comment campaigns in Environmental Protection Agency (EPA) rulemakings, Balla and his coauthors found that mass comment campaigns generally elicited a limited degree of procedural responsiveness from EPA: the agency responded to the vast majority of campaigns but incorporated few of the changes requested in the comments into its final rules. In his remarks, Balla concluded that mass comment campaigns are “a form of participation in rulemaking exercised by a variety of interested parties” rather than “an abuse of rulemaking process.” Despite the often striking number of submissions these campaigns generate, regulatory agencies seem to have processed mass comments appropriately. As my former colleague Aryamala Prasad observed, quality matters more than quantity: the influence of a mass comment campaign on regulatory outcomes “depends on whether it provides additional evidence to the government.”
Are fake comments spam?
A greater degree of uncertainty surrounds fake comments submitted with nonexistent or stolen identities. Identity theft raises legal issues, but whether this type of “fake” comment should be treated as spam in rulemaking needs careful consideration. Sherouse acknowledged that we lack data on what percentage of comments using nonexistent or stolen identities provide structured, substantive information and what percentage do not. As Wijesinghe noted in the meeting, people have valid reasons for choosing whether to reveal their identity in a comment, and a large number of comments are submitted with no identity at all (i.e., anonymous comments). Schroeder stressed that many agencies allow submission of anonymous comments, and each agency makes its own decision in that regard, balancing the tradeoff between encouraging truthful identification and maximizing public participation.
By comparison, panelists expressed stronger concerns about the risks associated with bot-driven comments. Wijesinghe indicated that machine-generated fake comments can threaten evidence-based policymaking, since they contain no human insight and may disrupt the notice-and-comment process. Michael Fitzpatrick, Head of Global Regulatory Affairs at Google, noted that bot-driven comments create perception problems, eroding trust in the agency and the government, and said that the problem “will get exponentially worse.” He suggested that new machine learning technologies can identify and prevent bot-generated comments while creating minimal friction in the submission process. However, how to incorporate these technologies into the regulatory process without discouraging legitimate participation remains an open question.
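The detection technologies Fitzpatrick alluded to were not described in detail. Purely as a hypothetical illustration of low-friction screening, one crude heuristic is to flag sources that submit comments faster than a human plausibly could. Everything below (the function name, the one-minute window, the five-comment limit, the example data) is an assumption for the sketch, not a description of any real system.

```python
from datetime import datetime, timedelta

def flag_high_rate_sources(submissions, window=timedelta(minutes=1), limit=5):
    """Flag any source that submits more than `limit` comments inside `window`.

    `submissions` is a list of (source_id, timestamp) pairs; in a real system
    the source_id might be an IP address or a session fingerprint.
    """
    by_source = {}
    for source, ts in submissions:
        by_source.setdefault(source, []).append(ts)

    flagged = set()
    for source, times in by_source.items():
        times.sort()
        for i, start in enumerate(times):
            # Count submissions in the window opening at `start`.
            count = sum(1 for t in times[i:] if t - start <= window)
            if count > limit:
                flagged.add(source)
                break
    return flagged

# Hypothetical data: "burst" posts 7 comments in 30 seconds; "human" posts
# 2 comments ten minutes apart.
base = datetime(2020, 1, 30, 12, 0)
submissions = [("burst", base + timedelta(seconds=5 * i)) for i in range(7)]
submissions += [("human", base), ("human", base + timedelta(minutes=10))]
print(flag_high_rate_sources(submissions))  # {'burst'}
```

A rate heuristic alone is easily evaded and risks false positives (e.g., a shared office network), which is exactly the friction-versus-accuracy tradeoff the panelists raised.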
The GSA meeting revealed a surprising fact: we still don’t know enough about mass and fake comments. There is only a working definition of fake comments, scattered knowledge about the actual impact of mass comment campaigns on agency rulemaking, and even more limited understanding of what fake comments mean for the regulatory process. Knowing that technologies are available to identify and prevent mass or fake comments is reassuring, but how to deal with those comments is more a policy question than a technical one. As GSA works to improve federal regulatory information systems, more empirical research is needed to inform agency action on the nature and impact of mass and fake comments.