President Trump wants to stamp out viewpoint-based speech restrictions[1] by social media platforms. His executive order on social media instructs the secretary of commerce to petition the Federal Communications Commission (FCC) for a rulemaking to address this issue. The main purpose of the rulemaking, as Adam White points out, is to “clarify” a provision in Section 230 of the Communications Decency Act of 1996 that immunizes providers of “interactive computer services” (a category that includes social media companies) from lawsuits if their removal of, or restriction of access to, content represents a good-faith effort to restrict material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”
The president presumably hopes for a regulation stating that social media companies could be sued if they restrict access to or remove political speech that is not obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable. Suppress your giggles for a moment as you try to recall any recent political speech that would not fit someone’s definition of these terms, especially the vague “otherwise objectionable.” Somebody finds almost anything objectionable these days. I personally find any political speech not based on fact to be objectionable, which probably covers at least 90 percent of it. Others may think insistence on facts constitutes some kind of aggression that they find objectionable.
Nevertheless, suppose for the sake of argument that there exists some political speech that is not objectionable by anyone’s definition, but a social media company disagrees with it and blocks it. The president thinks the companies are blocking unobjectionable conservative speech, and he wants the FCC to do something about it.
Legal barriers to FCC action may be insurmountable. Section 230 of the Communications Decency Act does not even mention the FCC. I’m no lawyer, but that sounds like a pretty good clue that Section 230 did not authorize the FCC to do anything. Cass Sunstein, a noted Harvard law professor who served as the top regulatory official in the Obama administration, further notes that the meaning of the statute is so clear that there may be no room for any regulatory agency to interpret its text.
But aside from the legal issues, there are economic and analytical challenges to the rulemaking the president appears to want. When the FCC created its Office of Economics and Analytics in 2018, it amended the Code of Federal Regulations to give the new office responsibility for preparing “a rigorous, economically-grounded cost-benefit analysis for every rulemaking deemed to have an annual effect on the economy of $100 million or more.” Surely a regulation affecting social media companies’ ability to curate content would have more than $100 million in annual economic impact. That would trigger the FCC’s own rule requiring a thorough cost-benefit analysis.
The phrase “cost-benefit analysis” immediately generates math anxiety, but there are some really important issues FCC analysts would need to address before doing any math. I want to focus on those issues because they’re often neglected.
It’s clear from Executive Order 12866, the Office of Management and Budget’s guidance to agencies on regulatory impact analysis, and textbooks on benefit-cost analysis that the first step in economic analysis of regulation is to identify and assess the problem the regulation is supposed to solve. Thus, the FCC would first need to figure out whether there is a widespread, systematic problem in need of a solution.
One source of evidence could be the more than 16,000 complaints about online platforms discriminating against users based on their political viewpoints that the executive order says the White House received in 2019. The executive order promises to make these complaints available to the Justice Department and the Federal Trade Commission; they should also be furnished to the FCC.
However, the mere existence of thousands of unverified consumer complaints proves little. To consider these complaints as evidence, the FCC or any other concerned government agency would first have to separate genuine complaints by real people from any duplicate or fraudulent complaints that were submitted by political advocates, Russian bots, or other nefarious actors just to puff up the numbers. (Research and congressional testimony by my RSC colleagues finds that regulatory agencies generally do a good job of this when considering comments on regulations generated by mass comment campaigns, so sorting out the real complaints should not be too difficult.)
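To make that sorting step concrete, here is a minimal sketch of how an analyst might screen out mass-submitted duplicates. The field name, normalization rule, and repeat threshold are all hypothetical illustrations, not anything specified in the executive order or by the FCC:

```python
from collections import Counter

def keep_unique_complaints(complaints, max_repeats=5):
    """Drop complaint texts that appear many times verbatim.

    `complaints` is a list of dicts with a hypothetical "text" field.
    Texts repeated more than `max_repeats` times (after normalizing
    case and whitespace) are treated as likely mass-campaign
    submissions and excluded; independently worded complaints survive.
    """
    def norm(text):
        # Collapse case and whitespace so trivially edited copies match.
        return " ".join(text.lower().split())

    counts = Counter(norm(c["text"]) for c in complaints)
    return [c for c in complaints if counts[norm(c["text"])] <= max_repeats]
```

A real screening effort would of course use richer signals (submission timing, IP patterns, contact-information checks), but even this crude filter illustrates why raw complaint counts overstate the evidence.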
The agency would then need to identify which complaints actually pertain to discrimination based on the speaker’s political viewpoint – conduct that is relevant to the rulemaking – rather than other matters not relevant to the rulemaking. The FCC acquired ample experience doing precisely this during the Restoring Internet Freedom proceeding. A commenter asked the commission to include in the record thousands of consumer complaints against Internet service providers, but the commission concluded that “the overwhelming majority of these informal complaints do not allege conduct implicating the Open Internet rules.” (See the Restoring Internet Freedom rule, p. 7913.)
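The relevance triage could begin with something as simple as a keyword pass, as in this sketch. The term list is entirely hypothetical, and a keyword match would only route a complaint for human review, not settle whether it actually alleges viewpoint discrimination:

```python
# Hypothetical triage vocabulary; a real review would be far broader
# and would still require human judgment on every flagged complaint.
POLITICAL_VIEWPOINT_TERMS = {
    "conservative", "liberal", "political", "viewpoint",
    "partisan", "election", "censored",
}

def mentions_political_viewpoint(complaint_text):
    # Crude first-pass filter: flag complaints whose text contains any
    # triage term, so reviewers can set aside billing disputes, outage
    # reports, and other matters irrelevant to the rulemaking.
    words = set(complaint_text.lower().split())
    return bool(words & POLITICAL_VIEWPOINT_TERMS)
```

The Restoring Internet Freedom experience suggests most complaints would fail even this generous screen.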
For complaints that do involve political speech, the next step is verifying their credibility. The White House web interface that collected complaints in 2019 reportedly asked for users’ phone numbers and allowed them to include screenshots, so in principle the complaints could be verified. (The White House interface no longer accepts submissions.)
When considering the complaints as data, it is also critical to define conceptually the problem the regulator seeks to solve. Is the problem discrimination against any content based on political viewpoints, or is the problem systematic bias against a particular viewpoint? If the problem is the former, then it may be sufficient to show that an online platform blocked or hindered access to content because of the speaker’s political viewpoint. If the problem is the latter, then it is not clear if a sample of complaints solicited by the administration is a representative sample that would allow regulators to conclude that a platform systematically discriminated against conservative viewpoints.
Alternative explanations for observed patterns in the data should also be considered. Could observed “discrimination” be random error? Did the company believe it was responding to user preferences? Is the sample biased in some way, since it was solicited by an administration seeking to prove discrimination against conservative viewpoints?
If the FCC concludes that the complaints demonstrate that one or more social media companies discriminate based on users’ political viewpoints, it would then need to consider whether the data show the discrimination is significant and systematic. Given the volume of posts, tweets, and other online messages, even thousands of complaints may amount to nothing more than an anecdotal drop in the bucket.
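The drop-in-the-bucket point is easy to quantify. The sketch below uses an entirely hypothetical platform-volume figure for illustration; the 16,000 figure is the complaint count cited in the executive order:

```python
def complaint_rate_per_million(num_complaints, total_posts):
    # Express complaints as a rate per million posts, to put a
    # large-sounding raw count in the context of platform volume.
    return num_complaints / total_posts * 1_000_000

# Hypothetical illustration: if major platforms collectively carried
# 200 billion posts in a year (an assumed figure, not a sourced one),
# 16,000 complaints would work out to 0.08 complaints per million posts.
rate = complaint_rate_per_million(16_000, 200_000_000_000)
```

Even before any credibility screening, a rate that small would be hard to distinguish from background noise.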
If the problem is significant and systematic, the next step is to assess whether the problem is likely to continue, or whether marketplace developments are likely to ameliorate the problem. Marketplace developments could include the existence of competing social media companies that do not discriminate, the potential for new competitors, and even the administration’s own jawboning.
I do not have firm answers to most of the questions posed above. But they are the kinds of questions any conscientious regulatory economist would ask to determine whether there is even a problem worth trying to fix.
[1] The executive order uses the term “censorship,” a politically loaded term that is not technically accurate since the companies are private actors rather than governments.