In June, President Joe Biden revoked a Trump-era executive order that attempted to ban TikTok—a popular social media application owned by a Chinese company. Back in 2020, when then-President Donald Trump first tried to force the sale of TikTok to an American company under threat of a ban, several commentators detailed how U.S. adversaries could use TikTok for espionage, perhaps even by collecting sensitive data on children. These analysts rightly identified a key concern with the app, but another threat looms, and America may be unprepared to respond.
Specifically, the threat of false information on an app like TikTok requires a policy response for which current legal avenues may not be well-suited. The Biden administration should allow the values of safety, accuracy, and legitimacy to guide the development of a new approach to banning TikTok and apps like it.
Failing to combat false political information threatens core democratic functions; however, curbing this information can also undermine democratic values. In an Annual Review of Political Science piece, Jennifer Jerit and Yangzi Zhao discuss the general role of political misinformation in a democracy. They explain how false information threatens aggregate decision-making wisdom and question whether false beliefs can be corrected. Consequently, foreign governments may distort the truth in attempts to deepen societal cleavages into outright divisions, damaging U.S. democracy and weakening U.S. defenses. Such campaigns can sway election outcomes by spreading fabricated news stories and polls and by undermining trust in legitimate news and polling sources.
Specifically, China’s deployment of these methods poses a grave threat to U.S. stability. Armed with a “heavily censored and restricted civil [society] at home,” China works to target “open information environments.” Moreover, evidence from the Covid-19 pandemic suggests a newfound Chinese openness to engaging in false information campaigns. While steps taken by American social media companies to limit false information on their platforms help, it seems prudent to question what steps foreign entities—which have no loyalty to the American system—will take to stop its spread. In all likelihood, they will not take the necessary steps, and they may well spread the bogus information themselves. Thus, the U.S. government must take a more prominent and engaged role.
However, the vulnerabilities that information war exposes in open systems of government lie so deeply at the core of liberal systems that addressing those threats could undermine democratic values. Unlike authoritarians, democracies face normative and institutional constraints on their ability to shore up vulnerabilities. As Laura Rosenberger writes in a review for the Journal of Democracy, “democracies find themselves fighting on a battlefield defined by authoritarians. A focus on control and manipulation dominates this space, creating an ethos fundamentally at odds with democratic institutions and values.” This conundrum speaks to an underlying feature of information war—democratic values prevent democracies from possessing the same tools available to authoritarians.
If, as Rosenberger suggests, “control” defines attempts to fight information wars, then it seems as if any participation in an information war involves controlling information, not upholding a democratic ideal of free-flowing information. As a result, the openness of democracies exposes them to authoritarian information warfare, and democracies risk betraying their own values when they attempt to curb vulnerabilities.
To navigate this dilemma, policymakers and lawmakers need to satisfy the principles of safety, accuracy, and legitimacy. First, any response to an app like TikTok should protect democracy by safeguarding it from the threat of false information attacks originating from foreign governments. Scholars argue that the viability of democracy requires citizens to aggregate their information to make choices. Crucially, this theory relies on a mathematical result, the Condorcet jury theorem: if the individual probability of making a correct decision falls below one-half, potentially because individuals are misinformed, then the probability of a correct collective decision plummets. So, a sound response should safeguard democracy by protecting collective decision-making.
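The jury-theorem logic above can be illustrated with a short calculation. The sketch below is purely illustrative (the function name, accuracy figures, and group size are hypothetical, not drawn from the scholarship cited): it computes the chance that a majority of independent voters reaches the correct answer, showing how a group amplifies small individual advantages and, symmetrically, small individual deficits.

```python
from math import comb

def collective_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the correct collective decision.
    (Condorcet jury theorem setup; n is assumed odd to avoid ties.)"""
    majority = n // 2 + 1
    # Sum the binomial probabilities of all outcomes where at least
    # a majority of the n voters decide correctly.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(majority, n + 1))

# Individuals only slightly better than chance: the group far outperforms them.
print(collective_accuracy(0.55, 101))
# Misinformation pushes individuals just below chance: the group does
# far worse than any single member.
print(collective_accuracy(0.45, 101))
```

With 101 voters, nudging individual accuracy from 55 percent down to 45 percent flips the collective from highly reliable to highly unreliable, which is why even modest misinformation effects can threaten aggregate decision-making.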
Second, the response should guard against willful and accidental mistakes by increasing the number of people who review intelligence related to bans and by ensuring that some of those people are not part of the executive branch. Unilateral action comes across as undemocratic. Additionally, “stovepiping” plagues executive action and leaves it susceptible to the misapplication of national security powers. Under one definition, stovepiping occurs when a president accesses raw, unfiltered intelligence. As an unintended consequence, a president may act with incomplete, insufficient, or inaccurate information, such as when a lack of technical knowledge leads them to overestimate the severity of a problem. As an intended consequence, a president may pick and choose the information that fits their political narrative, such as one that fosters fear of foreign threats right before an election. Taking action to limit free access to specific platforms is serious. While justifications may exist in certain circumstances, the law should insulate such action from the clumsiness of inexperienced principals and from presidents who may misuse the power for political gain.
Finally, the response should involve the direct, not delegated, authority of Congress to ensure that the process of addressing false information and threats to democratic stability does not itself become a threat. As political psychologists demonstrate, an “open policy-making style [can avoid] the appearance of undue secretiveness,” which in turn confers legitimacy on certain actions – in other words, a deliberative process can help ease the domestic constraints that often hinder foreign policy decision-makers. For many reasons, controversy surrounds banning apps. The secrecy associated with unilateral, West Wing decision-making exposes bans to even more controversy. It potentially even opens the door for conspiracy theories and additional false information. When people don’t know what goes on behind closed doors, they may assume the worst. By ensuring that a representative, deliberative body reviews and filters intelligence, and by providing broader transparency, decisionmakers can reduce the opaqueness surrounding prohibitions on apps like TikTok.
Unfortunately, the legal avenues explored during the attempted ban of TikTok failed to satisfy these principles and proved unreliable. The Biden administration and Congress owe it to the American people to develop a better avenue for a future ban of TikTok and apps like it.
Trump attempted to ban TikTok by invoking the International Emergency Economic Powers Act (IEEPA). This law grants the president the authority to address economic concerns related to national security; Trump invoked it to prohibit transactions between Americans and the Chinese parent company of TikTok.
The IEEPA grants the president fairly broad powers to deal with national security issues, but the law also places some critical limitations on those powers. The law states that the president may “investigate, regulate, direct and compel…prevent or prohibit, any acquisition, holding, withholding, use, transfer, withdrawal…or transactions involving, any property in which any foreign country or a national thereof has any interest.” This language illustrates the broad scope of presidential authority. Admittedly, this clause suggests the president possesses the power to ban TikTok. However, the IEEPA also states, “The authority granted to the President by this section does not include…any postal, telegraphic, telephonic, or other personal communication, which does not involve a transfer of anything of value,” suggesting that the president’s authority does not extend to social media applications that simply facilitate communication rather than the transfer of money.
Related to this exclusion, plaintiffs argued that the TikTok ban prohibited personal communication between Americans—communication that the president’s authority under the IEEPA explicitly does not reach. Moreover, lawyers for TikTok argued that the platform functions as a “news-wire,” making content on the platform “informational materials,” which the IEEPA also excludes from the list of prohibitable categories. The Trump administration countered that the executive order in no way directly banned personal communication or informational materials; any effect on those categories was merely an indirect result of banning the app. Regardless, as Judge Carl J. Nichols explained in granting a preliminary injunction against the ban, the text of the IEEPA clearly prohibits even “indirectly” banning informational materials and personal communications.
Even if the judge had found that the IEEPA could be used to ban TikTok, this method still would have raised concerns because it was a unilateral action. While it potentially satisfied the principle of safety, it did not satisfy the principles of accuracy and legitimacy. Using an executive order to go after such a popular app did not allow for public deliberation and dissent. It also made the action quite vulnerable to criticisms of political opportunism.
The administration could have also accomplished the ban of TikTok through a Committee on Foreign Investment in the United States (CFIUS) review process. Since 1975, CFIUS has possessed the authority to prevent mergers, acquisitions, and takeovers that threaten American national security. Thanks to the Foreign Investment Risk Review Modernization Act of 2018 (FIRRMA), this executive branch committee can exercise its power in the realm of data security. More specifically, FIRRMA gives CFIUS the ability to review transactions that involve the collection of “sensitive personal data on U.S. citizens.” Therefore, CFIUS could prevent a merger, acquisition, or takeover involving TikTok if it believed the company’s collection of American users’ private data threatened national security.
And, as Robert Chesney wrote for Lawfare in 2020, CFIUS would not have needed to review a new transaction to exercise this authority. The committee could have retroactively blocked the deal that allowed ByteDance (a Chinese company) to acquire Musical.ly, the app it merged into TikTok in 2018.