My Experience with Social Media Restrictions on Free Speech
Speech is too important for technocrats to control. Elected officials and the courts should be the main controllers—and they should control only when there is a clear and present danger to our security and to our democratic process.
Strong voices from both ends of the political spectrum have called on tech companies to be more responsible, to remove from their platforms any material that offends community mores or that manipulates elections. Actually, as I see it, over the last few years the tech corporations have blocked or deleted a staggering number of messages and ads, including material which, if removed from offline publication, would lead even moderate defenders of free speech to go ballistic. Moreover, each tech corporation makes its own rules about which speech it allows and which it blocks. These rules are not subject to public review and are often impossible to figure out. Protecting speech—and figuring out the rare occasions on which people should be denied voice, should be censored—is too important to leave to Facebook CEO Mark Zuckerberg and his fellow tech tycoons.
Some argue that because tech corporations are private companies, they cannot censor; only the government can. Some who are legally minded point out that the First Amendment states that Congress shall make no law abridging the freedom of the press, not that private companies cannot control messages. Also, given the differences in policies among the various companies, if one closes a door, another is likely to leave that door open. Only the government can block access to all media and thus truly censor.
One must note, though, that these companies control a very large share of the communication space and exercise control over many subjects. Hence, if they restrict someone’s access, that person’s speech is greatly limited. Anyone denied a voice by Google, Facebook, and Twitter will find it very difficult to reach the masses through social media.
For many years, the tech companies avoided responsibility for the content that people posted on their social media sites, claiming that they are merely platforms, not publishers. However, more and more public leaders have begun to argue that tech companies should control content. These views reached a high point following the revelations about Russia’s meddling in the 2016 U.S. elections and its drive to sow social discord through coordinated social media misinformation campaigns. The tech companies responded by hiring tens of thousands of moderators to review posts and remove material they consider too violent, lewd, hateful, or misleading. Typically, moderators have as little as ten seconds to review a post. They can hardly take much longer, given the astronomical number of posts that must be reviewed. No wonder their judgment is often highly arbitrary and always rushed. The companies are also increasingly using artificial intelligence algorithms to deny speech. These algorithms seem to incorporate the biases implicit in the mass media, for instance favoring men over women in access to ads for high-paying jobs.
While conducting research on the misuse of social media platforms for a journal of the National Academy of Sciences, I was stunned by the sheer volume of removals and the wide range of grounds that the tech companies can use to justify removing social media posts. For example, in three months, between July and September of 2019, YouTube removed over 8.75 million videos. Of these, over 4.75 million were removed for being spam or misleading. Well, by this standard, I would block one news network, and its followers would likely block the news network I am following. Over 1.35 million videos were removed for violent or graphic content, and over 1.25 million were removed for nudity or sexual content; however, what is considered graphic and sexual varies a great deal from one community to another. Hence, the courts by and large have allowed such speech offline. Why are tech companies being more pious?
The rules regarding what is allowed versus what is banned shift more quickly than fashion at Target. For example, Google and Facebook recently refused to take down ads from the Trump campaign that featured falsehoods about Biden. Last month, however, Google took action against seven ads purchased by the Trump campaign without revealing which ads were banned. More recently, Google announced a revised ad policy that limits the ways political advertisers are allowed to target ads. Twitter first banned all political ads; the company then announced that it would permit non-micro-targeted “cause-based advertising” and ads that refer to politics from exempted news sources. Furthermore, not all bans are permanent, which further confuses the state of affairs. Thus, between January and June of 2019, Twitter “took action” against over one million accounts for violations of the platform’s rules. Three of the types of “action” Twitter takes involve some form of temporary suspension until changes are made or the account ownership is verified. Only the fourth involves permanently suspending an account and barring the violator from creating any new accounts.
In an article for Issues in Science and Technology, I suggested that elected officials need to be much more involved in setting the framework that determines which content tech corporations must remove and which they may not. Specifically:
· Content that directly incites or threatens violence should be removed.
· Tech companies should be required to remove communication that is illegal offline, for instance, child pornography.
· Companies should be prevented from facilitating illegal acts such as sex trafficking and terrorism. For example, Craigslist was made to take down certain classified pages because they were determined to contribute to sexual violence.
· Most other posts should be labeled rather than removed, thus allowing users to ignore the warning and click through. This seems to be a sound approach to sexually explicit content that some people might deem pornographic and others not.
· To protect the public from manipulation by foreign sources as well as extremist domestic ones, the best that can be done is to insist that the source be disclosed. Then the public can decide which sources to trust.
· Congress should regularly review criteria that tech companies use, and the criteria should be public.
I have personally gotten a taste, indeed a mouthful, of what it is like to be at the receiving end of high tech “action.” In September, the University of Virginia Press published my book, Reclaiming Patriotism. In the book, I call for more dedication to the common good. I hence decided to put my pocketbook where my mouth is and made the book open source, which means that people can download it without being charged (and I get no royalties). Still, I wanted people to hear the message, so I found a few dollars and sought to post an ad on Facebook. It is important for what follows to note the tenor of the ad. It read: “Are you tired of DIVISIVENESS and POLARIZATION? You can be a good patriot without being a nationalist. Amitai Etzioni shows how in RECLAIMING PATRIOTISM. Download his new book for FREE here.” I received a terse message from Facebook stating: “Your ad was rejected because it doesn't comply with our advertising policies.”
I tried to find out the reason for the rejection; however, there is no way to call Facebook, so I emailed an appeal. Several days later, I received a list of potential reasons for the rejection. Only one of them applied: Facebook had disapproved the ad because it was “political.” Therefore, Facebook could not publish the ad without verifying the identity of the person behind it.
The verification process involved three steps. First, Facebook sent a code to my cell phone, which I entered online. The next step was to confirm my primary country. For this, I had to supply Facebook with a residential address in the United States. After several days, a letter arrived from Facebook via snail mail, containing a new code, which I used to complete the second step.
There were two options for the third step: I could answer a series of personal questions Facebook posed, or I could submit a notarized document attesting to my identity. I found a notary and scanned and uploaded the completed document. Finally, I resubmitted the ad, and it was published.
The effort took two weeks. I had to go through most of the same steps again when I submitted a different ad. I had often read about the “chilling effect,” but the term did not mean much to me. Now I know better. Every time I consider posting an ad on social media, I wonder whether it is worth the fuss. Others may well feel the same way.
The control of speech, which should be limited and sparse, is too important to be left to technocrats. Elected officials and the courts should be the main controllers—and they should control only when there is a clear and present danger to our security and to our democratic process.
Amitai Etzioni is a University Professor and professor of international affairs at The George Washington University. His latest book, Reclaiming Patriotism, was published by the University of Virginia Press in 2019 and is available for download without charge.