Facebook just banned the posting of “deepfake” videos. Congress should require other platforms to follow suit. “Deepfake” refers to an image, audio recording, or video that has been altered using artificial intelligence (AI), so that while the presentation seems completely authentic, what one sees or hears did not actually happen. For example, in April 2018, comedian Jordan Peele released a video in which former President Barack Obama appeared to say “We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things,” followed by a series of controversial and profane statements. The video soon revealed itself to be a deepfake, with a split-screen showing both the artificial Obama and the real Peele as they urged people to be careful about their online consumption of news and to be sure that their news sources are reliable.
In April 2019, a start-up called Canny AI released a video featuring a variety of world leaders appearing to sing John Lennon’s song “Imagine.” None actually did. Although the technology behind the production of deepfakes is still in its early stages, already “it is hard for some people to tell what is real and what is not,” according to Subbarao Kambhampati, a computer science professor.
There are deep concerns about the impact that deepfakes could have on politics. In 2018, Senator Mark Warner, the Senate Intelligence Committee’s ranking Democrat, stated, “I’m much more worried about what could come next – could bad actors target kids with fake videos from people they trust?” Warner added, “This ultimately begs the question – how do you maintain trust in a digital-based economy when you may not be able to believe your own eyes anymore?”
Representative Adam Schiff, the chair of the House Intelligence Committee, which held a hearing on deepfakes and AI in June 2019, said, “I don’t think we’re well prepared at all. And I don’t think the public is aware of what’s coming.” A December 2019 New York Times article reported that “[f]ew politicians have teams to spot false statements about them online, or to fight back before they spread.” Earlier this year, Facebook refused to remove a video of Nancy Pelosi that made it appear as though she were drunk. The video went viral and, although it was created without deepfake technology, it shows the speed and ease with which inauthentic videos can spread online.
Even those who believe in detection technology as a possible solution acknowledge that, due to the speed with which deepfake technology is improving, deepfake detectors “will require nearly constant reinvention.” The result will be an arms race between those who aim to perfect deepfakes, which are already surprisingly convincing, and those who seek ways to expose them, with those who make the fakes currently in the lead.
Moreover, the spread of deepfakes will feed into the narrative that the media is manipulated and cannot be trusted, further undermining public confidence in the free press.
Claire Wardle, the executive director of First Draft, a global nonprofit focused on digital truth and trust, argues that some of the responsibility for handling deepfakes falls on the technology companies and social media platforms; they must develop policies for handling attempts to share such content. She also calls on people to recognize their own responsibility and power with regard to the spread of deepfakes, saying that “if you don’t know 100 percent, hand on heart, ‘This is true,’ please don’t share, because it’s not worth the risk.”
Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, points out that the development of deepfakes is one more reason that we need to be able to establish the authenticity of the source of whatever we see or read on the internet. Etzioni advocates using digital signatures certified by central authorities, along with encryption or blockchain. (He holds that in effect all online activities, including emails and image uploads, should carry an authentic digital signature by default.) Still, he recognizes the need for people to be able to voice their opinions anonymously. I suggest that we need an e-Hyde Park, where people can continue to post whatever they want without disclosing who they are, but everyone will be on notice that the park may include many fakes.
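The signature-based approach Etzioni describes can be illustrated with a minimal sketch. The key names and the use of an HMAC are my own illustrative assumptions: a real deployment would use asymmetric signatures (such as Ed25519) issued under a certified key, so that anyone could verify content without holding a secret; the standard-library HMAC below simply stands in for that idea.

```python
import hmac
import hashlib

# Hypothetical secret held by a certifying authority; in a real system
# this would be an asymmetric private key, not a shared secret.
AUTHORITY_KEY = b"demo-secret-held-by-certifying-authority"

def sign(content: bytes) -> str:
    """Return a hex tag binding the content to the authority's key."""
    return hmac.new(AUTHORITY_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check that the content is unaltered since it was signed."""
    expected = sign(content)
    # compare_digest avoids timing side channels when comparing tags.
    return hmac.compare_digest(expected, signature)

video = b"original video bytes"
tag = sign(video)

print(verify(video, tag))                     # True: content unaltered
print(verify(b"deepfaked video bytes", tag))  # False: content was changed
```

The point of the sketch is the asymmetry of trust: any alteration to the signed bytes, however convincing the resulting fake looks to a human viewer, breaks the signature check.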
The startling advances in deepfakes raise a question even more profound than that of how democratic politics will survive this new, powerful assault: Do we as a society have any tools to deal with whatever the technologists spring on us, at a seemingly ever-increasing pace? The New York Times sent reporters to hang out with a group of young engineers working on deepfakes, taping their conversations (see episode 21 of The Weekly). At one point, we learn that the deepfake video the engineers made is almost perfect, but there remain some telltale signs a trained eye can use to detect that it is not authentic. The engineers, with the enthusiasm of kids opening their Christmas gifts, rush to perfect the fake, to ensure no one can tell it is a fake. Asked whether they wonder about the consequences of the witch’s brew they are concocting, the engineers hesitate for a moment. Then they press on: “Well, but would it not be neat to make it?”
I often argue against those who urge us to ban this or that technology (as San Francisco did with facial recognition), on the grounds that many new technologies have both very promising and very troubling uses, so we should ban the harmful applications while welcoming the rest. Deepfake technology, in contrast, is so one-sided and so harmful that it should be banned outright.
Amitai Etzioni is a University Professor and professor of international affairs at The George Washington University. His latest book, Reclaiming Patriotism, was published by University of Virginia Press in 2019 and is available for download without charge.