Artificial Intelligence and the Rise of the Bots

Logos of Twitter and Facebook are seen through a magnifier in this illustration taken in Sarajevo, Bosnia and Herzegovina.

An autonomous, artificially intelligent entity could task itself with inciting hatred among millions, even without being designed for that purpose.

The 1984 classic The Terminator envisions a future taken over by warrior robots on a quest to eradicate humans. The story is a classic Frankensteinian tale of a hubris-fueled human invention getting out of hand and eventually turning against its creator. Recent developments in defense-related artificial intelligence (AI), such as autonomous drones, offer eerie reminders of the sci-fi plots of yesterday. And yet, while we may be tempted to ruminate on future robot armies, a more immediate AI threat is brewing right under our noses: heteromated terrorism, which occurs when humans engage in violent acts at the behest of AI technology.

Computer scientists Hamid Ekbia and Bonnie Nardi coined the term heteromation to describe “an extraction of economic value from low-cost or free labor in computer-mediated networks.” This refers to commercial technology that pushes critical tasks to the end users: humans. We can think of a person scanning products at the supermarket self-checkout line at the prompting of a machine, or someone uploading valuable personal information at Facebook’s request, helping enrich the company with data that can be turned into ad revenue.

If automation involves machines doing the work of humans, heteromation refers to the way machines task humans with essential actions. As Ekbia and Nardi argue, heteromation turns “artificial intelligence on its head” by extracting free or low-cost labor from willing human participants.

Given the prevalence of so-called “bot” agitation on social media, and given the fact that social media serves as a primary platform for political recruitment, it is not a stretch to envision acts of heteromated terrorism becoming a trend. Filmmakers and authors have imagined the AI threat as a rational process, whereby non-human entities accumulate power and eventually move to systematically replace humans. But heteromated terrorism is something arguably more dangerous—and also more likely to occur in the near term, as it marries the power and reach of algorithmic content production and targeting with humans’ propensity for irrational outbursts.

For example, an individual could be radicalized at home through the relentless input of bot-generated content online, boosted and repeated ad nauseam into an echo chamber the target human considers credible. The human would then take up arms and fight for a cause of the network’s choosing.

Heteromated terrorism could manifest itself as a conscious enterprise, sponsored by a state or a sophisticated non-state actor that lets bots loose on unsuspecting social-media users. It could also emerge as the natural byproduct of AI engagement with today’s acerbic body politic—autonomous bots radicalizing themselves online and in turn becoming independent agents of subversion. Regardless of whether it is planned or spontaneous, the threat of heteromated terrorism is upon us.

The Two Radicals

To realize how likely the rise of heteromated terrorism is, we need only look back at the highly publicized radicalization of two political actors—one artificial and one human—which took place in 2016.

The first example is the seemingly innocuous radicalization of “Tay,” a Microsoft-created AI bot. Tay was released onto Twitter under the handle @TayTweets, and within twenty-four hours, users had successfully taught it to spew conspiracy theories and racist tweets. “I fucking hate feminists and they should die and burn in hell,” wrote Tay in a March 26 tweet. Four minutes later, Tay tweeted, “Hitler was right I hate the Jews.” Some of the outbursts merely parroted what other users had written—but some of the comments were developed by the bot itself after it picked up topics and lingo from the Twitterverse. Microsoft shut down Tay, but the lesson was clear: an autonomous, artificially intelligent entity could task itself with inciting hatred among millions, even though it had not been designed for that purpose.

The second radicalization example is that of Edgar Maddison Welch, a human from Salisbury, North Carolina. On December 4, 2016, Welch walked into the Comet Ping Pong pizzeria in Washington, DC, armed with a semi-automatic rifle and on a mission to rescue children he thought were being held captive. Welch had been motivated by extremist media outlets like InfoWars and Breitbart, as well as by Twitter and message boards that had become increasingly prone to promoting conspiracy theories. According to Welch’s echo chamber, the pizza parlor in question was host to a pedophile ring in which then-presidential candidate Hillary Clinton was supposedly implicated. The accusation was blatantly untrue and absurd, but like many extreme viewpoints, it gained a gullible audience online.

Tay and Welch represent two sides of what a heteromated terror equation might look like. On one side would be an artificially intelligent network pushing extremism, and on the other, a willing human nudged to act on behalf of the network’s objectives. What makes heteromation such a pressing concern today is that we already know Welch’s radicalization was likely influenced, if not by sophisticated artificial intelligence, then at least by a cadre of relatively simple bots that helped amplify the so-called Pizzagate conspiracy theory over social media.

The Rise of the Bots