Twitter has a bot problem. Every national tragedy now brings another episode of Twitter bots amplifying a false message, and the deadly school shooting in Parkland, Florida, last month was no exception. A day after the tragedy, Hamilton 68, a dashboard that tracks Russian-linked influence accounts on Twitter, reported that #NRA, “shooter,” and “Florida” were all trending topics among Russian influence campaigns.
Of Twitter’s 328 million monthly active users, an estimated 5 to 15 percent, or 16 to 49 million, are bot accounts. While the social media platform has addressed this problem, it is caught between profits and patriotism. With advertising revenue accounting for nearly 24 percent of Twitter’s $13.7 billion value, the company has little incentive to decrease user numbers. Because the use of bots to trend a topic or generate artificial likes on Twitter is now commonplace, policymakers have called the massive social media platform to account. Mounting evidence suggests the company is not doing enough to identify and shut down bot accounts that actively spread disinformation. To put a stop to these bots, Twitter should work with the U.S. government.
Twitter is aware of its weakness and is actively trying to tackle it. Last month, Twitter’s director of public policy and philanthropy, Carlos Monje, testified before the Senate Committee on Commerce, Science, and Transportation that the company is improving its account verification technology in an effort to crack down on malicious accounts. In fact, Monje claims that Twitter is constantly fighting malicious automation by challenging four million accounts per week for further verification. Yet the bots keep coming back. Monje admitted as much, calling it a “cat and mouse game.” There have also been no updates or major amendments to the platform’s rules and regulations regarding possible misuse. Further, this internal self-improvement effort lacks transparency.
If technological progress created the problem of bots, it should also be able to solve it. Artificial intelligence has proven very useful in the detection of bot and spam accounts. Twitter claims to have adopted a machine learning strategy that shuts down 75 percent of suspected bot accounts before their first tweet. Nonetheless, disinformation still runs rampant on Twitter, especially now that the platform’s verification system is down, making new personal accounts indistinguishable from bot accounts.
The digital propagation of disinformation is not just Twitter’s problem: the U.S. government should also be working to find ways to counter the spread of disinformation. After all, Twitter is an American company, and the proliferation of traffic-generating bots has influenced the way people receive and process news, as demonstrated by the recent shooting in Florida.
There’s no question that bots spreading false information played a role in the Brexit vote and the 2016 U.S. presidential election, among other contests. Malicious actors, backed by geopolitical rivals creating bots to spread false information and jeopardize democratic elections, have ushered Twitter into an ongoing information war in which the United States cannot remain on the sidelines.
The United States is not standing idle in the face of these technology-based threats. The Defense Advanced Research Projects Agency (DARPA), an agency of the U.S. Department of Defense, analyzes technology risks and ways the United States can combat them. In 2015, DARPA conducted a Twitter bot challenge, which asked teams of computer scientists to target bots on a specific topic. The winner of the challenge was able to correctly identify thirty-nine bots in a stream of over four million messages from seven thousand accounts. This is a clear example of a partnership between the technology and policy communities, but the work should not stop there. Engagement opportunities between the U.S. government and the technology sector must occur more frequently and better translate into behavioral changes in both the public and private sectors, perhaps even mandated by law.
Twitter’s acknowledgement of its bot problem is a good start. However, Twitter cannot solve it until social responsibility takes priority. By shifting its focus away from the bottom line and toward a public-private partnership, Twitter would have more resources to devote to technology development and policies to counter bots spreading malicious information. If Twitter is serious about a successful strategy against bots amplifying malicious disinformation, it must focus its energies on cultivating a relationship with the U.S. government.
Claire Prichard is a former researcher at the Center for a New American Security (CNAS).