A lengthy manifesto about a purported demographic conspiracy posted to an online forum. An attempt to broadcast the attack on a livestreaming site. Etchings of white supremacist figures, memes, and ideas on weapons used in the attack. The initial evidence related to the horrific attack that left ten dead at the Tops supermarket in Buffalo suggests an intrinsic link between this act of racially motivated violent extremism and other terrorist attacks in Texas, California, Germany, Norway, and elsewhere.
The perpetrators of these terrorist attacks each drew significant aspects of their “playbooks” from one source. In March 2019, a white supremacist lone gunman disseminated a seventy-five-page manifesto entitled “The Great Replacement” on social media and simulcast the murders of fifty-one mosque-goers in Christchurch, New Zealand, using weapons labeled with white supremacist references and messages. The perpetrator and many aspects of his attack were allegedly praised by the Buffalo supermarket shooter in his own manifesto, and several aspects—including the decision to livestream the attack—were replicated during his assault in Buffalo.
However, white supremacists planning attacks were not the only people who studied the Christchurch playbook. The Christchurch attack constituted a turning point in the way major social media companies—whose services were exploited by the shooter during the attack—view and address the problem of terrorist use of social media platforms. In the wake of Christchurch, particularly in response to the livestreaming of the attack, the consortium of major online service providers known as the Global Internet Forum to Counter Terrorism (GIFCT) formed a rapid response mechanism between its member companies to react to online terrorist content. Now, when a GIFCT member company identifies a livestream of a terrorist act, it can immediately remove the broadcasting account, capture the unique digital signature of the video, transmit the information to other companies, and prevent the video’s spread. These actions were codified into a first-of-its-kind joint pledge by social media companies and governments to “take transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content … on social media.” In reference to the incident that prompted it, the document became known as the Christchurch Call.
Fortunately, initial evidence from the Buffalo attack shows that the online counterterrorism architecture developed by social media companies after Christchurch is paying dividends. The Christchurch attacker broadcast his shooting on Facebook Live for seventeen minutes, during which time over 200 people saw the original video. Before Facebook could remove the video, as many as 4,000 people viewed it on the platform, and copies of the broadcast had spread rapidly on other social media sites. In contrast, only two minutes after the Buffalo shooter allegedly began livestreaming on Twitch, the company took the video down and shared its unique hash identifier with other companies, allowing other providers to automatically detect existing uploads of the video and block new uploads before they were posted. Social media companies have taken other steps in the wake of the attack, including removing the shooter’s social media profiles and auto-blocking copies of his manifesto.
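The hash-sharing mechanism described above can be sketched in miniature. Everything here is hypothetical and simplified: a cryptographic hash (SHA-256) stands in for the perceptual video fingerprints real systems use, which are designed to survive re-encoding and cropping, and the class names are illustrative rather than drawn from any actual GIFCT implementation.

```python
import hashlib

# Illustrative stand-in: real hash-sharing systems use perceptual hashes
# that tolerate re-encoding; SHA-256 only matches byte-identical copies.
def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

class SharedHashList:
    """Hypothetical cross-platform blocklist of known violent-content hashes."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def flag(self, content: bytes) -> str:
        # One member company flags content; its hash is shared with all members.
        h = fingerprint(content)
        self._hashes.add(h)
        return h

    def should_block(self, upload: bytes) -> bool:
        # Any member checks new uploads against the shared list before posting.
        return fingerprint(upload) in self._hashes

# One platform flags the original video; others can then block re-uploads.
shared = SharedHashList()
shared.flag(b"original livestream bytes")
print(shared.should_block(b"original livestream bytes"))  # re-upload detected
print(shared.should_block(b"unrelated video bytes"))      # normal content passes
```

The design point the sketch captures is that no company needs to transmit the video itself, only its compact fingerprint, which is why the mechanism can propagate between providers within minutes of an attack.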
By most metrics, the post-Christchurch incident response protocol functioned as designed during the Buffalo attack. Not everyone agrees, however. Many, especially elected officials, ascribe to online service providers a “zero-fail” mission for removing terrorist content, a standard that is simply not feasible. For instance, during a press conference following the attack, New York governor Kathy Hochul said, “The fact that this act of barbarism … could be livestreamed on social media and not taken down within a second says to me that there is a responsibility out there” for social media companies to “be more vigilant in monitoring social media content.” It is understandable why Governor Hochul and others would like to hold social media companies to this standard. Every single view of a livestream of a terrorist attack is one too many, and those who hold the bar this high may be describing an ideal scenario to provoke companies to live up to the challenge. In some circles, however, this viewpoint is less a best-case scenario and more a societal expectation of tech companies, revealing several important points about collective metrics for online counterterrorism successes and failures in a post-Christchurch world.
First, the role in which social media companies have been most effective in online counterterrorism is harm reduction, not incident prevention. The latter is a job for law enforcement and counterterrorism authorities, not private companies. Social media companies have jurisdiction only over their own platforms, and they have access to a very limited set of tools to prevent a user on their platform from engaging in a specific offline behavior. Unlike law enforcement tools such as investigations, arrests, and prosecutions, social media’s online counterterrorism tools—account suspensions, content removal, and content moderation, for example—were designed only to limit the spread or mitigate the effect of a particular type of online content.
From this perspective, it is hard to articulate a counterterrorism goal that could have been improved had social media companies “vigilantly monitored” and instantaneously taken down the Buffalo shooter’s livestream (as opposed to two minutes after its initiation). In this scenario, the attack would have gone on as planned, with all of its horrifying results, and even in the absence of a livestream or manifesto, white supremacists would have held up its example online to radicalize potential followers. Yet, through their immediate actions, Twitch and other companies were able to minimize the spread, and thus the harm, of the shooter’s online presence.
More importantly, it is critical to hold online and offline counterterrorism authorities to the same standard. In the weeks to come, it is likely that governmental authorities beyond Governor Hochul will attempt to bring social media providers to account for their response to the Buffalo attack livestream. If the standard of a zero-fail mission is a guiding principle, policymakers should remember that social media was not the only forum in which potential evidence about the attacker’s ideations could have been found prior to the shooting. In fact, there are reports that the attacker was the subject of a 2021 New York State Police investigation after he reportedly made threatening statements about conducting a shooting at his high school; this resulted in no criminal charges. Lone-actor shooters with prior touchpoints with law enforcement are more common than not, and there is no evidence in this case that the police could have conducted the investigation any differently or brought about a different outcome. However, it is important to recognize that counterterrorism investigations—whether they are conducted by law enforcement or social media companies—ask responsible authorities to find needles in haystacks. They are riddled with complexity and constrained by a variety of factors that can alter the final outcome.
These points underscore that while it can be prudent as a society to ask social media companies to attempt the extraordinary by eliminating all terrorist and violent extremist activity on their platforms, it is unwise to expect them to achieve it. Instead, it is incumbent on policymakers and the public to clarify their visions of what social media companies can and cannot achieve when it comes to their primary counterterrorism responsibility: mitigating the dissemination of violent extremist and terrorist content online. The response of social media companies to the Buffalo attack appears to show that many of the response protocols developed after the Christchurch attack are bearing more fruit than policymakers may give them credit for. In the future, while continued encouragement from the government will be essential to improve these mechanisms, that encouragement should always adhere to a clear-eyed understanding of social media companies’ roles, responsibilities, and abilities in the fight against violent extremism.
Bennett Clifford is Senior Research Fellow at the Program on Extremism at George Washington University.