The Ethics of Artificial Intelligence in Intelligence Agencies

The defense community has already begun a healthy dialogue about the ethics of AI in combat systems

Some of society’s brightest minds have warned that artificial intelligence (AI) may lead to dangerous unintended consequences, yet leaders of the U.S. intelligence community—with its vast budgets and profound capabilities—have yet to decide who within these organizations is responsible for the ethics of their AI creations.

When a new capability is conceived or developed, the intelligence community does not assign anyone responsibility for anticipating how a new AI algorithm may go awry. Even if scenario-based exercises were conducted, the intelligence community provides no guidelines for deciding when a risk is too great and a system should not be built, and it assigns no one the authority to make such decisions.

Intelligence agencies use advanced algorithms to interpret the meaning of intercepted communications, identify persons of interest, and anticipate major events within troves of data too large for humans to analyze. If artificial intelligence is the ability of computers to create intelligence that humans alone could not have achieved, then the U.S. intelligence community is already investing in machines with such capabilities.

To understand the ethical dangers of AI, consider the speed-trading algorithms commonly used in the stock market—an example of AI employed in a highly competitive, yet non-lethal, environment. A computer algorithm floods the market with hundreds or thousands of apparently separate orders to buy a stock it already holds. Other algorithms take note of this sudden demand and start raising their buy and sell offers, confident that the market is demanding a higher price. The first algorithm registers this response and sells its shares at the newly higher price, making a tidy profit. It then cancels all of its buy orders, which it never intended to complete.
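To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of the sequence just described. The class names, order counts, and price adjustment are assumptions chosen for illustration only; real trading systems are vastly more complex.

    # Purely illustrative sketch of the spoofing sequence described above.
    # All names, prices, and thresholds are invented for illustration only.

    class NaiveMarketMaker:
        """A competitor that raises its quote when it sees a surge in buy orders."""

        def __init__(self, quote: float):
            self.quote = quote

        def observe_buy_orders(self, count: int) -> None:
            # Interpret a flood of buy orders as genuine demand and raise the price.
            if count > 100:
                self.quote *= 1.02  # assumed 2 percent adjustment


    class Spoofer:
        """The first algorithm: it already holds shares and never intends to buy more."""

        def __init__(self, shares_held: int, cost_basis: float):
            self.shares_held = shares_held
            self.cost_basis = cost_basis
            self.open_buy_orders = 0

        def flood_buy_orders(self, count: int) -> int:
            # Place buy orders it has no intention of completing.
            self.open_buy_orders = count
            return count

        def sell_and_cancel(self, market: NaiveMarketMaker) -> float:
            proceeds = self.shares_held * market.quote   # sell at the inflated quote
            profit = proceeds - self.shares_held * self.cost_basis
            self.open_buy_orders = 0                     # cancel the orders it never meant to fill
            return profit


    if __name__ == "__main__":
        market = NaiveMarketMaker(quote=10.00)
        spoofer = Spoofer(shares_held=1_000, cost_basis=10.00)

        market.observe_buy_orders(spoofer.flood_buy_orders(count=5_000))
        print(f"Profit from gaming the competitor: ${spoofer.sell_and_cancel(market):,.2f}")

The point is not the arithmetic but the pattern: the spoofer profits purely by exploiting how its competitors interpret signals in the market.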

The sequence of events takes place in less than one second, faster than any human could observe what was occurring, let alone decide to buy, sell, or hold. The Securities and Exchange Commission reports that only 3 to 4 percent of stock orders are filled before they are canceled, an indication of how widespread this practice has become. The first algorithm was successful because it gamed the system; it understood how its competitors collect and analyze information in the environment, and it used the competitors' decision-making criteria against them.

Now imagine these algorithms designed not to trade stocks but to sense and anticipate the moves of foreign governments. Envision the intelligence community creating algorithms that listen to enemy communications in every language and dialect, digest reports from human spies around the world, monitor our entire terrestrial and space-based network of sensors, and then fuse all of this information into AI-generated all-source analysis. If one day a computer warned of an imminent military, economic, or cyber attack, what would the United States do with this intelligence?

In today's intelligence community, an analyst would likely write a paper, spend days (or hours, in a truly urgent scenario) vetting the assessment with colleagues and senior officers, and then share the assessment with policymakers. Yet the value of such algorithms resides in their speed: once an AI system detects an imminent attack, its worth lies in enabling a response before the attack can take place.

 Like stock market speed traders looking for their competitive edge, adversaries of the United States can be expected to relentlessly seek ways to use the nation’s algorithms against it. AI could provide strategic advantages but also create enormous vulnerabilities when the United States negotiates new treaties and trade agreements or tries to peacefully resolve a conflict.

For example, imagine the United States and its allies are negotiating a nuclear treaty with a hostile regime intent on building nuclear weapons. The success of such negotiations requires all parties to reach an agreement that satisfies each participant's most fundamental requirements, and the art of negotiation lies in discovering which terms an adversary is and is not willing to negotiate. If AI creates the all-source intelligence that policymakers use to craft their negotiating position, the adversary regime would benefit enormously from manipulating the inputs and calculations the system uses to produce its assessments. If State Department diplomats arrive at the negotiating table with an inflated perception of the adversary's bottom line, the adversary has predetermined the negotiation's outcome. The same scenario could play out in trade negotiations and peace accords.

The Department of Defense is already grappling with the war-fighting equivalent of this issue, which revolves around autonomous combat systems that can identify targets, strike, assess, and strike again and again—faster than any human could identify the problem and respond. Such a system might include a swarm of self-piloted unmanned aerial vehicles, like those envisioned in the Collaborative Operations in Denied Environment program that the Defense Advanced Research Projects Agency is already developing. The Department of Defense issued a directive, “Autonomy in Weapon Systems,” that requires autonomous weapon systems to include a human interface and to allow a human operator to intervene at any time.

 Yet the intelligence community has no such directive.

Consider what could happen if the intelligence community creates a policy similar to the Pentagon directive and requires that a human operator be allowed to intervene at any moment. One day the computer warns of an imminent attack, but the human analyst disagrees with the AI-generated assessment. Does the CIA warn the president that an attack is about to occur? How is the human analyst's assessment weighed against the AI-generated intelligence?

 Or imagine that a highly sophisticated foreign country infiltrates the most sensitive U.S. intelligence systems, gains access to the algorithms and replaces the programming code with its own. The hacked AI system is no longer capable of providing accurate intelligence on that country.

The defense community has already begun a healthy dialogue about the ethics of AI in combat systems, a discussion that remains critically important and must continue as technologies evolve. Meanwhile, the stock market's “flash crash” of 2010 showed that the dangers of autonomous algorithms are not unique to weapon systems. The intelligence community should take a proactive approach to the risks of AI by creating policies that pursue innovation in a safe and deliberate manner and that build in safeguards and oversight. It should start by developing standards for AI risk assessments, assigning responsibility for conducting such assessments, and designating authority for program oversight.

 Cortney Weinbaum is a former intelligence officer in the U.S. intelligence community. She is a national security researcher with the Intelligence Policy Center at the nonprofit, nonpartisan RAND Corporation. 
