Air Force Secretary: The Law of War and the Power of Computing

September 4, 2018 | Topic: Security | Region: Americas | Tags: Air Force, Artificial Intelligence, Military, Technology, War


"We should be cautious about undermining that society and the legitimacy of nation-states."

At its best, computing in warfare allows us to achieve just objectives to protect the nation and our vital national interests, while minimizing unnecessary destruction and risk to our military and to innocent civilians. I would argue that, to this point in history, computing in warfare has allowed us to make better decisions as combatants. War is a horrible thing, and it remains imprecise, but the jus in bello effect of computers has generally been a movement toward greater precision and narrower applications of force.

We must still face, however, the jus ad bellum effect of computers. Thus far, the application of computing in warfare has not really changed much about the authority to use force, which we still place within the province of the sovereign state. And, to the extent that computing has made militaries more accurate, it has advanced compliance with the principles of the humanitarian law of war.

But, like the canonists in the thirteenth century when the feudal system was dying, we live in a time of tremendous social and technological change. Consider Go, a complicated game of Chinese origin. In 2016, a London laboratory called DeepMind developed AlphaGo, the first computer program to defeat a world champion at Go. The program was trained on thirty million moves played by human experts, and it had some capacity to learn.

Last fall, DeepMind unveiled a new version of the program, AlphaGo Zero, which did not train on any moves from human experts. It learned by playing millions of games against itself. After that training, it took on its predecessor, already the strongest player in the world, and defeated it one hundred games to zero. And a successor program, AlphaZero, then taught itself chess the same way and defeated the strongest chess computers.

Pause to think about that for a moment. A computer program that used no human data taught itself to beat the best machines at a game no human had shown it how to play.
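To make the idea of self-play concrete, here is a minimal sketch in Python. It is emphatically not DeepMind's method: AlphaGo Zero combined deep neural networks with tree search on the vastly harder game of Go, while this toy uses a simple lookup table on the game of Nim (players alternately take one to three stones, and whoever takes the last stone wins). All names and parameter values here are illustrative assumptions. What the sketch shares with the real systems is the essential loop: no human game records, only a program improving by playing against itself.

```python
# A toy illustration of learning by self-play: tabular value learning on Nim.
# No human game data is used; the program improves purely by playing itself.
import random
from collections import defaultdict

ALPHA, EPSILON = 0.5, 0.1   # learning rate and exploration rate (illustrative values)
Q = defaultdict(float)      # Q[(stones_left, move)] -> learned value of taking that move

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones):
    """Epsilon-greedy move selection from the shared value table."""
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

def self_play_episode(start=21):
    """One game of the program against itself; both sides update the same table."""
    stones, history = start, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # Whoever took the last stone wins (+1). Walking back through the game,
    # the sign flips each ply because the two players alternate moves.
    outcome = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (outcome - Q[(state, move)])
        outcome = -outcome

for _ in range(50_000):
    self_play_episode()

# For winnable positions, the greedy policy typically recovers the known
# optimal strategy for Nim: leave your opponent a multiple of four stones.
for s in range(1, 10):
    print(s, "->", max(legal_moves(s), key=lambda m: Q[(s, m)]))
```

The real systems replace the lookup table with a deep neural network and the random exploration with a guided search, but the feedback loop (play yourself, score the outcome, update, repeat) is the same in spirit.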

It is not hard to imagine how machine learning like this will change our lives dramatically, far more and far faster than anything we have experienced in the years since Neil Armstrong walked on the moon. And, to be sure, this kind of Machine Learning will enable new modes of warfare.

That should give us pause. As people of conscience, we are afraid that machines will teach themselves how to win the game irrespective of any moral code, undermining the limitations on the use of force that our societies have built over centuries. Will machines decide what to do based on utility or based on a moral worldview? How will strategists incorporate the use of Artificial Intelligence to influence the decisions of adversaries? How will computers navigate the inherent tension in war between the necessity to win and the need to understand that war is an abnormal condition fought for the sake of a better peace? In particular, how will an Artificial Intelligence weigh the jus ad bellum considerations for going to war, when the jus in bello precision brought about by computers makes the costs of going to war appear less destructive than they might have been over the past hundred years? What will restrain an Artificial Intelligence if it determines that acting first has inherent advantages?

When writing about World War I, Barry Hunt described the opposed systems of alliances as “propelled by a grim self-induced logic.” We should be wary of self-induced logic in warfare. There is a moral imperative here. When it comes to warfare, humans must continue to bound and decide the why and the when, even as computers are increasingly engaged in the how.

But it isn’t just the means of warfare, the way we conduct warfare, that is being challenged at the moment. In the wake of World War II, governments were the primary sponsors of basic research. Today, there are seven very large companies that are the leaders in Artificial Intelligence. All of them are headquartered in America or China. More high-risk, long-term research in Artificial Intelligence and Machine Learning is being funded by companies and their wealthy owners than by governments.

And there is certainly tension when the positions of a private company and a government are not aligned. Over the past year, technology companies have been heavily criticized for the use and sale of personal information and their obligation to police their platforms. Companies are responsible primarily to their shareholders, but also to some extent to their customers and employees. And in June 2018, after employee protests, Google CEO Sundar Pichai published a set of Artificial Intelligence principles. Google, he announced, would not develop Artificial Intelligence for use in weapons.

Google is just one company, though a very large one. And, in our society, companies are collections of people who can choose how they will use their money and their talents. But when a handful of large companies control the power of Artificial Intelligence, it raises questions about what entities will make decisions about its application and its impact on our lives in the United States and around the world. We may be living in a time when power is shifting again, not toward popes or feudal lords, but to companies who control tools that learn and act in ways that we are only beginning to understand.

Hedley Bull once described the system of sovereign states as an “anarchical society.” Anarchical, that is to say, but still a society and consequently guided by some rules and norms of behavior. We should be cautious about undermining that society and the legitimacy of nation-states.

Almost fifty years ago, on July 20, 1969, Neil Armstrong stepped onto the surface of the moon. In the generation since, we have all witnessed a profound revolution largely enabled by the power of computing. And yet, even greater change may be coming. There are children alive today who, fifty years from now, may think that things changed quite slowly for the generation after 1969, when compared to the pace of change they will have navigated in their lives.

I hope, from time to time, they pause, and think carefully about the moral choices they will make.

Heather Wilson is the twenty-fourth Secretary of the Air Force.[10]

Image: The sun sets behind an Australian F-35A Lightning II aircraft at Luke Air Force Base, Ariz., June 27, 2018. The first Australian F-35 arrived at Luke in December 2014. Currently six Australian F-35s are assigned to the 61st Fighter Squadron, where their pilots train alongside U.S. Air Force pilots. (U.S. Air Force photo by Staff Sgt. Jensen Stidham). Flickr / U.S. Department of Defense

[1] St. Thomas Aquinas, Summa Theologica, Secunda Secundae, Quaestio XL (De Bello), quoted in John Epstein, The Catholic Tradition of the Law of Nations (London: Burns, Oates and Washbourne, 1935), p. 83.

[2] James Turner Johnson, Just War Tradition and the Restraint of War (Princeton: Princeton University Press, 1981), p. 151.

[3] See Daoud L. Khairallah, Insurrection Under International Law (Beirut: Lebanese University, 1973), p. 98.

[4] G.I.A.D. Draper, "The Implementation and Enforcement of the Geneva Conventions of 1949 and of the Two Additional Protocols of 1978," Recueil des cours 164 (1979-III), p. 10.

[5] Donald L. Miller, Masters of the Air (New York: Simon & Schuster, 2006), p. 2.

[6] Ian Hogarth, "AI Nationalism," www.ianhogarth.com/blog/2018/6/13/ai-nationalism, accessed June 21, 2018, p. 3.

[7] Barry D. Hunt, War Aims and Strategic Policy in the Great War, 1914-1918 (London: Croom Helm, 1977), p. 9.

[8] Hogarth, "AI Nationalism," p. 9.

[9] Hedley Bull, The Anarchical Society (New York: Columbia University Press, 1977).

[10] This article is based on a speech originally given in June 2018 as part of a lecture series, “The Culture and Consequences of Computers” sponsored by the Dakota State University Classics Institute and its Director, Joseph Bottum.