The Buzz

Pentagon Problems: What Happens When Military Software Becomes Too Advanced?

Psibernetix, it seems, has built an artificial intelligence smarter than a fighter pilot. As I mentioned here at the beginning of the month, the company hatched at the University of Cincinnati has developed software for a Raspberry Pi machine that has defeated at least one retired USAF fighter pilot in simulated combat. Yes, there’s something inherently scary about that. But before the RAAF and every other air force get too excited about drones that could turn on us, we might consider what “inelegant messes” dwell at the heart of the software suites that enable their capabilities. Samuel Arbesman, scientist-in-residence at Lux Capital and a senior fellow at the University of Colorado, argues in his new book Overcomplicated: Technology at the Limits of Comprehension (Current, 2016) that engineers’ endlessly adding to software without understanding the underlying code is “making the world indecipherable.” The question is what can be done about it.

As Dan Goodin recently wrote for Ars Technica, this is true even in cyber. The malware platform GrayFish, “the crowning achievement of the Equation Group” at the NSA, “is so complex that Kaspersky researchers still understand only a fraction of its capabilities and inner workings.” Think about that—Eugene Kaspersky’s people, who make up one of the biggest cyber security outfits on the planet, have the code—all the code—and they still can’t figure out what it does. That won’t always be the case for stolen cyber weapons—and that’s a whole other talk show—but the failure of comprehension is ominous.

Inquiry in the field of software complexity is difficult, so Arbesman’s book is in places a not-fully-satisfying collection of anecdotes, interspersed with occasional references to promising research. He notes how Mark Flood and Oliver Goodenough of the US Treasury’s Office of Financial Research have written of how contracts are like code; representing them as automata, they suggest, could render legal analysis rigorous. Maybe, but as Keith Crocker and Kenneth Reynolds wrote back in 1993, aren’t contracts sometimes efficiently incomplete? At times, it’s actually better to agree to figure stuff out later, as the need arises. Their case study was “An Empirical Analysis of Air Force Engine Procurement” (RAND Journal of Economics, vol. 24, no. 1), so the argument may still have some relevance to defense contracting.
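To see what Flood and Goodenough are getting at, a contract can be sketched as a finite-state machine: the states are stages of the deal, and contractual events drive transitions between them. The toy supply contract below is invented for illustration (the state and event names are mine, not drawn from their paper); note that any event the contract doesn’t anticipate drops the parties into a catch-all “dispute” state—one way of picturing the incompleteness Crocker and Reynolds describe.

```python
# A toy contract rendered as a finite-state machine, in the spirit of
# Flood and Goodenough's contracts-as-automata idea. All terms here are
# invented for illustration.

# (current_state, event) -> next_state
TRANSITIONS = {
    ("signed", "goods_delivered"): "awaiting_payment",
    ("signed", "delivery_missed"): "breach",
    ("awaiting_payment", "payment_made"): "complete",
    ("awaiting_payment", "payment_missed"): "breach",
}

def run_contract(events, state="signed"):
    """Replay a sequence of contractual events against the automaton.

    Any event with no defined transition from the current state lands
    in 'dispute'—the automaton's version of an incomplete contract.
    """
    for event in events:
        state = TRANSITIONS.get((state, event), "dispute")
    return state

print(run_contract(["goods_delivered", "payment_made"]))   # complete
print(run_contract(["goods_delivered", "payment_missed"])) # breach
print(run_contract(["force_majeure"]))                     # dispute
```

Once a contract is in this form, analysis becomes mechanical—you can enumerate every reachable state and check that no sequence of events strands the parties somewhere unintended. That is the rigor Flood and Goodenough are after; the “dispute” fallback is where the efficiently incomplete parts live.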

That points to the human element in these messes. Kluges are everywhere, but they’re not an unalloyed good. Grafting TurboTax atop the disastrous US tax code cut the cost of compliance, but did little to improve the over-complex set of incentives and penalties that even the accountants can’t fully grasp. As my colleague Steve Grundman once wrote, citing Irving Janis and Leon Mann’s Decision Making (Free Press, 1977), efforts to induce intricate resource allocation planning fail because “people simply do not have the wits to maximize.” Their subtitle, A Psychological Analysis of Conflict, Choice, and Commitment, says much about the problem.

MIT PhD student and USAF officer Christopher Berardi recently described the military’s latest software projects to me as systems “beyond human comprehension.” They’re so big and complex that they can’t be reliably secured, only patched after a problem emerges. He senses, I think, a parallel in organizational problems. In his essay on “The Importance of Failure and the Incomplete Leader,” he argues that

    The complete leader must have the intellectual capacity to make sense of unfathomably complex issues, the imaginative powers to craft a vision of the future that generates enthusiasm, the operational know-how to translate strategy into concrete plans, [and] the interpersonal skills to foster commitment to undertakings that could cost people’s jobs should they fail.