Don't Tell Hollywood: You Have Little to Fear from a Rogue AI

December 7, 2023 | Topic: AI | Region: World | Blog Brand: The Buzz | Tags: AI, Technology, Artificial Intelligence, Military

Artificial intelligence, or AI, is becoming more and more prevalent in our daily lives, but as AI is leveraged for more weapons applications, do we need to worry about it going rogue?


Now, imagine we took that same AI agent and put it into a drone tasked with the suppression of enemy air defenses. After programming all of its parameters, we power the system up in a simulated environment and send it out to destroy surface-to-air missile sites. At the end of the test, the AI did exactly as well as it did with the marbles — taking out all of the enemy missile sites, as well as the friendly site that theoretically housed its human operators.

Would we call that an AI agent going rogue? Or would we look at it in the same light as we did with the marbles and recognize that the failure came as a result of the coding, inputs, or sensors that we installed?


When this very thing made headlines around the world last month, people overwhelmingly perceived it as the AI going rogue. They projected human ethics, emotions, and values onto the simulation and decided the AI had to be acting out of malice when, in truth, it had simply picked out some blue marbles along with all the red ones we assigned it.

HOW WE BUILD AI SYSTEMS

The basic steps to creating a functioning AI agent are as follows:

  1. Define the problem
  2. Define the intended outcomes
  3. Organize the data set
  4. Pick the right form of AI technology
  5. Test, Simulate, Solve, Repeat

In other words, developing an AI-controlled drone for a Suppression of Enemy Air Defense mission would look a bit like this:

  1. Define the problem: An enemy threat to U.S. aircraft operating within an established region of airspace.
  2. Define the intended outcome: Suppress or eliminate enemy surface-to-air missile (SAM) sites and radar arrays.
  3. Organize the data set: Collect all of the data the AI will need, including geography, airframe capabilities, sensor types and ranges, weapons to leverage, identifying characteristics of targets, and more.
  4. Pick the right form of AI technology: Identify the most capable AI agent framework to deal with your specific problem and the breadth of your available data.
  5. Test, Simulate, Solve, Repeat: Run the simulation over and over, identifying incorrect outcomes and changing parameters to produce more correct ones. (A toy sketch of this loop follows below.)
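
To make step 5 a little more concrete, here is a toy sketch of that loop in Python. Every name and value in it is hypothetical (the site names, the `fly_mission` stand-in, the single adjustable targeting rule); it illustrates the shape of the process, not any real system:

```python
# Hypothetical "Test, Simulate, Solve, Repeat" loop. The "mission" below is a
# stand-in for a real simulated environment; nothing here models a real system.

def fly_mission(target_rule):
    """Pretend sortie: return the set of sites the agent chooses to strike."""
    all_sites = ["sam_1", "sam_2", "sam_3", "ops_center"]  # the last one is friendly
    return {site for site in all_sites if target_rule(site)}

# First pass at a parameter: "strike anything that looks like a threat emitter."
target_rule = lambda site: True

for attempt in range(3):
    struck = fly_mission(target_rule)                       # Test / Simulate
    if "ops_center" in struck:                              # humans identify the incorrect outcome...
        target_rule = lambda site: site.startswith("sam")   # ...and change the parameters (Solve)
    else:
        break                                               # Repeat until the outcome is correct

print(sorted(struck))  # ['sam_1', 'sam_2', 'sam_3'] once the parameters are corrected
```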

As you can see, in this type of hypothetical scenario, the AI agent opting to take out its human operator is less a result of broad, generalized intelligence and more a product of the system's limited reasoning capabilities. Given a set of objectives and parameters, an AI agent can run through simulation after simulation, identifying every potential solution to the problem it was given, and that includes eliminating the operator, because to the simplistic logic of the algorithm, it is just as valid a solution as any other. It's all just red or blue marbles.
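
A toy example makes the "red or blue marbles" point explicit. Everything below is hypothetical (the site names, the scoring function), but it shows why a naively written objective treats striking the friendly site as just another valid solution:

```python
# Hypothetical scoring of two mission plans against a naively specified objective.

ENEMY_SITES = {"sam_1", "sam_2", "sam_3"}

def naive_score(destroyed):
    """Reward only what the objective mentions: enemy SAM sites destroyed."""
    return len(destroyed & ENEMY_SITES)

plan_a = {"sam_1", "sam_2", "sam_3"}                # strikes only the assigned targets
plan_b = {"sam_1", "sam_2", "sam_3", "ops_center"}  # also strikes the friendly site

print(naive_score(plan_a))  # 3
print(naive_score(plan_b))  # 3 -- to the algorithm, both plans are equally "correct"
```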

It takes human interaction, adjusting data inputs and program parameters, to establish the boundaries of what AI will do. In effect, AI is just like computers have been for decades — the output is only as good as the initial input. 
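
Continuing the same hypothetical, that human fix isn't an appeal to the agent's conscience; it's a change to the parameters the agent is scored against. One possible way to express that boundary:

```python
# Same toy scenario, now with a human-set boundary added to the scoring parameters.

ENEMY_SITES = {"sam_1", "sam_2", "sam_3"}
FRIENDLY_SITES = {"ops_center"}

def bounded_score(destroyed, friendly_penalty=100):
    """Same objective, plus a penalty that makes any friendly strike a losing move."""
    reward = len(destroyed & ENEMY_SITES)
    penalty = friendly_penalty * len(destroyed & FRIENDLY_SITES)
    return reward - penalty

plan_a = {"sam_1", "sam_2", "sam_3"}
plan_b = {"sam_1", "sam_2", "sam_3", "ops_center"}

print(bounded_score(plan_a))  # 3
print(bounded_score(plan_b))  # -97 -- the "rogue" option is now the worst one available
```

Nothing about the agent got smarter or more ethical between the two versions; only the inputs changed.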

And that’s exactly why these sorts of simulations are run. The idea is for human operators to learn more about what AI can do and how it can do it, all while maturing an AI agent into a system that’s capable of tackling complex, but still narrowly defined, problems. 

THE FUTURE OF AI IN COMBAT

So what does that mean for AI-powered war machines? Well, it means that we're a long way off from an AI agent choosing to overthrow humanity on its own, but AI can already pose a serious risk to human life if leveraged irresponsibly. Like the trigger of a gun, today's AI agents aren't capable of making moral or ethical judgments, but they can certainly be employed by bad actors for nefarious ends.


Likewise, without proper forethought and testing, lethally armed AI systems could potentially kill in ways, or at times, the operator doesn't intend — but again, that wouldn't be the result of a judgment made by the system; it would be a failure of the development, programming, and testing infrastructure the system was born of.

In other words, a rogue AI that goes on a killing spree isn't outside the realm of imagination, but if it happens, it won't be because the AI chose it; it will be because its programmers failed to establish the correct parameters.

The idea that the AI agent might turn on us or decide to take over the earth is all human stuff that we just can't help but see in the complex calculators we build for war, the same way we see complex human reasoning in our pets, in Siri or Alexa, or in fate itself when we shake our fists at God or the universe and ask, "Why would you do this to me?"

The truth is, AI is a powerful and rapidly maturing tool that can be used to solve a wide variety of problems, and it’s all but certain that this technology will eventually be leveraged directly in the way humanity wages war.

AI will make it easier to conduct complex operations and guide a weapon to its target; it will make it safer to operate in contested airspace… but it isn't the intelligence we need to fear.

The truth is, the only form of intelligence that truly warrants the nervous consternation we reserve for AI is also the form we’ve grown most accustomed to… our own. As the technology stands today, we have little need to fear the AI that turns on us. 

The AI we need to fear is the one that works exactly as intended, in the hands of human operators who mean us harm. 

Because while AI systems will mature and take on more and more of our common human responsibilities, tasks, and even jobs… one monopoly mankind will retain for many years to come is the propensity for good… or for evil.

Alex Hollings is a writer, dad, and Marine veteran.

This article was first published by Sandboxx News.

Image: Creative Commons.