Killer Robots Are Coming to the Battlefield
Rapid technological change has brought myriad new sources of uncertainty and variability to the modern battlefield, and to public discourse. For millennia, militaries have integrated emerging technologies into ever more practical and impactful weapons. The proliferation of autonomous weapons systems (AWS)—often (mis)labeled ‘killer robots’—is a distinctly modern concern.
AWS promise to augment battlefield decision-making, be low-cost and scalable, reduce collateral damage, and better protect service personnel and civilians. At the same time, these systems have immense potential to undermine international security and stability. A key question for governments is whether AWS can be developed and deployed ethically.
It’s generally agreed that an algorithm must not be in full control of decisions to kill or harm humans, regardless of the weapons involved. Following United Nations General Assembly discussions in November 2023, the UN’s First Committee (Disarmament and International Security) heard that ‘even if an algorithm can determine what’s legal under international humanitarian law (IHL), it can never determine what’s ethical’. After 11 votes on provisions, the committee approved a draft resolution on AWS. On 22 December, the UN adopted Resolution 78/241 by 152 votes in favor to four against (Belarus, India, Mali, and the Russian Federation), with 11 abstentions (China, North Korea, Iran, Israel, Madagascar, Niger, Saudi Arabia, South Sudan, Syria, Türkiye, and the United Arab Emirates).
The resolution’s adoption affirms that the UN Charter, IHL, and international human rights law apply to AWS matters.
During other multilateral discussions, including on the UN Convention on Certain Conventional Weapons, Australia and 10 other countries, including the US, the UK, and Russia, opposed a pre-emptive ban on lethal AWS. Twenty-nine states support a ban, while China has maintained a deliberately ambiguous stance, supporting a ban on usage but not on development. These positions reflect differing beliefs about the ability of AWS to uphold the IHL principles of discrimination, necessity, and proportionality: a system must be able to differentiate non-combatants from combatants, use minimal force in achieving military objectives, and weigh the costs and benefits of a particular action against alternatives.
Australia—alongside Canada, Japan, South Korea, the UK, and the US—endorses the lethal use of AWS provided such systems are not designed and developed to be deployed against civilian populations, or to operate without meaningful human intervention. Adherence to these rules or limits must be guaranteed if Australia and its key allies and partners are to assure the international community that AWS can be sufficiently anticipated and controlled in conflict.
Ongoing global discussions have explored whether AWS are fundamentally incompatible with IHL or, at the other end of the spectrum, whether they represent a ‘moral obligation’ for military use. Regardless of position, the prospect of an outright ban on AWS is growing increasingly unrealistic, as many militaries around the world are already using AWS to achieve battlefield objectives and strategic advantage. In March 2020, for instance, autonomous Kargu-2 rotary-wing attack drones were allegedly used to ‘hunt’ retreating Haftar-affiliated soldiers during the Libyan civil war. More recently in Ukraine, autonomous Saker Scout quadcopters were reportedly used in limited strikes on Russian forces without human oversight.
With autonomous capabilities positioned as a central component of Pillar 2 of the AUKUS agreement, there is a clear need for Australia to develop a dedicated framework guiding the ethical development of AI in a defense context. As a recent RAND report identified, greater alignment between AUKUS partners in their AI policies is key to ensuring the responsible use of military AI. While there’s general alignment among the partners on the ethical principles outlined in their respective AI strategies (such as a focus on reliability, accountability, explainability, and human control), the UK and Australia differ from the US in their views on a wider range of principles and in their designation of system responsibility. Ambiguity surrounding the implementation of these principles makes coordination more difficult and may undermine the partners’ ability to meet shared strategic objectives.
Australia has an opportunity to play a leading role in discussions on the development and deployment of future AWS capabilities. The upcoming international dialogue to be convened by the US on the responsible use of military AI and autonomy provides a timely forum for states to build consensus on new rules and norms. Prospective developments could then be shared at the second expert meeting on the legal review of AWS, to be hosted by Defence’s Directorate of Operations and International Law in Sydney from 16 to 18 April 2024. Whereas the previous expert meeting concluded that improved transparency and information-sharing were key to universalizing legal reviews and building trust among states, the upcoming meeting aims to formalize a list of guiding practices to assist states in their national AWS legal reviews.
In grappling with the ethical concerns surrounding the development and deployment of AWS, Australia must first establish its own dedicated defense policy for AI-enabled capabilities, rather than relying on ethical principles developed for civilian AI. This, in tandem with continued discussions between AUKUS partners to remove ambiguity surrounding AI principles, greater participation in multilateral discussions, and facilitation of expert groups, can help ensure Australia’s military AI capabilities remain responsible and operate in accordance with international obligations.
Australia can and should continue to advocate for the next wave of AWS. By routinely revising and strengthening governance frameworks as AWS technologies move along the autonomy spectrum—becoming semi- and fully autonomous—Australia can help shape an informed and mature debate around the future of AWS-enabled warfare. This is important because the non-binding nature of UN resolutions will do little to dissuade authoritarian states from misusing AWS in war.
To guard against this distinct possibility, Australian officials must work proactively in multilateral institutions and with civil society—if only to establish rules for a world in which asymmetric AWS capabilities could dominate the threat landscape.
Samara Paradine is a coordinator in ASPI’s Professional Development Centre and Marcus Schultz is a researcher at ASPI.
This article was first published by The Australian Strategic Policy Institute.