Published on Apr 07, 2021
Accountable Autonomy: When Machines Kill

The advent of artificial intelligence (AI) and the emergence of Lethal Autonomous Weapon Systems (LAWS) have made it necessary to re-contextualise many of the established principles of international humanitarian law (IHL). While AI represents the maturing of critical technologies around data collection, computing power and algorithmic decision-making, the conversation around LAWS has begun to engage deeper issues around the ethics of the conduct of warfare and democratic decision-making itself.

Although the international community has been engaged in debates around the use and regulation of LAWS for almost a decade, discussions remain focused on clarifying fundamental issues such as the characterisation of these weapons and the adequacy of human control. The Group of Governmental Experts (GGE) under the Convention on Certain Conventional Weapons has had only limited success in clarifying these issues. The first set of meetings under the GGE, which concluded in 2018, reaffirmed that international humanitarian law applies to autonomous weapons, that human control must be retained over the use of these weapon systems, and that LAWS should be subject to weapons review processes under Article 36 of Additional Protocol I to the Geneva Conventions.

As Kara Frederick, Research Associate at the Center for a New American Security, pointed out, future deliberations on LAWS, such as the second iteration of the GGE, must be mindful that international diplomatic processes may be falling behind the pace at which these technologies are developing. Many countries insist that conversations, especially those aimed at a legally binding treaty, may be premature given that no fully autonomous weapons currently exist. Others counter that weapons such as Israel's Harpy, which can autonomously select and engage enemy radar installations, are proof that these weapons not only exist but are already being deployed by militaries.

Renata Dwan, Director of the United Nations Institute for Disarmament Research, highlighted the need to clarify exactly what it is that regulations must seek to achieve. Should the focus be only on controlling the lethality of these weapons, or should questions around their safety and predictability take centre stage? These are important questions, and they will certainly arise when weapons with different designs and capabilities interact to produce unpredictable outcomes. Hans-Christian Hagman, Senior Adviser and Head of Strategic Analysis, Ministry for Foreign Affairs, Sweden, who chaired the panel, noted that these issues may be further complicated when AI is integrated with other equally complex technologies such as nanotechnology and synthetic biology.

In light of this uncertain future, will ensuring human control over these weapons be enough? If so, at what stage: design and R&D, policymaking, or deployment? As Dwan further highlighted, it may be important to think of meaningful human control as a spectrum rather than a clear red line, with human involvement ensured at every stage of the weapon development and deployment process. The thought was echoed by Susan Ridge, Major General, Army Legal Services, Ministry of Defence, UK. Ridge maintained that even at the stage of deployment, while machines can discharge some critical functions around targeting and engagement, these decisions must ultimately rest with human operators.

Gilles Carbonnier, Vice President, International Committee of the Red Cross, stressed the need for granularity and contextual analysis in the use of LAWS. Even when these weapons are deployed, he insisted, there must be systems in place to take into account the fluid nature of battlefields and to deactivate these weapon systems when required. For example, an autonomous system may not be able to distinguish between a regular enemy combatant and one who is injured or attempting to surrender. Human operators, therefore, must have the ability to respond instantly to these changing circumstances and deactivate these weapons as necessary.

These discussions must also take into account the fact that, despite the application of IHL principles, many country positions will be determined by pragmatic concerns rather than ethical ones. The perceived effectiveness of LAWS in improving targeting and mobility, and consequently their ability to reduce military casualties, will be a key consideration as nations decide the future of autonomous systems.

Three important questions will be central to the future of autonomous systems. First, relatively few countries comply with the weapons review process under Article 36. How can compliance be improved, and how can measures for accountability be incorporated into these domestic decisions? Second, how will developers of AI systems address the question of bias in the design of these technologies: what cultural contexts and realities inform, for instance, a facial recognition system that autonomously guides a missile to its target in a war zone? And lastly, if the war machines of the future are getting smarter with each passing year, how will an international multilateral process, bureaucratic and slow by its very nature, outsmart them?


This essay originally appeared in the Raisina Dialogue Conference Report 2019.
The views expressed above belong to the author(s).