Setting the guardrails for AI in weapons

Any governance of Artificial Intelligence (AI) needs to consider the deadliest applications of AI.

Authors: Ariel Conn | Ingvild Bode
Published on Jan 30, 2024

This essay is part of the series: AI F4: Facts, Fiction, Fears and Fantasies.


For all the calls for ethical guidelines and new governance of Artificial Intelligence (AI), policymakers and members of industry have shied away from addressing the ethical and legal effects of AI in weapons systems. Some recommendations for AI governance exclude military applications of AI explicitly, such as the European Union’s (EU) Draft AI Act; others do so implicitly by simply not mentioning this use context.

However, we cannot expect the various AI governance initiatives to succeed if they do not address the full scope of challenges associated with AI, and that scope must include its deadliest use: AI in weapons systems.

Who is using AI-enabled weapons?

There is a tendency to treat AI in weapons systems as a topic that belongs entirely in the military domain and is, therefore, not open for discussion outside of military or international diplomatic circles. A follow-on assumption in many of these military discussions is that AI in weapons systems will only ever be used by militaries. Yet non-state actors have been weaponising such systems for years, and militaries and government officials have themselves been the targets of these attacks, including the drone attack against Russian military bases in Syria in 2018 and the attempted drone assassination of the Iraqi Prime Minister, Mustafa Al-Kadhimi, in 2021.

Moreover, international discussions tend to assume that weapons that are developed with or enhanced by AI will only be built by militaries or by military contractors. However, the case of Unmanned Aerial Vehicles (UAVs) already illustrates how the widespread availability of these technologies in civilian spaces triggers new proliferation dynamics and has enabled the repurposing of off-the-shelf UAVs into armed systems by non-state actors.

Meanwhile, growing concern about the proliferation of autonomous weapons systems, and of AI software that can turn small systems like drones into weapons, is raising questions about what this means for potential criminal use and the impact on public safety and security.

AI capabilities

It can be helpful to consider that there is no such thing as AI for civilian uses versus AI for military uses; there is only AI. AI and autonomy can be thought of as capabilities that are added to existing programmes and systems, rather than standalone or distinct technologies.

The very term ‘AI’ represents a variety of technological capabilities that can be applied to countless platforms and software, which may or may not have been originally designed with AI or defence applications in mind. AI can have far-reaching and often unanticipated effects, impacting everything from war to geopolitics (such as the effect AlphaGo had on the development of China’s AI strategy) to the job market to mental health, as in the case of algorithms serving suicidal content to a depressed teenager.

Many military uses of AI (i.e., non-weaponised uses) are similar to civilian uses. Militaries are looking to improve logistical and planning tasks via predictive analysis, use image recognition technology to quickly analyse battlefields, automate repetitive administrative tasks, develop autonomous driving capabilities, improve navigation, and so on. From the standpoint of identifying the AI capabilities that will help large, expensive, bureaucratic militaries function more effectively, it makes sense to consider these military uses of AI.

However, it is AI-enabled weapons that will have the most impact on both militaries and civilians. For now, given the way technology is developing, this is seen most dramatically in the adoption and proliferation of drone technology by civilians and non-state actors, as described above.

Quadcopter drones: An example of the crossover of AI capabilities and options for governance

The weaponisation of commercial quadcopter drones provides an example for considering the full scope of the problem. Police, security forces, militaries, and counter-terrorist organisations have already been dealing with this growing problem for many years. It highlights the importance of treating the weaponisation of AI as a complex problem that goes beyond the military: it also involves the manufacturers of weapons that were never supposed to be used in autonomous modes, the people who may be harmed or killed by such weapons, and the law enforcement officers who must address crimes committed with these weapons while facing greater risk themselves from their use. To look at this in a little more detail:

  • First, we have already seen instances of weapons like handguns and flamethrowers attached to commercial drones. In the United States (US), weaponising a drone is prohibited by the Federal Aviation Administration, but this alone is unlikely to prevent such uses. Gun manufacturers should, therefore, consider this a problem that they also need to be a part of solving, for example, by modifying their products to make them more difficult to mount on a drone. From a governance perspective, this might mean requiring fingerprint ID or other forms of unique and traceable identifiers to pull the trigger.
  • Second, drone manufacturers need to consider that people will try to weaponise their products. They cannot simply say that this is not their problem or that it is too hard to solve. They may need to build new sensors or other components into their drones, such as an interlock that prevents the drone from flying if non-approved attachments are detected (a minimal illustrative sketch of such a check follows this list).
  • Third, laws need to be developed that prohibit weaponising robots and drones that are designed for civilian and commercial use.
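
To make the second point concrete, the sketch below shows, in purely illustrative terms, what such a manufacturer-side interlock might look like in software: a flight controller that refuses to arm when payload telemetry reports an attachment that is not on an approved list. Every name in it (Payload, FlightController, APPROVED_PAYLOAD_IDS) is invented for this example; it does not reflect any real drone firmware or API, and a real interlock would also need to be tamper-resistant at the hardware level.

```python
# Illustrative sketch only: a hypothetical pre-flight interlock in which a
# drone's flight controller refuses to arm when payload telemetry reports an
# attachment that is not on a manufacturer-approved list. All names here are
# invented for this example and do not correspond to any real drone firmware.
from dataclasses import dataclass

# Hypothetical attachment identifiers the manufacturer has certified for this airframe.
APPROVED_PAYLOAD_IDS = {"camera_v2", "lidar_a1", "cargo_hook_std"}


@dataclass
class Payload:
    attachment_id: str   # identifier reported by the attachment's onboard chip
    weight_grams: int    # weight measured by strain sensors on the mount


class FlightController:
    def __init__(self, max_payload_grams: int = 800):
        self.max_payload_grams = max_payload_grams
        self.armed = False

    def preflight_check(self, payloads: list[Payload]) -> bool:
        """Return True only if every detected attachment is approved and within limits."""
        for p in payloads:
            if p.attachment_id not in APPROVED_PAYLOAD_IDS:
                print(f"Refusing to arm: unrecognised attachment '{p.attachment_id}'")
                return False
            if p.weight_grams > self.max_payload_grams:
                print(f"Refusing to arm: '{p.attachment_id}' exceeds the payload weight limit")
                return False
        return True

    def arm(self, payloads: list[Payload]) -> None:
        # The drone only arms (and can therefore fly) if the pre-flight check passes.
        self.armed = self.preflight_check(payloads)


if __name__ == "__main__":
    fc = FlightController()
    fc.arm([Payload("camera_v2", 250)])          # approved payload: arms normally
    print("armed:", fc.armed)
    fc.arm([Payload("unknown_mount_77", 900)])   # unapproved payload: interlock blocks arming
    print("armed:", fc.armed)
```

The point is not this particular check but the design principle it illustrates: manufacturers can build refusal behaviours into their products rather than treating misuse as someone else’s problem.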

This example underscores the importance of viewing the challenges of AI holistically, rather than thinking of AI as a separate, isolated technology. This includes considering its entire lifecycle, its inherent complexities, and the way it will interact with other technologies, as well as with humans and society more broadly. From a global governance standpoint, it is crucial to grasp the multifaceted nature of the issue and to recognise the multitude of stakeholders involved at every stage of that lifecycle.

Learning from cybertechnology

Issues around cybertechnology provide another useful analogy. During the early years of software development and the Internet, engineers chose not to consider how governments and militaries might exploit vulnerabilities against other countries. Because of this inaction, cyber security has become a much harder problem to address. We are on track to make similar mistakes with AI. Developers and other actors involved at the early stages of the lifecycle tend to shirk responsibility, but when they do so and design problematic technologies anyway, the consequences become everyone’s problem.

Indeed, cybertechnology and cyber issues represent the space where military and civilian uses, both beneficial and nefarious, overlap the most.

Conclusion

If discussions around AI governance continue to follow the current trend of separating military uses from other uses, the proliferation risks of autonomous weapons systems (AWS) increase significantly. But if we consider military uses of AI in the same conversations as other concerns about AI, we can address many of these issues more effectively. The weaponisation of AI presents some of the greatest risks of AI use. Precisely because of this, militaries must abide by some of the strictest national and international laws, and military commanders will demand AI-enabled weapons systems that can be trusted to adhere to international law and to minimise risk to their soldiers, not least by minimising the risk that proliferating AI weapons are used against those soldiers.

Although it may seem counterintuitive, including military uses of AI in the global governance of AI will encourage the responsible use and development of all AI.


Ariel Conn is a co-founder of Global Shield and leads the IEEE-SA Research Group on Issues of Autonomy and AI for Defense Systems. 

Ingvild Bode is an Associate Professor at the Centre for War Studies, University of Southern Denmark.

The views expressed above belong to the author(s).
