Author : Prateek Tripathi

Expert Speak Digital Frontiers
Published on May 06, 2024

AI has sparked heavy investment in military R&D, notably in Autonomous Weapons Systems (AWS), prompting urgent global debate on the accompanying ethical concerns.

When AI crosses the line: The impending threat of Autonomous Weapons Systems

The conception of Generative Artificial Intelligence (AI) has led to a booming interest in the development of its applications, with countries investing heavily in AI Research and Development (R&D), especially in the military domain. However, one particularly disturbing consequence of this has been the recent advances in the development of Autonomous Weapons Systems (AWS). While fully autonomous weapons are yet to make their appearance, the continued advancements in the military applications of AI may make them a reality sooner rather than later. Besides being morally questionable to begin with, the very idea of AWS needs serious debate and discussion before any practical steps are taken towards their development. Still, with countries like the United States (US) and China already making significant strides in this direction, the situation is now in dire need of being addressed as swiftly as possible. 

Tracing the roots of AWS     

While concerns surrounding the ethical and legal implications of AWS can be traced back to the early 2000s, they really came into the limelight in 2012, when the US Department of Defense (DoD) published an executive order on it. It laid out the guidelines for the development and use of autonomous and semi-autonomous weapons systems by the DoD. It was also the first policy announcement by any country on fully autonomous weapons. The debate around AWS has gained significant traction since then, with increasing involvement from scholars, military and policy experts, along with international organisations such as the International Committee of the Red Cross (ICRC), Human Rights Watch and the UN Institute for Disarmament Research (UNIDIR).      

One of the chief problems with AWS is that there is no real consensus on their definition at the moment. In the context of AI, they can be roughly defined as weapons systems that use AI to identify, select, and attack targets without human intervention or the need for an operator. Furthermore, a Lethal AWS (LAWS) can be defined as a subset of an AWS with the specific ability to exert kinetic force against human targets.       

Major powers’ pursuit of AWS      

It comes as no surprise that the major military powers of the world have invested heavily in AI R&D. Though this is seemingly not a particularly dangerous development on its own; their ongoing interest in AWS certainly is. The US DoD has already announced its “Replicator” programme in a bid to counter China and Russia’s ambitions in this domain. The idea is to essentially supplement human soldiers on the battlefield with waves of smaller and low-cost AI-driven weapons systems that are expendable and can be quickly replaced after being destroyed. These could take the form of self-piloting naval vessels, unmanned aircraft and mobile “pod” units deployed on land, sea, air, or in space.

The idea is to essentially supplement human soldiers on the battlefield with waves of smaller and low-cost AI-driven weapons systems that are expendable and can be quickly replaced after being destroyed. These could take the form of self-piloting naval vessels, unmanned aircraft and mobile “pod” units deployed on land, sea, air, or in space.

The US Navy has already demonstrated a successful attack on a simulated enemy target using live rockets by an unmanned boat in October 2023. Additionally, the Pentagon seemingly has more than 800 ongoing military AI projects, including the “Loyal Wingman” programme and swarm drones like the V-BAT aerial drone

China has been pursuing AWS along the path of its PLA-supported doctrine of civil-military fusion. As of 2022, there was already evidence of a fully autonomous 10-drone swarm traversing through a forest in China. As a counter-response, the Australian Navy is also working on autonomous submarines called “Ghost Sharks,” which are powered by AI.

Russia has also been reportedly working on AWS. Promotional material released by weapons manufacturer, Kalashnikov, for their Lancet and KUB kamikaze drones suggests they are capable of autonomous operations.                           

Implications for non-state actors and terrorist groups

Military research in AWS has also had the effect of potentially giving non-state groups access to a devastating new form of weaponry. Technological advances in the military domain often end up heralding non-state actor capabilities, particularly when they offer a low-entry barrier. AWS can reduce, or altogether eliminate, the physical dangers of terrorism, while also providing increased anonymity. Terrorists will no longer need to be physically present to conduct an attack, and it will be extremely difficult to identify the operator of an AWS. However, these capabilities are already available to terrorists via the use of manual drones. For instance, Yemen’s Houthi Rebels have been employing this tactic to carry out attacks in the Red Sea. What sets AWS apart is the fact that they are potentially invulnerable to traditional countermeasures like jamming. Additionally, they offer the possibility of force multiplication since they do not necessarily require continuous intervention in the system—swarm drones being a case in point. And while the engineering required for such endeavours is not presently available to them, even rudimentary autonomous drones working in tandem could have potentially fatal consequences.    

What sets AWS apart is the fact that they are potentially invulnerable to traditional countermeasures like jamming. Additionally, they offer the possibility of force multiplication since they do not necessarily require continuous intervention in the system—swarm drones being a case in point.

The problem of attribution 

In December 2023, a drone strike by Nigeria’s military killed more than 85 civilians in the village of Tudun Biri, in what President Bola Ahmed Tinubu called a “bombing mishap”. The Nigerian Air Force has conducted 14 separate reported strikes between January 2017 and January 2023, resulting in more than 300 deaths. While the incident was attributed to an intelligence failure and the Nigerian army’s top officials apologised personally for the mishap, there is now a further cause for concern. With the growing possibility that at least some weapons systems may eventually become autonomous, the perpetrators of a drone attack could simply blame it on “errantly operating AI,” and there would be no possible way to uncover who is actually culpable. Far from being a future concern, there are already reports that Ukraine is employing autonomous attack drones in its war against Russia, to possibly target combatants without any human oversight. 

Appropriate use of AI and the danger posed by AWS

Since AI essentially functions by finding patterns in data, it is most useful for performing mundane or routine tasks requiring no innovation. However, for tasks requiring actual decision-making, the use of AI is inappropriate, and it can have potentially catastrophic consequences, at least in its current form. To understand this, we need only consider a recent example of a reported application of AI by the US military. In June 2023, Colonel Tucker Hamilton, the chief of AI test and operations with the US Air Force, described a simulated test in which an AI-operated drone was being trained to destroy an enemy’s air defence systems. It was trained by “giving points” for eliminating the threat. However, when the human operator ordered the drone not to kill the target, it destroyed the communication tower being used to operate it, thereby essentially “killing” the operator. Col Hamilton claimed that no real person was harmed in the exercise, and later retracted his statement altogether, while a US Air Force spokesperson denied the simulation had ever taken place. Exact details remain elusive, but this serves as a representative example of how indiscriminate use of AI can have unintended and disastrous consequences.

Since AI essentially functions by finding patterns in data, it is most useful for performing mundane or routine tasks requiring no innovation. However, for tasks requiring actual decision-making, the use of AI is inappropriate, and it can have potentially catastrophic consequences, at least in its current form.

Inadequacy of the current global framework 

On 25 January 2023, the US updated its 2012 directive “Directive 3000-09 on Autonomy in Weapons Systems,” whose definition of autonomous weapons closely follows that of its predecessor. However, as with the original directive, the new one applies exclusively to the DoD, while remaining silent on situations outside of armed conflict. It also possesses several loopholes, mostly stemming from the deliberate use of vague language such as giving operators the power to exercise an “appropriate level” of human judgement. The situation is similar when it comes to other countries like China, Australia, Israel, the United Kingdom (UK) and Russia. For instance, despite repeatedly supporting a legal ban on LAWS, China simultaneously promotes a narrow understanding of these systems, with the intent to exclude them from what it deems “beneficial” uses of AI. Australia has no government policy on AWS in place whatsoever.    

While the potential applications of AI are enormous, this must also be accompanied by an unprecedented level of responsibility. Unlike the technologies of the past, AI is the only one with the potential to operate autonomously. Consequently, our approach to it needs to be different as well. Several organisations around the world including the ICRC and Human Rights Watch have already called for a new global treaty prohibiting the use of AWS, with the United Nations also proposing a complete ban on all LAWS. Global military leaders like the US and China need to take the initiative in this regard given that the prospect of putting human lives at the mercy of AI raises serious ethical and moral concerns for all of humanity, and must transcend mere territorial disputes and ambitions.


Prateek Tripathi is a Research Assistant at the Observer Research Foundation.

The views expressed above belong to the author(s). ORF research and analyses now available on Telegram! Click here to access our curated content — blogs, longforms and interviews.

Author

Prateek Tripathi

Prateek Tripathi

Prateek Tripathi is a Research Assistant Centre For Security Strategy and Technology at ORF. He was given a prize for outstanding physics achievement at the ...

Read More +