Published on Dec 16, 2017
Why autonomous weapons should not be banned

While it is legitimate to question the ethics and rules surrounding autonomous weapons, the idea that their development will necessarily usher in an apocalyptic future may not be accurate

Ever since the debate on lethal autonomous weapon systems (LAWS) first began circa 2013, polarized opinions and doomsday prophecies have hindered a more nuanced analysis of the issue.

Four key arguments form the mainstay of a recent piece by Matthan calling for a ban. First, that the development of autonomous weapons will reduce combat fatalities for the aggressor, driving militaries to engage in conflict more frequently. Second, that these weapons will proliferate rapidly, ultimately falling into the hands of authoritarian regimes. Third, that in the past, the international community has successfully banned devastating weapons, such as biological ones. Finally, that they will kick-start an AI arms race.

These are real concerns, and they were key points of discussion at the UN Group of Governmental Experts (GGE) that met last month in Geneva. The answer, however, does not lie in a ban. At the GGE, the video that Matthan refers to in his piece was criticized by AI experts for sensationalism and for not accurately reflecting technological realities. While it is legitimate to question the ethics and rules surrounding autonomous weapons, the idea that their development will necessarily usher in an apocalyptic future may not be accurate.

For one thing, autonomous weapons by themselves are unlikely to lower the threshold for war. Political, geographical and historical drivers are far more likely to influence a state’s decision to enter into an armed conflict. The threshold for engaging in conflict depends on countries concluding that they can favourably change a certain status quo—whether it is to gain or safeguard territory, resources or political capital. That autonomous weapons would somehow influence this calculation any more than, say, precision-guided missiles or drones is mere speculation. If anything, calls for a pre-emptive ban might hinder the deployment of autonomous weapons in defensive capacities, such as the SGR-A1 sentry gun used by South Korea along its demilitarized zone, or Israel’s semi-autonomous Iron Dome, which intercepts incoming rockets and artillery. These weapons can, in fact, increase the cost of aggression, thereby deterring conflict.

Second, LAWS rely on advancements in AI and machine learning. The argument that a ban might prevent such weapons from falling into the hands of a dictator is unconvincing, because most developments in AI are taking place in the civilian sector, with the potential for “dual-use” military capabilities. Moreover, autonomous weapons are likely to be developed progressively—with autonomy being introduced gradually into various functions of weapon systems, such as mobility, targeting and engagement. It is currently impossible to define which kinds of autonomous weapons need to be banned, given the dearth of functioning prototypes.

Third, comparisons between autonomous weapons and biological, or even nuclear, weapons rely on a false equivalence. The latter, by their very nature, are incapable of distinguishing between combatants and non-combatants, and thus conflict with the well-established international humanitarian law (IHL) principle of distinction. LAWS, on the other hand, given enough technological sophistication and time, can meet the IHL thresholds of distinction and proportionality. Initially, new autonomous weapons are likely to be deployed in areas where civilian presence is minimal or absent, such as the high seas or contested airspaces.

Finally, while the idea of a new arms race is cause for concern, it is undeniable that one has been under way for some time now. The Campaign to Stop Killer Robots reports that at least six states—the US, UK, Russia, China, Israel and South Korea—are already developing and testing autonomous weapons, while another 44, including India, are exploring their potential. That every member of the UN Security Council refused to consider a ban at the GGE is a powerful indication of how (un)successful such a ban is likely to be.

Ultimately, the future of autonomous weapons will turn on questions of strategic value, not morality. Current international debates on LAWS resemble a prisoner’s dilemma between nations with significant technological capabilities and nations that are wary of an imbalance in strategic stability. India, an emerging power, should not fall prey to the insecurity plaguing smaller nations like Pakistan and Cuba, which have been joined by 20 other countries in calling for a ban. A pre-emptive ban is only likely to compound inequity in military capability, with the bigger powers employing these weapons anyway.

Rather than mischaracterizing LAWS as new weapons of mass destruction or harbingers of a dystopian future, it is critical to develop principles and norms to govern their use. With nations unanimously agreeing that they will never deploy weapons that can operate outside human control, conversations need to be steered towards identifying degrees of necessary human control. Consequently, new frameworks of accountability and military necessity require significant consideration.

The focus must necessarily shift from controlling autonomy in weapons to controlling the lethality of their use. Calls for a ban on fully autonomous weapons, before even a functional prototype has been developed, amount to deciding on the right answer before the correct questions have been framed.


This commentary originally appeared in Live Mint.

The views expressed above belong to the author(s).

Contributor

Akhil Deo

Akhil Deo was a Junior Fellow at ORF. His interests include urban governance, sustainable development, civil liberties, cyber governance and the impact of future technologies on ...
