Geneva: In 1991, during Operation Desert Storm in the Gulf War, a Patriot missile defence system installed by the US at its Saudi base of Dhahran failed to intercept incoming Scuds, killing 28 soldiers and wounding nearly a hundred. It was easily the most devastating attack against the US by Saddam Hussein’s forces. The US General Accounting Office’s (GAO) investigation would later reveal that the defence system – a fully automated platform – failed because of a “software problem in the weapon’s control computer”. The longer the Patriot system was kept running, the GAO found, the less accurately it tracked the location of incoming missiles. The Dhahran attack was the first and most significant reminder to the international community of the challenges posed by automated weapon systems.

Technology has advanced considerably since then, and so has countries’ reliance on Lethal Autonomous Weapons Systems (LAWS). The prospect of robots and weapons platforms relying on artificial intelligence (AI) on the frontlines of war is no longer confined to science fiction. The US recently hinted it was considering the use of “new undersea drones in multiple sizes and diverse payloads that can operate in shallow waters, where manned submarines cannot.” Secretary of Defence Ashton Carter, who made this assertion, was speaking in the Philippines after a visit to the USS Stennis, currently sailing in the South China Sea. The Pentagon has sought $12-15 billion for ‘blue sky’ research projects aimed at developing “intelligent machines”.

The US is not alone in the development of autonomous machines. Russia is reported to have deployed robots – armed with grenade launchers and Kalashnikovs – during its recent military intervention in Syria. With their movements “traced by drones” and robots opening fire, one report suggests, “the (Syrian) rebels did not have a chance”. China too is investing heavily in automated weapons systems and platforms.

Robots are only one, albeit highly visible, category of LAWS. ‘Automated’ systems such as the Patriot, which differ from ‘autonomous’ ones in that they lack AI capabilities, have long been in operation. With the advancement of cyber weapons as well, it is reasonable to assume that the human component in conventional and sub-conventional warfare today has been greatly reduced.

Faced with the imminent deployment of LAWS, an informal group of experts met in Geneva last week for a “general exchange of views”. The group was convened by the UN office in Geneva and involved the contracting states to the Convention on Certain Conventional Weapons (CCW). Among the highly complex questions discussed by the experts, some foundational issues stood out:
  1. How is autonomy being developed and utilised in weapon systems for the maritime, aerial, and terrestrial spheres?
  2. What is “meaningful human control” of a weapon system?
  3. How many states are currently implementing legal weapons reviews?
  4. Are there practical measures that states can take to ensure that the deployment of LAWS is in compliance with international humanitarian law?
  5. Is there a risk that LAWS affect existing “power balances”?
These are early days, but divisions among nation-states have become quite pronounced over the course of such meetings. The “informal” meeting in Geneva was the third of its kind, and India has been an active participant in the preceding discussions. India’s statement at the meeting, delivered by D.B. Venkatesh Varma, the Permanent Representative to the Conference on Disarmament, highlighted the need for “increased systemic controls” on international armed conflict, in so far as it relates to LAWS. India hinted the Convention should be “strengthened” to apply to autonomous weapons systems. As for definitions of ‘autonomy’ and ‘meaningful human control’, India suggested that it may not be “prudent” to articulate a restrictive or expansive statement just yet.

The Indian line on LAWS appears to be driven by both geopolitical and technological realities. With the South China Sea being considered as a site for the deployment of lethal robots, the region’s stability is likely to come under increased stress. India has made it clear that LAWS must “not encourage the increased resort to military force in the expectation of lesser casualties”. More importantly, the widening technology gap between countries creates little incentive for the US or China, who have pumped billions into robotics weaponisation, to frame a restrictive regime for LAWS that could level the playing field. New Delhi faces the dual challenge of developing AI systems as well as circumventing current export control regimes that limit the transfer of technology. In particular, the role of LAWS in perpetuating low-intensity conflict in South Asia should raise concerns for India, given its current conventional superiority in the region.

Mindful of the technology deficit, therefore, India seems to suggest that international law should constrain advancements in the use of LAWS. This is a strategically sound position, but as LAWS are deployed across the globe, state practice will lend legitimacy to their use, which New Delhi should factor into its long-term calculus.

Meanwhile, states which see LAWS as central to their evolving security doctrines have supported the legality of their use. France, while reserving judgment on whether the use of LAWS is prohibited “in some circumstances”, has declared that “the development and use of lethal autonomous weapons systems cannot be regarded as intrinsically contrary to international humanitarian law.” Not surprisingly, the United States has sought a “non-legally binding outcome document” on policy and technological best practices relating to LAWS. As with cyber weapons, the US strategy on LAWS is clear: facilitate the harmonisation of international law with US guidelines (such as Department of Defense Directive 3000.09), while politically constraining competitors like China and Russia in order to maintain its technological edge.

For the debate on LAWS to move forward, some basic questions will have to be addressed. India, like several other countries, has called for a “mapping of autonomy” to distinguish between “oversight, review, control or judgement” of a weapons system. The International Committee of the Red Cross (ICRC) defines an autonomous weapon system as “one that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralise, damage or destroy) targets without human intervention.” This definition has been contested by many states, who argue it would include processes and platforms used in any 21st-century weapons system.
For instance, the US has asserted that LAWS should apply only to “emerging technologies” (namely, ones that rely on AI) and not to cyber weapons, which are already in advanced stages of development and deployment. The ICRC definition presents a sliding-scale problem: even the Gulf War-era Patriot systems could fall within its ambit.

An international regime to regulate lethal autonomous weapons is some years away. Meanwhile, the Great Powers have already indicated their willingness to deploy them on the battlefield. The CCW signatories are faced with a dilemma: international law is likely to be outpaced by technology, but waiting until highly sophisticated weapons replace human intervention would leave little room to guide state behaviour. It would be no exaggeration to state that the absence of clear rules of engagement may lead to catastrophic incidents involving robots on the high seas or in outer space.

This commentary originally appeared in The Wire.
The views expressed above belong to the author(s).