Author: Abhijit Singh

Expert Speak | Warfare
Published on Sep 17, 2016
Naval missile systems and the limits of artificial intelligence

Artificial Intelligence is considered an indispensable component of new-age naval weapons like hypersonic missiles.

The new buzzword in militaries across the world today is 'artificial intelligence' (AI) — the ability of combat platforms to self-control, self-regulate and self-actuate, using inherent computing and decision-making capabilities. That advanced computing technologies today enable autonomous systems to identify and strike hostile targets is no surprise. What is new is that a fast-deteriorating security environment in the maritime commons has led to growing interest in 'intelligent' naval missiles that promise to revolutionise future maritime combat.

While advancements in remotely operated weapons like drones have been driving superior AI technology, complex questions remain unanswered. Many of them have to do with the logic of AI in defence systems. What, for instance, is the real incentive for military commanders to encourage the development and deployment of autonomous weapons? Is the case for divesting human executive control over weapons systems fundamentally self-defeating? Does the growing deployment of anti-access/area denial weapons justify AI-enabled systems in littoral spaces? Lastly, and perhaps most crucially, in the face of advances in electronic and cyber capabilities, is the use of autonomous weapons at sea an unavoidable reality?

A good point of departure for the discussion on autonomous combat systems is a recent report in the Chinese media about the development of a family of cruise missiles with AI capabilities. In August this year, a Chinese daily reported that China’s aerospace industry was developing tactical missiles with inbuilt intelligence that would help them seek out targets in combat. The 'plug and play' approach, a Chinese aerospace executive pointed out, could potentially enable China’s military commanders to launch missiles tailor-made for specific combat conditions.


Oddly enough, no clarification was offered of what "tailor made cruise missiles with high levels of artificial intelligence and automation" really meant. Apart from reiterating China’s global leadership status in the field of artificial intelligence, the Chinese source did not provide any insight into the specific nature of the autonomous capability being developed.

The real issue for many maritime policymakers is the dichotomy between the theoretical definition of Artificial Intelligence and its popular interpretation. Technically, AI is any onboard intelligence that allows machines in combat to execute routine tasks, allowing humans more time to focus on demanding and complex missions. Modern-day combat requires war-fighters to operate with active assistance from sensors and systems. In theory, AI provides the technology to augment human analysis and decision-making by capturing knowledge that can be re-applied in critical situations. It purports to change the human role from "in-the-loop" controller to "on-the-loop" thinker who can focus on a more reflective assessment of problems and strategies, guiding rather than being buried in execution detail.

In practice, however, Artificial Intelligence is a term used for a combat system that has the ability to take targeting decisions. This is more in the nature of "who to target", as opposed to "how to target", which is in any case a task that guided missiles have been performing with some precision. It’s worth emphasising that maritime forces remain skeptical of autonomous weapon systems with independent targeting capability. In the nautical realm, the launch of a missile at an enemy platform is an act of war. The decision to execute a missile launch is the exclusive preserve of the command team (led by the ship’s captain), which must independently assess the threat and act in pursuit of war objectives.

Despite recent improvements allowing for a more precise targeting of platforms, the logic of maritime operations hasn’t essentially changed. As a result, naval missiles haven’t been invested with any serious intelligence to make command decisions to target enemy units. While their ability to strike targets has been radically enhanced — through the use of superior onboard gyros, computing systems and track radars — the basic mode of operation of cruise missiles remains the same.

To be sure, Artificial Intelligence is considered an indispensable component of new-age naval weapons like hypersonic missiles. After China’s recent hypersonic tests, involving speeds of over Mach 10 and "extreme maneuvers", the need for a human-machine interface in future combat missions seems clear; which is why four other Asian states — Japan, India, South Korea and Taiwan — have been developing supersonic and hypersonic systems. Each one of them has expressed an aspiration for an advanced and dispersed maritime force, with long-range sensors, armor protection, precision weapons and networking technologies. Yet, their naval elites seem to harbour doubts about missile systems with artificial intelligence.

A useful illustration of the predicament that AI poses for the naval community is the US Navy’s Long Range Anti-Ship Missile (LRASM). Often portrayed by senior officers as a single-shot remedy for America’s surface-combat deficit at sea, the LRASM is a replacement for the Harpoon missile (albeit a more powerful one) and a supposedly 'intelligent' missile system. Guided first by ship-borne equipment and then by satellite, the projectile is jam-resistant and capable of operating without the Global Positioning System. Flying through a series of waypoints, evading static threats, land features and commercial shipping, the LRASM can detect threats independently and navigate around them.

The nature of the LRASM’s onboard 'intelligence', however, tells a story. To bypass enemy warships that aren’t on the target list, the missile skips waypoints that lie within their weapons-engagement range. With an inbuilt capability to dive to sea-skimming altitude in its approach to the target vessel, it can strike at an independently calculated "mean point of impact."

Notwithstanding its considerable computing and processing capabilities, however, the LRASM does not select its target in flight. Human operators feed that information into the missile, providing it with a continuous stream of data. In crime-investigation lingo, the missile is not the mastermind of the encounter, only the assassin. This also demonstrates the limits of artificial intelligence: the missile takes its own decisions only after it receives critical targeting information from the command team. Despite its coordinated-attack capabilities, the LRASM cannot be termed a fully autonomous weapon.


Understandably, the debate surrounding artificial intelligence and autonomous naval platforms has a humanitarian dimension. AI may have the potential to revolutionise naval operations, but many maritime practitioners are uncomfortable with its use in combat — particularly the prospect of lethal autonomous weapons systems (LAWS) being directed at people, and the realisation that an inanimate system could take a decision to terminate human lives.

Critics of LAWS point out that international humanitarian law — which governs attacks on humans in times of war — has no specific provisions for such autonomy. The 1949 Geneva Conventions on humane conduct in war require any attack to satisfy three criteria: military necessity; discrimination between combatants and non-combatants; and proportionality between the value of the military objective and the potential for collateral damage. But these are subjective judgments that no current AI system seems able to fully satisfy.

In the absence of consensus around 'artificially intelligent' weapons, autonomous naval combat systems are yet to find ready acceptance in the military. Even where there is some agreement — as in the case of the US Defense Advanced Research Projects Agency’s (DARPA) Collaborative Operations in Denied Environment (CODE) programme — operations have been limited to targeting enemy platforms in situations where signal-jamming makes communication with human commanders impossible.

Maritime policymakers are not against the use of AI technologies to hasten command-and-control processes and human decision-making on naval platforms, but it is unlikely they will easily acquiesce to weapon systems taking independent targeting decisions that could endanger lives at sea.

The views expressed above belong to the author(s).


Abhijit Singh

A former naval officer, Abhijit Singh is a Senior Fellow who heads the Maritime Policy Initiative at ORF. A maritime professional with specialist and command experience in front-line ...
