Author: Sameer Patil
Expert Speak | Raisina Debates
Published on Apr 26, 2022

The Future of War in the Age of Disruptive Technologies

This article is part of the series Raisina Edit 2022.

From the battlefields of Yemen and Ukraine to Syria, Armenia and Azerbaijan, war has reinforced its centrality in the 21st century. These modern conflicts have spread across the land, air, maritime, and cyber domains, fuelled by ethnic antagonisms, territorial claims and geopolitical competition. But, more significantly, they have demonstrated the critical role of disruptive technologies in shaping military doctrines and influencing future battlefield tactics.

Technology has always played a significant role in warfare; there is almost a symbiotic relationship between the two. Ever since the ‘gunpowder revolution’ of the 11th century, technological advancements have shaped battlefield dynamics and enabled military planners to increase the lethality of their forces. Conversely, changing threat environments have driven the quest for novel disruptive technologies.<1>


When utilised appropriately, technology has upended the existing military balance and given a relatively weaker side an advantage. For instance, during the Punic Wars (264–146 BCE), faced with the formidable sea power of the Carthaginians, the Romans developed the corvus boarding device to take on and defeat Carthaginian ships. In other cases, technological advances have tended to reinforce military superiority. Consider, for example, the advent of nuclear weapons in the 1940s, which significantly strengthened the already-superior military capabilities of the US and the Soviet Union.<2>

Techno-Wars

While these trends persist, the wars of the 21st century are differentiated by disruptive technologies such as robotics and autonomy, quantum computing, cyber warfare, artificial intelligence (AI) and the like. The ongoing Ukraine conflict has demonstrated the critical role of these technologies in strengthening the kinetic war effort: Russia’s deployment of KUB-BLA drones, which can identify targets using AI; Ukrainian drones armed with anti-tank weapons that use satellite feeds for aerial reconnaissance; and the wave of Russian cyberattacks and disinformation campaigns against Ukraine to create a ‘fog of war.’

Among the disruptive technologies, automation and data-driven technologies such as AI-powered drones and robotics have gained salience and spawned autonomous systems. Equipped with sensors and processors, these systems give the military crew situational awareness of the threat environment and, if armed, can be activated to select and engage targets without human intervention. Moreover, with little risk of human casualties to one’s own forces, autonomous systems can be deployed in larger numbers, as seen, for instance, in the concept of the ‘drone swarm,’ where several drones communicate and coordinate with each other to achieve a tactical mission. In July 2021, for example, the Israeli military operated a drone swarm in southern Gaza to target Hamas terrorists.


Several precursor systems—also described as semi-autonomous systems—clearly show the trend of increasing autonomy, from the skies to underwater. For instance, the US Navy is on the verge of deploying an AI-enabled submarine prototype that can fire a weapon without human intervention. Likewise, Russia is developing an unmanned ground vehicle that can patrol without human control and operate a swarm of drones. Israel’s currently operational Iron Dome missile defence system uses AI to analyse enemy missiles entering its territory and determine whether citizens are at risk. The development of these systems underlines several countries’—including India’s—strategic pursuit of data-driven technologies. In addition, some countries, such as China, are pursuing a civil-military fusion approach for the integrated development of these systems and other emerging technologies.

Use of Force on ‘Automatic Mode’?

But as autonomous systems advance and proliferate, they have also raised ethical considerations (who will pull the trigger?) and protocol challenges (can these systems be regulated?).

Autonomous systems are beneficial where there is a need to make quick decisions. With algorithms determining their functioning, these systems can undoubtedly keep up with rapidly evolving battlefield situations and respond. With a speedy evolution in what autonomous systems can achieve, they have generated scenarios where machines can and will decide when to use lethal force. This debate intensified after a United Nations (UN) panel of experts on Libya revealed that government forces had used Turkish-origin Kargu-2 autonomous armed drones to target and ‘hunt down’ troops of rival militia in March 2020. The panel noted that these systems were coded “to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

As more semi-autonomous systems make their way into military inventories and autonomous systems become a reality, the issue of human control—who should be held accountable for killings and how much autonomy is permissible and acceptable—will assume greater significance. Unfortunately, international regulations have been slow to catch up with these technological advances and battlefield developments.


The UN has been debating the autonomous systems issue since 2017 at the ‘Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems.’ It has suggested certain principles for the use of lethal autonomous weapons systems, such as the application of international humanitarian law and the need for human responsibility to ensure accountability. Civil society is contributing to this effort through the ‘Stop Killer Robots’ campaign, which calls for a ban on fully autonomous weapons systems.

While stakeholders debate these issues, many states will not await their resolution and will press ahead with the deployment of such systems. A related dimension is the proliferation of such technologies to non-state actors, who can magnify their lethality and utility through innovative application. For instance, terrorist groups of various hues have used drones to target military bases and critical infrastructure facilities. Moreover, drug smuggling syndicates and terrorist groups from Pakistan have already used drones to transport drugs and small arms into north India’s border states.

Reimagining Military Labour Force

Even without seeing Terminator-like robots on the battlefield, autonomous systems will play a critical role for militaries as automation evolves in functionality based on machine learning. The day is not far when militaries will deploy these systems to perform hazardous tasks, such as demining and operating in radioactive environments, and monotonous tasks, such as border defence. The US Department of Defense’s Unmanned Systems Roadmap: 2007-2032 notes that autonomous systems are better suited than humans for “dull, dirty, or dangerous” missions because human capacity and presence are often inhibiting factors in many tasks. By utilising machines, military planners can reduce human casualties and use the workforce more efficiently.


This potential development could profoundly impact the military labour force. Automation will require “reducing certain specialties, re-skilling many service members, and creating entirely new job families.” In addition, it will expand the scope for jobs demanding critical thinking and emotional intelligence, such as caregivers, educators, and subject-matter experts.

Conclusion

Rapid advances in technology and automation will continue to shape the future of conflict and war. Countries are pursuing these technologies with the sole aim of weaponising and deploying them as quickly as possible. With the possibility of robots and other autonomous systems confronting each other on the battlefield, and no fear of human casualties, military planners can go all out in fighting and engage in escalation. This is the most consequential implication of this technological evolution for warfare: it will eliminate the political consequences of warfighting and the ethical necessity of finding a peaceful solution to end hostilities. This distinct possibility of heading into anarchy makes it imperative to regulate technologies such as lethal autonomous weapons systems and robotics.


<1> Alex Roland, War and Technology: A Very Short Introduction (New York: Oxford University Press, 2016), 36–41.

<2> Michael Quinlan, Thinking About Nuclear Weapons: Principles, Problems, Prospects (Oxford & New York: Oxford University Press, 2009), 9.

The views expressed above belong to the author(s).

Author

Sameer Patil

Dr Sameer Patil is Senior Fellow, Centre for Security, Strategy and Technology and Deputy Director, ORF Mumbai. His work focuses on the intersection of technology ...