Published on Feb 01, 2020

Despite the hype, some of the biggest impacts from the use of AI and applied machine learning in the defence context are mundane.

The future of war: Less fantastic, more practical

Artificial intelligence (AI) is a defining element in a societal transition from the Information Age to one dominated by data, information, and cyber-physical systems. As states now compete in “the gray zone” or through hybrid measures — tactics intended to remain below the threshold of armed conflict — leveraging the massive amounts of information and data at hand is of strategic importance. Kinetic effect will no doubt remain crucial in armed conflict. However, securing advantage in a world with artificial intelligence, data analytics, and cloud computing requires mastery of data and information awareness — i.e., the non-kinetic and digital.

The AI toolbox

AI is an umbrella term that often includes various disciplines of computer science, learning strategies, applications, and use cases. AI has experienced a surge of excitement, research, and application in the past decade, driven by an increased availability of data and computing power, advances in machine learning (as distinguished from rules-based systems), and electronics miniaturisation. It has gone from largely residing in the realm of academia and research to widespread application across the public and private sectors.
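To make the distinction concrete, below is a minimal sketch of a rules-based system; the conditions and thresholds are invented purely for illustration. Its behaviour is fixed entirely by hand-written IF-THEN logic and will never change unless a human rewrites the rules, whereas a machine learning system fits its behaviour to data.

```python
# Minimal sketch of a rules-based ("expert system") classifier.
# All conditions and thresholds here are invented for illustration.

def classify_contact(speed_knots: float, altitude_ft: float) -> str:
    # "IF this condition, THEN that output," exactly as coded by a human.
    if altitude_ft > 30000 and speed_knots > 400:
        return "fast high-altitude aircraft"
    if altitude_ft < 500 and speed_knots < 60:
        return "slow low-altitude contact"
    return "unclassified"

print(classify_contact(speed_knots=450, altitude_ft=35000))
# -> "fast high-altitude aircraft"; no amount of new data will refine
#    these categories unless a human programs new rules.
```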

Despite the hype, some of the biggest impacts from the use of AI and applied machine learning in the defence context are mundane. Areas like logistics, predictive maintenance, and sustainment are ripe for computational innovation by assisting humans with repetitive tasking and enabling the processing of large volumes of data. The operational reality in the near term may simply be optimising resource allocation and management in ways that allow actors to operate efficiently inside their opponent’s decision loop.
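To illustrate how unglamorous such an application can be, here is a toy predictive-maintenance sketch. The data is synthetic and the feature names, thresholds, and model choice are assumptions made for the example, not a description of any fielded system.

```python
# Toy predictive-maintenance sketch on synthetic data, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic fleet records: operating hours and a vibration reading per vehicle.
n = 500
hours = rng.uniform(0, 2000, size=n)
vibration = rng.normal(1.0, 0.3, size=n) + hours / 4000
# Invented ground truth: accumulated wear drives the chance of failure.
failed = (0.002 * hours + 2.0 * vibration + rng.normal(0, 0.5, size=n)) > 5.5

# Fit a simple classifier that estimates failure risk from the two readings.
X = np.column_stack([hours, vibration])
model = LogisticRegression(max_iter=1000).fit(X, failed)

# Rank the fleet by predicted risk so maintainers inspect the riskiest
# vehicles first: resource optimisation, not autonomous decision-making.
risk = model.predict_proba(X)[:, 1]
print("Five highest-risk vehicle indices:", np.argsort(risk)[-5:][::-1])
```

Even in this toy form, the output is a prioritised inspection list for a human maintainer, which is the register in which most near-term defence AI operates.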


While the pursuit of AI for national defence and military systems raises concerns about the role of technology in warfare, many of these concerns are not inherently new. The international law of war, as well as nation-specific law on the use of force and military action, applies whether or not AI is incorporated into systems. The principles of military necessity, distinction, and proportionality, along with existing frameworks, structures, and institutions, remain relevant and regulate the development and deployment of any advanced technology for armed conflict. Military legal advisors, ethicists, and policymakers, among others, continue to work to identify potential gaps in existing law and guidance and have yet to reach consensus that such gaps exist. What relevant stakeholders do agree on is the necessity of system attributes such as robustness, safety, transparency, and traceability.

Further, an AI system exists within an AI ecosystem that includes not only the algorithms but also the data from which the algorithms “learn,” the computing infrastructure, governance structures, and the many humans who design, interact with, deploy, and are affected by the technology. The development of this ecosystem will be critical to the success of AI systems in future conflicts. Many nations face an underdeveloped AI ecosystem and are moving quickly to invest the requisite time, attention, and financial support in its growth.

For learning-based solutions, we must also address the necessity and availability of data and computing resources. Data quality and security are hurdles to AI application in the relatively data-scarce environment of national security contexts. While defence-related data does exist, it is often unstructured and ill-suited to statistical analysis, let alone machine learning. In most cases, foundational computing infrastructure and networking require significant upgrades, even as organisations work to secure sufficient “compute.”
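As a toy illustration of the data-quality hurdle, consider an invented free-text maintenance log entry. Before any learning can happen, someone must turn entries like it into structured records; the extraction rules below are assumptions for the example.

```python
# Toy sketch: turning one invented, unstructured maintenance log entry into
# the kind of structured record a learning model could actually consume.
import re

raw_log = "14 JUL: hyd leak noted stbd actuator, deferred x2, approx 3hrs MX"

def to_structured(entry: str) -> dict:
    # Hand-written extraction rules; real defence data is far messier,
    # which is precisely the hurdle described above.
    deferred = re.search(r"deferred x(\d+)", entry)
    est_hours = re.search(r"(\d+)\s*hrs", entry)
    return {
        "fault": "hydraulic leak" if "hyd leak" in entry else "unknown",
        "deferred_count": int(deferred.group(1)) if deferred else 0,
        "est_maintenance_hours": float(est_hours.group(1)) if est_hours else None,
    }

print(to_structured(raw_log))
# -> {'fault': 'hydraulic leak', 'deferred_count': 2, 'est_maintenance_hours': 3.0}
```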

Effectively, the one aspect of AI that is actually an arms race is the competition for talent. Developing an educated and skilled workforce is of critical importance to the success of highly capable machines. This includes professional education, university programmes in science, technology, engineering, and mathematics (STEM), and the incorporation of computer science into early education for children. Nations must contend not only with the significant time required to grow capable talent but also with continued competition for the talent that already exists. Russia and China, for example, recognise the imperative to retain or bring home technical talent and expertise, in addition to pursuing STEM education initiatives.

Mitigating risk and managing expectations

Even with all the puzzle pieces in place, the capabilities of AI and machine learning are still relatively limited and are expected to remain so for the foreseeable future. AI is always purpose-built, problem-specific, and context-dependent. It operates effectively on discrete tasks over well-bounded problems. Further, machine learning requires large volumes of labelled datasets that are time-intensive to create and maintain, and the need for access challenges the defence sector’s traditional approach of securing sensitive data through silos and restricted access.

Misunderstanding the limitations of AI, in part through mismanaged expectations about the promise of intelligent machines, exacerbates risks and increases the potential for accidents. AI introduces new vulnerabilities and failure modes into systems. While system failure in warfare is not unique to AI, failure in machine learning may look different, manifesting in new, unexpected, and possibly unrecognisable ways. Further, it may be difficult to verify that a system is behaving as intended. Even more challenging for applied machine learning is classifying an unwanted behaviour and ensuring the system does not exhibit that behaviour again. Deploying machine learning in the context of warfare thus requires an assessment of the consequences of failure. Even in the most well-known defence applications, such as drone video analysis, the current technical maturity and capability of AI make complete reliance on machines an unacceptable risk.


The decisions men and women face in combat are uniquely human. While the applicability of AI to security challenges holds promise in areas with repetitive, well-defined tasking, we should resist the temptation to blindly apply AI to our hardest problems of how and when humans wage war. AI will not make the difficult choices and decisions inherent in armed conflict any less difficult. Nor is the use of AI, machine learning, and analytic support tools a mechanism by which humans can abdicate responsibility for decisions.

Two conclusions become clear. One, AI is likely to be one tool of many in the digital toolbox where it is applicable. Given considerations of technical maturity, learning-based systems may not be the most appropriate solution for many problems. Nevertheless, the defence enterprise is rife with areas appropriate for the application of AI, and these limiting considerations should not be lost in discussions of lethal force.

Two, investing in people may be the best safeguard against missteps and misuse. From senior leaders to developers to end users, people must understand the capabilities as well as the limitations of AI and machine learning in order to guide their development and deployment. In an increasingly digital future, featuring increasingly digitised warfare, we cannot afford to underestimate or overestimate the applicability and potential of AI.


Melissa Dalton, Kathleen H. Hicks, Megan Donahoe, Lindsey Sheppard, Alice Hunt Friend, Michael Matlaga, Joseph Federici, Matthew Conklin, Joseph Kiernan, By Other Means Part II: Adapting to Compete in the Gray Zone (Washington, DC: CSIS, 2019).

Lindsey Sheppard and Matthew Conklin, “Warning for the Gray Zone”, By Other Means Part II: Adapting to Compete in the Gray Zone, August 13, 2019.

Machine learning, natural language processing, knowledge representation, automated reasoning, computer vision, and robotics, as identified in Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. (Harlow, UK: Pearson Education Limited, 2014).

Also known as “expert systems,” rules-based systems are those in which functionality is implemented through hard-coded rules or specified relationships as programmed by humans. At a fundamental level, rules consist of an “IF this condition” and a “THEN that output or action.” As distinct from machine learning, a rules-based system does not learn or improve over time in new contexts unless a human programs new functionality.

Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra, Notes from the AI Frontier: Insights from Hundreds of Use Cases, McKinsey Global Institute, April 2018.

An example of “nation-specific law” is the US Department of Defense’s Law of War Manual.

For the United States perspective on the significance of Law of War to Artificial Intelligence, see: Defense Innovation Board, AI Principles: Recommendation on the Ethical Use of Artificial Intelligence by the Department of Defense, Supporting Document, October 31, 2019, pages 22-24, 53-58.

Lindsey Sheppard, Robert Karlen, Andrew Hunter, and Leonard Balieiro, Artificial Intelligence and National Security: The Importance of the AI Ecosystem (Washington, DC: CSIS, 2018).

Raymond Perrault, Yoav Shoham, Erik Brynjolfsson, Jack Clark, John Etchemendy, Barbara Grosz, Terah Lyons, James Manyika, Saurabh Mishra, and Juan Carlos Niebles, “The AI Index 2019 Annual Report”, AI Index Steering Committee, Human-Centered AI Institute, Stanford University, December 2019.

Meredith Whittaker (@mer__edith), “Only ~5 companies in the West have the resources needed to develop AI. AI startups and academic AI research labs license (or are gifted) computational resources from these Big Tech companies”. Twitter, November 29, 2019.

Elsa Kania, “China’s AI talent ‘arms race’”, The Strategist, ASPI, April 23, 2018.

Samuel Bendett, “Russia’s National AI Center Is Taking Shape”, Defense One, September 27, 2019; Don Weinland, “China in push to lure overseas tech talent back home”, Financial Times, February 11, 2018.

Dawn Liu, “China ramps up tech education in bid to become artificial intelligence leader”, NBC News, January 4, 2020.

Rodney Brooks, “My Dated Predictions”, Rodney Brooks: Robots, AI and other Stuff (blog), January 1, 2018.

Ram Shankar Siva Kumar, David O’Brien, Jeffrey Snover, Kendra Albert, Salome Viljoen, “Failure Modes in Machine Learning”, Microsoft, November 10, 2019.

Colin Clark, “Air Combat Commander Doesn’t Trust Project Maven’s Artificial Intelligence – Yet”, Breaking Defense, August 21, 2019.

The views expressed above belong to the author(s).

Author

Lindsey R. Sheppard

Lindsey Sheppard is a fellow with the International Security Program at the Center for Strategic and International Studies (CSIS).