Author: Charles Ovink

Published on Jan 31, 2024

AI already presents major risks for international peace and security. We must address the dangers of misuse.

AI risks for international peace and security

This essay is part of the series: AI F4: Facts, Fiction, Fears and Fantasies.


So much of the current discussion around AI in the mainstream press worries about a future of “Terminator”-like malevolent general intelligences, of “rogue AI” operating on its own, and of vaguely defined “existential risks”. Yet AI already presents major risks for international peace and security. It is already urgent that we address the dangers of misuse, and it is already within our power to do so. The responsibility for addressing, mitigating, and eliminating these risks is widespread, and we can work together to do so today.

Artificial Intelligence (AI) is everywhere, but are we paying enough attention to the risks it presents, particularly the risks to international peace and security? While topics like Lethal Autonomous Weapon Systems (LAWS) do draw attention, the habit of viewing technological developments in the “civilian” domain as distinct means that the risks stemming from diversion and misuse of “civilian” technology are under-discussed. Equally, given the private-sector-centric nature of much AI development, the way industry and other stakeholders talk about AI and the specific definitions they use disproportionately frame any discussion of risk. This has implications even for the way the disarmament and arms control communities talk about AI. Are we talking about the same risks, and if not, how can we get AI practitioners engaged with addressing risks to peace and security?


It is clearly critical that the civilian AI community be engaged in understanding and mitigating the peace and security risks associated with the diversion and misuse of civilian AI technology by irresponsible actors, and this will not be possible without greater support. It is to this end that the United Nations Office for Disarmament Affairs (ODA) and the Stockholm International Peace Research Institute (SIPRI) have partnered for a new project. Funded by a decision of the Council of the European Union, this three-year initiative on responsible innovation in AI for peace and security was launched in early 2023. The project combines awareness-raising and capacity-building activities to equip the civilian AI community—particularly the next generation of AI practitioners—with the knowledge and means necessary to engage in responsible innovation and to help ensure that civilian AI technology is peacefully applied.

What risks are getting attention now?

Developments in AI have never been higher-profile. Leaders from Joe Biden to Xi Jinping have stressed the potential for AI to provide solutions for disease response and climate change, and to build “harm-free” modern industrial systems. Globally, there seems to be some consensus that AI at least has the potential to provide real benefits for development and a cleaner, brighter future.

At the same time, stakeholders at every level continue to emphasize the importance of discussing how we should address the significant risks AI presents. At first glance, there may be a surprising amount of consensus in this direction too, particularly for cutting-edge technology. Even amongst industry leaders at the forefront of modern AI development, there has been substantial commitment to efforts like the Future of Life Institute’s March 2023 open letter calling for a pause on “giant AI experiments” in the name of safety, or the May 2023 “Statement on AI Risk” hosted by the Center for AI Safety, which states, in full, that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. OpenAI CEO Sam Altman famously called for regulation and the incentivization of AI safety in his testimony to the US Senate. When we look a little closer, however, at what risks are being considered, and what regulation is being called for, things become much less clear.


The Future of Life Institute’s open letter, for example, focuses on “AI systems with human-competitive intelligence”, and calls for a pause because “contemporary AI systems are now becoming human-competitive at general tasks” (itself a contested suggestion). The “Statement on AI Risk” is brief but is clear that it focuses on the risk of human extinction. OpenAI is clear that its work is driven by its charter, which focuses on Artificial General Intelligence (AGI). The common thread with these and many other high-profile efforts at addressing AI risk is that their particular focus is on hypothetical risks, primarily from competition with AGI. Like so many concepts around AI, Artificial General Intelligence does not have a universally accepted definition. Sam Altman has described AGI as anything “generally smarter than humans”. The concept itself ties back to Marvin Minsky’s idea of “a machine with the general intelligence of an average human being” (then predicted as arriving by 1978). The key idea is a human-like ability to generalize: a single system that can, in Minsky’s words, “read Shakespeare, grease a car, play office politics, tell a joke, have a fight”. Suffice it to say, we’re not there yet.

Current approaches

However, where we are now in terms of AI development already presents plenty of risks we need to address, and when it comes to international peace and security, many are increasingly urgent. The responsible AI space has grown in tandem with the rest of the field and features approaches from a whole range of stakeholders, including civil society organisations, governments, regional organisations, and professional and standards organisations, like the Global Partnership on Artificial Intelligence (which is built around a shared commitment to the OECD Recommendation on AI), the Montreal AI Ethics Institute, the Distributed AI Research Institute, and IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems. Generally, however, these approaches focus heavily on the risks and impacts of AI in the “civilian” domain, such as algorithmic bias in employment, justice, and health. While these are key risks that require addressing, AI is an enabling technology with great general-use potential. Research and innovation in AI developed for civilian applications can (relatively easily) be accessed and repurposed for harmful or disruptive uses, with significant implications for international peace and security. This can create an illusion that AI risks emerging from the civilian domain are separate from international peace and security risks; in reality, many risk pathways stem from current civilian AI development and use. These are not new phenomena, and the diversion and misuse of civilian technology are not unique to AI. Dual-use technologies are a problem the international community has significant experience dealing with, and governance solutions around dual-use technologies in several areas could provide useful good practices for AI.


In 2022, a group of researchers revealed that they had developed an AI tool that could design potential new chemical weapons. By adapting a machine-learning model originally used to predict the toxicity of potential new drug compounds (in order to avoid them), the researchers ended up with a tool that could design new toxic molecules. In fact, it could do so incredibly quickly, suggesting 40,000 in only six hours. In the life sciences, the risks stemming from the misuse of peaceful research are a well-recognised problem, thanks in part to a long history of engagement between scientists and arms control experts. In this case, the researchers publicised their work to demonstrate just how easily a peaceful application could be misused by malicious actors. Unfortunately, the same level of engagement between practitioners and the arms control community, and the same awareness of the risks civilian technology can present, do not yet exist for AI.

Addressing peace and security risks now

While the diversion and misuse of civilian technology are not new, the problem is complicated by multiple factors when it comes to AI: (i) transfer and proliferation are difficult to control, given the intangible and fast-changing nature of AI algorithms and data; (ii) the private sector has an interest in safeguarding proprietary algorithms, data, and code, given its leading role in the research, development, and innovation ecosystem; and (iii) the material resources and human expertise capable of repurposing these technologies are globally available. Equally, those in the civilian sector working in AI too often remain unaware of the potential implications of the diversion and misuse of their work for international peace and security, or are hesitant to take part in discussions on AI risks in arms control and non-proliferation circles.


Given the speed of technological development, and the young and still-developing AI governance environment, addressing this capacity and engagement gap is increasingly urgent. Effectively engaging the civilian AI community on peace and security risks requires building capacity in at least three areas:

  • With the AI industry, including through work with professional associations and standards bodies, to connect with multi-stakeholder expertise from around the world to establish how risks to peace and security can be included in existing risk management and mitigation practices, and where necessary, what new practices might be needed.
  • With educators, to support the mainstreaming of peace and security risks as part of formal training on responsible practices.
  • With future generations of AI practitioners themselves, to embed responsible approaches to peace and security risks as a natural element of AI development and risk management.

A critical element in making any such engagement impactful and sustainable will be to ensure that it is both multi-stakeholder, drawing on a wide range of perspectives, and not geographically limited to a handful of States. AI is already presenting significant risks, and we don’t need to wait to start dealing with them. We know who needs to be engaged in addressing these risks, and we know that they won’t be reduced without working together. A safer future that can still reap the benefits of this technology is within our reach, but we have to take the steps to get there now.


Charles Ovink is a Political Affairs Officer at the United Nations Office for Disarmament Affairs

The views expressed above belong to the author(s).

Author

Charles Ovink


Working with the Regional Disarmament and Science and Technology briefs of the United Nations Office for Disarmament Affairs (UNODA), Charles Ovink specializes in responsible innovation, ...
