Author : Manoj Joshi

Expert Speak Digital Frontiers
Published on Nov 29, 2023

States are not ready to commit to a global regulatory regime for AI. At best, they may be willing to enter into bilateral or minilateral regimes to regulate its military usage.

The urgent challenge of Artificial Intelligence regulation

Artificial Intelligence (AI) is managing to grab our attention even as physical wars rage in Ukraine and West Asia. This is not only because of the drama surrounding Sam Altman and OpenAI, though there are hints that certain developments relating to AI may have precipitated his sacking in the first place.
According to Reuters, several researchers had written to the board of directors that sacked Altman, “warning of a powerful artificial intelligence discovery that they said could threaten humanity.” This could be related to a “Project Q*” on artificial general intelligence (AGI), which relates to autonomous systems that can outdo humans in certain tasks. A NATO report on Science and Technology issued in 2023 called AGI the “holy grail of AI innovation” but did not expect a breakthrough within the next 20 years.
 
The recent Biden-Xi summit on the sidelines of the Asia-Pacific Economic Cooperation (APEC) meeting in San Francisco stood out for the restoration of United States (US)-China military communications and counter-narcotics cooperation against fentanyl. But a third development drew less attention because the two sides provided scant details. It related to AI.
 
At the press conference after the summit, Biden said “We’re going to get our experts together to discuss risk and safety issues associated with artificial intelligence.” He noted that wherever he travelled across the world, “every major leader wants to talk of artificial intelligence.”
Reports suggest that the Chinese were receptive to the American initiative in this area, especially in relation to AI command and control systems for nuclear weapons. While this was not explicitly mentioned at the summit, it is an area that could lead to an agreement between the two countries to ensure that command and control of nuclear weapons always remains in human hands.
According to reports, technology leaders including Sundar Pichai of Google and Sam Altman of OpenAI, who participated in various panels at the APEC meeting, broadly supported the notion of regulating AI at an international level.
The US government has taken the lead here, issuing a significant executive order on the issue and pushing for global norms on the military uses of AI. The order, issued on 30 October, says that it seeks to ensure that the US not only “leads the way” in developing AI, but also in “managing the risks of artificial intelligence (AI).”
The order directed the following principal actions: 1) requiring developers of the most powerful AI systems to share their safety test results and other information with the government; 2) developing standards, tools, and tests to ensure that AI systems are safe; 3) protecting against the risks of using AI to “engineer dangerous biological materials”; and 4) working with other nations to support the safe and secure uses of AI.
The US has been active on the issue since the beginning of the year. In February, the US Department of Defense issued a directive modifying its rules on the development of autonomous weapons, adding paragraphs on ethical rules that new weapons programmes must follow and creating a new Autonomous Weapons Systems Working Group to supervise its programmes. This was not meant to constrain the development of such systems, but to assist it by clearly outlining the review processes.
In the same month, in The Hague, the US State Department put out a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” that laid out the general US approach. Since then, 47 countries, mainly US allies, have endorsed it, though not India or China. The declaration does not speak of a ban on such weapons, but of promoting their “ethical and responsible” use and ensuring that such systems function “within a responsible human chain of command and control.”
A US National Intelligence Council “Future of the Battlefield” report of 2021 noted that autonomous systems and AI are likely to play a key role in the future of warfare. Such systems and AI played the role of enabling technologies “allowing existing platforms to operate with decreasing levels of human interaction….”
 
A New York Times article has pointed out that the Ukraine conflict has sped up advances in AI by creating an environment where radio communications and GPS are being jammed constantly. Earlier, drones relied on human operators to carry out their missions, but now software is being developed to make them autonomous.
 
The UN Group of Governmental Experts has been discussing the regulation of AI for some time now, but has not been able to reach a consensus. However, a draft resolution moved by Austria, calling for the UN Secretary-General to seek views and submit a report, passed 164-5 in the UN General Assembly, with the US in favour, Russia against, and China abstaining. All three big players, the US, Russia, and China, want the real decisions to be taken by the Experts Group.
 
While the debate among the major powers has a military edge, India’s public concerns look somewhat different. Recently, the Prime Minister expressed concern over deepfake videos made using AI. The worries relate to the use of AI for creating deepfakes, spreading disinformation, and using algorithms to amplify divisive content. There are also worries about how AI is being used against the Indian entertainment industry.
The National Strategy for Artificial Intelligence issued by the NITI Aayog in 2018 is more about exploiting AI for economic and social benefit. Last July, Union Defence Minister Rajnath Singh launched 75 newly developed AI technologies during the first-ever “AI in Defence Symposium”. He is reported to have said that “timely infusion of technologies like AI and Big Data in the defence sector is of utmost importance so that we are not left behind in the technological curve and are able to take maximum advantage of technology for our services.” As of now, there are no indications that India is concerned about the more problematic aspects of AI that are generating international concern.
 
The world is clearly at an inflection point on the regulation of AI for military purposes. The world’s experience with technology that has military applications is that states will not hesitate to press ahead, regardless of ethical issues and concerns.
The UN experience suggests that states are not quite ready to commit themselves to a global regulatory regime. However, as in the case of nuclear weapons, they may be willing to enter into bilateral or minilateral regimes to regulate AI’s military usage.
 
Unfortunately, the call for regulating the military applications of AI is coming at a time when the world’s nuclear weapons regulation regime is coming apart. It is not a particularly good augury.


Manoj Joshi is a Distinguished Fellow at the Observer Research Foundation

The views expressed above belong to the author(s).