Published on Apr 08, 2026

The UNGA blueprint on AI in NC3 signals a cautious move toward risk reduction but remains limited by rivalry, uneven commitments, and the nuclear-non-nuclear divide

The UNGA Blueprint on AI in NC3: Realities and Limitations

Introduction

On 1 December 2025, the United Nations General Assembly (UNGA) adopted Resolution A/RES/80/23 on the potential risks of integrating artificial intelligence (AI) into nuclear weapons command, control, and communications (NC3). The resolution received broad support from UN member states, reflecting their anxiety over the risks AI poses in the nuclear weapons sphere.

The emphasis on risk reduction also reflects how the UN agenda has evolved to keep pace with technological advancements. At the same time, the resolution underscored the dichotomy between nuclear and non-nuclear states over the role and integration of AI. Nuclear-armed states are more incentivised than ever to embrace AI in the NC3 architecture, given the intensification of geopolitical rivalries. This raises the question of how, and to what extent, nuclear-armed states are mitigating and managing the AI challenge within the nuclear domain to maintain strategic stability.

AI in NC3: To What End?

AI is meant to support and augment human cognition in the complex decision-making that national security demands. Like other emerging technologies, it is positioned as part of states’ offset strategies to retain a competitive advantage over adversaries. As a result, states remain opaque about the status of development and integration of such capabilities in conventional military systems as well as in nuclear architecture.

From the states’ perspective, the plausible uses of AI incentivise its integration into the NC3 architecture. Through AI, more accurate information processing offers an opportunity to reduce human bias and enhance decision-making. This can improve the performance of early-warning systems, reduce false alarms, and potentially prevent accidental launches.

As shown in the table below, compiled by Alice Saltini of the European Leadership Network, the three major nuclear-armed states (China, the US, and Russia) emphasise the role of AI in supporting and optimising command-and-control decision-making processes.

Table 1: Comparative Assessment of the US, Chinese, and Russian Discourse on AI in NC3

Early Warning Systems
- United States: Highlights the US’s need to enhance integrated tactical warning and attack assessment
- China: Stresses the importance of improving strategic early warning
- Russia: Highlights AI as potentially beneficial for swift threat detection and damage prediction

Command and Control
- United States: Highlights the US’s need to optimise resilience approaches for NC3 architecture with advanced decision-support technology and for integrated planning and operations
- China: Emphasises command and decision-making as major areas of interest for advancing the role of AI in national defence
- Russia: Highlights AI’s current use in decision support for day-to-day activities and operational combat management; sources also note AI’s utility in support of retaliation planning

Targeting
- United States: Highlights AI’s utility in improving targeting ability
- China: Highlights the utility of AI in improving targeting and missile guidance
- Russia: Highlights the need for radar systems to tackle new tasks that are poorly performed by traditional AI algorithms, such as target recognition (including for strategic conventional and novel nuclear strike capabilities)

Source: European Leadership Network

Overall, the existing discourse on the integration of AI into NC3-related processes in the US, China, and Russia is inclined towards gaining a competitive advantage and enhancing deterrence vis-à-vis adversaries. However, Chinese strategic experts have also raised concerns about excessive reliance on and trust in AI systems.

Reaffirming Strategic Stability?

It is in this context that UN General Assembly Resolution A/RES/80/23 seeks to address both existing conditions and future challenges. As both a conceptual framework and a policy tool, strategic stability has become essential for managing the incentives that could drive states toward nuclear conflict. Accordingly, the resolution calls for the adoption and publication of national policies and doctrines that explicitly affirm and operationalise the principle that AI-enabled NC3 architectures and related systems will remain under human control and oversight, and will not be capable of autonomously initiating decisions on the use of nuclear weapons.

Threats Beyond Human-in-the-Loop

The breakneck speed of AI development has also shifted the debate beyond the notion of human-in-the-loop toward the risks of inadvertent escalation, particularly during crises. As technology advances, policymakers’ risk appetite may increase, creating incentives for first-mover advantage and conditions conducive to pre-emptive strikes against adversaries. The reduction in human oversight amid a compressed decision cycle could increase the likelihood of misperception, misinterpretation, and miscalculation.

Voting Pattern of States

The voting pattern on Resolution A/RES/80/23 reflected a structural dichotomy between nuclear-armed and non-nuclear-armed states regarding the role of nuclear weapons.

A total of 118 member states voted in favour of the resolution, nine voted against it (Argentina, Burundi, Central African Republic, Democratic People’s Republic of Korea, France, Israel, Russian Federation, the United Kingdom, and the United States), while 44 abstained. Evidently, major nuclear-armed states opposed the resolution to avoid future commitments that could impose strategic constraints. The voting pattern also indicates how non-nuclear states, particularly those in geographical proximity to nuclear-armed states, perceive the dangers of AI in the context of nuclear strategy.

Non-nuclear states advocating for nuclear disarmament have categorised AI systems as a new layer of risk which exacerbates the existing fragility of nuclear weapons and associated architecture. They argue that the difficulty of verifying and validating the degree and extent of AI integration among nuclear-armed states intensifies the security dilemma and creates unforeseen challenges for positive security assurances extended to non-nuclear states. In response, they have proposed new confidence-building and risk-reduction measures that explicitly address the technology element.

Political Consensus to Commitment?

Despite the arms race to maintain a technological edge through AI integration, nuclear-armed states have sought to build a broad consensus around a loosely defined set of limits grounded in a human-in-the-loop framework.

Both the US and China have affirmed the need for a baseline understanding that AI should not replace humans in decision-making. Likewise, the UK and France have advocated for a consensus on the indispensable role of human agency in defining the contours of responsible practices for nuclear-weapon states. The UNGA Resolution acknowledged prevailing geopolitical realities and the serious impediments to the total elimination of nuclear weapons. It noted that the pragmatic course forward lies in risk reduction, emphasising “an urgent need for further effective, concrete and transparent measures to reduce the risk of nuclear weapons.” However, such measures are not a substitute for disarmament, but rather incremental steps to address the emerging challenges posed by existing nuclear weapons.

India’s Approach

India, as a nuclear-armed state, operates within a unique and complex threat environment, facing national security challenges from two nuclear-armed neighbours — China and Pakistan. Recent crisis episodes, including the Galwan Valley clash with China in 2020 and the four-day crisis with Pakistan in 2025, have further strained nuclear deterrence dynamics and the broader security environment for New Delhi.

To this end, keeping pace with its adversaries remains an imperative for New Delhi. Accordingly, it is developing strategic non-nuclear capabilities to deter and respond to threats in the warfighting domain. With respect to nuclear weapons, the Indian security establishment continues to emphasise their primary role as instruments of deterrence. India is therefore developing and acquiring technologies to achieve a robust nuclear triad as an effective deterrent. Given the primacy of political control over nuclear weapons, there is little indication that New Delhi will shift toward automation in its NC3 infrastructure in the short to medium term. Moreover, India’s own defence AI ecosystem remains at a nascent stage.

In a submission to the UNGA’s First Committee in October 2025, New Delhi outlined its five-pillar “Trustworthy AI” framework, comprising: (i) Reliability and robustness; (ii) Safety and security; (iii) Transparency; (iv) Fairness; and (v) Privacy. It underscored that human judgment and oversight in the military application of AI are essential to mitigating associated risks. It can be surmised that in the absence of any concrete plans for integrating AI into nuclear architecture, New Delhi’s approach to military AI amounts to responsible, norm-driven behaviour.

While this normative framework establishes a baseline, the rapid adoption of AI on the battlefield, particularly in conflict theatres in Ukraine and West Asia, suggests that India cannot afford doctrinal rigidity. Given evolving geopolitical dynamics and the shifting doctrinal positions of other nuclear-armed states on AI integration, New Delhi must respond with greater agility. This includes upgrading its NC3 infrastructure to enhance resilience and preserve a credible second-strike capability.

Conclusion

The UNGA resolution on AI and its integration with NC3 is a constructive step toward moving beyond the impasse in the disarmament debate. The disruptive impact of emerging technologies has generated new momentum for nuclear-armed states to more seriously consider associated risks and challenges. Simultaneously, non-nuclear states are incentivised to pursue risk-mitigation measures as nuclear-armed states continue to explore the integration of AI into their NC3 systems. The future trajectory will depend on the kinds of assurance and risk-reduction measures adopted in an environment marked by declining political trust and persistent misperceptions among nuclear-armed states.


Sameer Patil is the Director of the Centre for Security, Strategy, and Technology at the Observer Research Foundation.

Rahul Rawat is a Research Assistant with the Strategic Studies Programme at the Observer Research Foundation.

The views expressed above belong to the author(s).
