Published on Jan 15, 2025

As AI enters the field of nuclear security, global discussion forums that produce ethical and regulatory frameworks for the use of AI are necessary

Integrating AI in nuclear security: Challenges and ethical considerations


Artificial intelligence (AI) has attracted widespread attention since 2022, yet understanding the technology and integrating it across different domains remains a challenge. The use of AI in nuclear weapons is no different. The current nuclear powers (India, the United States (US), the United Kingdom (UK), Russia, China, Pakistan, Israel, France, and North Korea) are all incorporating AI into their defence procedures. For all the keen global interest in the automation of militaries and its overlap with nuclear detection and decision-making, the process holds the potential for substantial harm if not implemented with adequate precaution.


AI has clear advantages in its applications to nuclear security. One field that benefits is nuclear verification, which ensures compliance with international agreements, such as the standards set by the International Atomic Energy Agency (IAEA), to prevent the spread of nuclear weapons. Verification involves monitoring nuclear facilities, verifying materials, and conducting inspections. AI can enhance this process by analysing vast amounts of data from satellite imagery, sensors, and environmental samples, enabling faster detection of changes or violations at nuclear sites. Machine learning algorithms identify patterns and anomalies, while predictive modelling anticipates future developments in nuclear programmes. Yet while AI holds promise in such areas, its integration into nuclear policy discussions risks shifting focus away from more pressing and immediate issues, such as the danger of eliminating human decision-making in nuclear arms control, the pursuit of disarmament, and the mitigation of accidental nuclear war risks.
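
To make the anomaly-detection idea concrete, the sketch below shows how a standard machine-learning method, here an isolation forest, could flag unusual readings in a stream of facility sensor data. It is illustrative only: the feature values, anomaly rate, and the two example signals (gamma count rate and power draw) are invented for demonstration and are not drawn from any IAEA system.

```python
# Illustrative only: flag anomalous sensor readings with an isolation
# forest. All data is simulated; a real safeguards pipeline would fuse
# vetted inputs (imagery, seals, environmental samples) and route every
# alert to human inspectors for review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated baseline: hourly readings of two hypothetical features,
# e.g., gamma count rate and facility power draw.
normal = rng.normal(loc=[100.0, 50.0], scale=[5.0, 3.0], size=(1000, 2))

# Inject a few synthetic outliers standing in for unexplained activity.
outliers = rng.normal(loc=[160.0, 90.0], scale=[5.0, 3.0], size=(5, 2))
readings = np.vstack([normal, outliers])

# Fit the model and label each reading; -1 marks a suspected anomaly.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(readings)

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} readings flagged for inspector review")
```

Even in such a pipeline, the model would only triage data for human inspectors; the verification judgement itself would remain a human one, which is precisely the human-in-the-loop concern raised above.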

Risks of mixing AI with nuclear diplomacy

Even conventional weapons and their automation are not yet adequately regulated. A 2021 report to the United Nations Security Council (S/2021/229) mentions the case of Libya, where autonomous weapons attacked fighters; however, whether the weapons were fully autonomous or remotely piloted remains unconfirmed. This regulatory vacuum around autonomous weaponry is reflected further in the ambiguity surrounding legal frameworks, which hampers the attribution of responsibility when harm occurs. Organisations such as the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots have brought these issues to the world's attention, but their efforts have still not translated into effective domestic governance.


Although conventional automated weapons such as drones are used extensively in military operations, they are governed by ambiguous and inconsistent oversight, further evidence of the inadequacy of present regulatory frameworks. The involvement of AI in non-conventional weapons presents even more pressing challenges. While the automation of military systems, including drones and autonomous weapons, is advancing, international agreements and frameworks struggle to keep pace. This regulatory gap is particularly concerning when AI is introduced to nuclear diplomacy,[1] where the stakes are even higher. Injecting AI into geopolitical negotiations will deepen mistrust and risks turning AI itself into another arms race, much as eroding trust led to the collapse of the Intermediate-Range Nuclear Forces (INF) Treaty between the US and Russia. Introducing AI to this equation will exacerbate geopolitical tensions in both the nuclear arms race and the tech race.

Need for focused, separate AI discussions

Due to the impactful and disruptive nature of both nuclear security and AI, any discussion of AI's use in nuclear policy must be held separately from regular nuclear conversations. This requires international diplomatic efforts to provide platforms for more nuanced, technology-focused discussions. One model is the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems, convened under the Convention on Certain Conventional Weapons (CCW). The GGE examines the challenges posed by lethal autonomous weapons systems (LAWS), including questions of human control and accountability. India has emphasised the role of guiding principles for LAWS on this platform, including at the most recent meeting held in August 2024.


The Preparatory Committee for the Non-Proliferation Treaty (NPT) Review Conference is a vital forum where nuclear policymakers and researchers meet ahead of the five-yearly NPT Review Conference. These preparatory meetings assess progress made under the NPT and address the persistent disagreements surrounding nuclear non-proliferation. The third of the three sessions preparing for the 2026 NPT Review Conference is scheduled for April-May 2025. This forum allows members to underline the need to regulate AI in nuclear security so that discussion of its possible applications does not outpace the frameworks that could mitigate the attendant risks. So far, AI has not been recorded in the Preparatory Committee's reports, and the inclusion of AI in nuclear command, control, and communications has been discussed only in side events. At the upcoming session, members can highlight the importance of AI and of regulating or limiting its inclusion in nuclear security before the 2026 NPT Review Conference.

Involvement of the right participants

The need for an ethical framework surrounding nuclear technology and AI, and for a new interdisciplinary field, “Ethics of Nuclear and AI Technologies (ENAI)”, has been recognised by the IAEA. AI integration into nuclear systems, from monitoring and verification to weaponisation, raises ethical concerns about accountability, decision-making, and unintended or unidentified consequences. The IAEA has called for the establishment of ethical guidelines and an accompanying committee to ensure that the development and use of these technologies follow the principles of transparency, trust, and non-proliferation. India, with its growing technological capabilities and commitment to nuclear non-proliferation, can be an active participant. While such a committee takes shape at the global level, the ethics of AI in nuclear security can also be approached domestically under existing structures such as the Atomic Energy Regulatory Board (AERB). Pre-empting such a move will ensure that India is ahead of the curve in monitoring the impact of AI on nuclear security. This will not only have domestic benefits but will also allow India to offer learned experience for formulating such an ethical committee in the future.


Integrating AI into nuclear technology presents unprecedented opportunities and grave risks, making it essential to address this issue within global forums such as the NPT Review Conference. India remains firmly committed to the “no-first-use” of nuclear weapons and to technological leadership, and it should take the lead in shaping international frameworks by nurturing such regional and domestic engagements. By contributing to global discussions and ensuring proper ethical AI governance within its borders, India can lead the way toward a secure, transparent, and responsible approach to the future of nuclear technology and AI. The international community must prioritise discussions on AI and nuclear security and establish a regulatory framework that safeguards the future of the nuclear order in the face of technological advancement.


Shravishtha Ajaykumar is an Associate Fellow with the Centre for Security, Strategy and Technology at the Observer Research Foundation.

[1] Nuclear diplomacy refers to the management and negotiation of nuclear weapons and related issues, in most cases with the goal of eliminating or at least reducing the proliferation of nuclear arms, ensuring arms control, and promoting disarmament. It involves international agreements, treaties, and dialogues meant to reduce the threats associated with nuclear weapons.

The views expressed above belong to the author(s).
