The breakneck pace at which Artificial Intelligence (AI) is advancing has turned its once-hypothetical transformative impact on our lives, politics, communities, and work into a palpable reality. The ramifications of AI are no longer a distant possibility but a tangible and rapidly unfolding phenomenon that demands our earnest attention.
First, let's establish a clear understanding of what we mean by responsible AI. While different definitions exist, Responsible Artificial Intelligence (RAI) is an approach to developing, testing, and deploying AI systems in a way consistent with certain ethics and legal values so that AI becomes a force for good.
The use of AI in the military is by no means new, but the rapidly expanding horizon of AI capabilities has unveiled unprecedented opportunities that are reshaping the landscape of modern warfare and compelling us to re-evaluate our understanding of its implications. Countries are racing to formulate strategies to integrate AI, robotics, and other emerging technologies into their intelligence, surveillance, and reconnaissance (ISR), decision-making, war-gaming, logistics, personnel management, and weaponry.
Table: Select AI defence strategies, white papers, and reports
In India, the Ministry of Defence AI Taskforce was constituted in 2018 and presented its report (which is not publicly available) the same year. In 2019, the Ministry announced the creation of the Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA). It also initiated the Defence India startup challenges, which support startups and MSMEs in prototyping and commercialising solutions, and provide a roadmap for Defence Public Sector Undertakings (PSUs). Between 2019 and 2022, 40 of the 70 AI projects identified in the Roadmap for Defence PSUs were completed.
When it comes to the military domain, there are compelling reasons why AI applications require stricter oversight than in other fields. Some uses of AI in weapons systems could delegate life-and-death decisions to machines. Moreover, the use of AI in the military must be consistent with International Humanitarian Law (IHL), including the principles of proportionality, discrimination, and accountability. Arguably, determining the proportionality of using force in a military operation requires human judgment and cannot rely solely on AI systems.
The use of AI in the military domain has significant geopolitical implications. It is already being dubbed a new arms race, with worrying impacts on strategic stability. Another challenge to the responsible adoption of AI in the military domain, tied to worsening geopolitical fissures, is the lack of global consensus on its scope of use and potential impact. Different countries have varying perspectives on the use of AI in the military, and there are no universally accepted norms or regulations. Furthermore, the US and China are shaping the way militaries perceive the future military use of AI, in part due to their high levels of investment in military AI, and so their political cultures and imperatives are driving narratives around military AI.
Ethics beyond the bare minimum become an afterthought, especially if the rationale for one's AI-in-defence strategy is to catch up or out-compete, rather than to contextualise and identify capability gaps that lack non-technological solutions. Yes, drones are a force multiplier, but do they alleviate challenges in a border conflict or increase the potential for inadvertent escalation? For countries whose primary pull on military resources is internal conflict, does the presence of unmanned systems augment or diminish sustainable peacebuilding? How can military leadership leverage AI to improve the deployment conditions of military personnel on the ground?
The geopolitics of military AI is on full display at the UN GGE on Lethal Autonomous Weapons Systems (LAWS), where there has been little progress since the 11 Guiding Principles were adopted in 2019. As many others have noted, the consensus-based process results in a tyranny of the minority: progress is stalled by great powers like Russia and the US that are opposed to a binding instrument. There is also a lack of clear goals among other member countries; while some are completely opposed to the development and deployment of LAWS, others are opposed only to the use of LAWS (e.g., China). In fact, some countries have even questioned whether the Convention on Certain Conventional Weapons (CCW) is the appropriate forum for these discussions.
“International weapons regulation is feasible only when there is a shared political interest among states.” Aside from the UN Group of Governmental Experts (GGE), there have been relevant developments in the past year in which smaller groups of countries came together to release their own calls to action: the US-led political declaration on responsible military use of AI and autonomy, and the Responsible AI in the Military Domain (REAIM) Call to Action. These add little in terms of new rules, beyond calling for greater AI literacy within militaries, auditability of systems, and safety mechanisms along with failure modes. However, they can serve as starting points, should the parties agree to move beyond signalling.
The protracted journey of formulating rules and norms to govern novel technologies has struggled to keep pace with the rapid strides in technological innovation. This trend has been worsened by arms racing and the geopolitical pressures shaping narratives on military AI. It was striking, for instance, that the UK's responsible AI approach paper stresses an enabling rather than restrictive environment, and frames its rationale more around public consent for military AI than around objective principles for responsible and ethical use. In India as well, policy papers often frame the rationale for military AI as “catching up” with China. While the broader strategic context is essential, there are real non-technological problems in the military that need to be addressed, rather than having tech solutions imposed on them.
National military AI strategies and principles must be shaped by local context and multistakeholder inputs, because the use cases, security environment, and citizens' level of trust all vary across countries. “Military branches often narrow their available options by focusing too heavily on what goes into making a weapons system as opposed to the goals they hope to achieve through them.” This leads to piecemeal acquisitions without a full grasp of how they will be integrated into broader strategy and aims. Militaries would also be wise to avoid tech solutionism: there are never any easy solutions.
Trisha Ray is Deputy Director at the Centre for Security, Strategy and Technology, Observer Research Foundation.
The views expressed above belong to the author(s).