Author: Siddharth Yadav

Published on Mar 17, 2025

In an increasingly competitive geopolitical landscape, the choice between AI safety and AI security is quickly becoming a pressing concern for the future of AI governance.

AI safety on the chopping block: How US-China rivalry is redefining regulation


This article is part of the series, Raisina Edit 2025.


“The AI future is not going to be won by hand-wringing about safety,” stated United States (US) Vice President J.D. Vance at the 2025 Paris AI Action Summit (PAIAS). His remark underscores three realities: first, AI will be a defining technology for the future; second, there will be winners and losers in the AI race; and third, regulatory approaches will determine whether a nation can capitalise on AI’s transformative power. The significance of AI for the future of humanity has become a truism in tech and geopolitical circles, with industry leaders predicting that humanity is either at the dawn of “The Intelligence Age” or in “the race to human extinction”. Governments are contending with the reality that, regardless of which outcome emerges, cutting-edge AI research and innovation are largely concentrated in the US and China. Indeed, zero-sum efforts in both Washington and Beijing to outdo the other in the AI race appear to have increasingly marginalised the scope for joint endeavours.


Furthermore, the drive to win the AI race is superseding commitments to responsible innovation and international cooperation, particularly amongst the leading AI powers. Shaped by its existing economies of scale in research and development (R&D), by geopolitical imperatives driving its bid for global leadership as an AI power, and by a shift towards deregulatory policies at home, US AI policy has catalysed a growing divide within AI governance doctrines worldwide. Underpinning this division is the distinction between two interrelated regulatory concerns: AI safety and AI security. This article explores how the two concepts are shaping the evolving regulatory approaches of governments globally.

AI safety vs. AI security

Defining the terms: Differentiating between AI safety and AI security may appear to be an exercise in semantics, but the distinction can play a crucial role in shaping policy approaches to AI regulation. The key difference lies in how each treats the intent and origin of risk as matters of regulatory concern. AI safety has an ethical dimension and focuses on mitigating unintended consequences arising during the life cycle of systems that may otherwise be secure and aligned with developers’ goals and regulatory standards. A safety perspective concentrates on risks internal to AI systems, such as biased training data, biased algorithms, and misalignment between a system’s intended use and its actual output. Conversely, AI security entails protecting the integrity of systems, their components, and dependent systems from external threats; a security-focused regulatory approach emphasises identifying actors seeking to exploit or misuse AI systems. An ideal approach would prioritise both safety and security. However, a focus on safety can require establishing guidelines and frameworks over years of consultations with stakeholders, as in the case of the 2024 EU Artificial Intelligence Act (AIA). Security, by contrast, lends itself to regulatory expediency, since it can be pursued in the short term by enforcing existing cybersecurity frameworks and data protection regulations.


A rogue Anglosphere: Governments have begun prioritising either safety or security to define their strategic orientation. For instance, the United Kingdom (UK) and the US, two countries that abstained from signing the PAIAS AI Pledge, have staked out pro-security positions. On the heels of the summit, the UK Department for Science, Innovation and Technology (DSIT) renamed its AI Safety Institute the ‘AI Security Institute’ (ASI). Speaking at the 2025 Munich Security Conference, UK Technology Secretary Peter Kyle stated that the unfolding “AI revolution” requires regulatory reorientation and that the new ASI will not be “deciding what counts as bias or discrimination”. Instead, the institution will focus on mitigating external threats and, in coordination with the UK’s allies, investigating AI systems regardless of the jurisdiction they come from.

Across the Atlantic, Elizabeth Kelly, the director of the US AI Safety Institute established under the Biden administration, stepped down from her role following the release of President Trump’s executive order on AI, which marked the end of Biden-era pro-safety domestic policies. From a geoeconomic perspective, the executive order, designed to promote “America’s global AI dominance”, epitomises a pro-security shift. Internationally, this shift, along with the US-China tariff war and the questionable success of existing export controls on advanced US AI chips, may lead to stricter restrictions that further isolate Chinese supply chains. Singapore is already under investigation by Washington over allegations that DeepSeek bypassed US export controls by acquiring Nvidia chips through the island nation. A focus on AI security may also give rise to non-tariff trade barriers to tech diffusion, as demonstrated by Western calls to ban China’s DeepSeek R1 model on security grounds, even though the model can be run locally (preventing cross-border transfer of data), is less resource-intensive, and is priced significantly lower than Western offerings.


Undecided EU: The safety-security dynamic is also gaining momentum in the EU, where the first compliance deadline of the pro-safety, risk-based AIA passed in February 2025. Ethicists involved in drafting the ethical guidelines for the AIA have characterised the regulation as a veiled attempt at “ethics-washing” AI to appease public concerns and accelerate AI adoption across the European market. The announcements made by European leaders during PAIAS about increasing public-private investment in AI development and reducing regulatory burdens on developers may lend credence to such criticisms. Previous regulations like the 2016 General Data Protection Regulation (GDPR) have become policy templates for other jurisdictions, allowing the EU to secure a leadership position in tech regulation. A pioneering regulation like the AIA has similar potential to set global standards for AI governance. Whether subsequent iterations of the AIA retain the regulation’s original framework or adopt a stance that prioritises external threats over internal risk factors will prove globally consequential.

Systemic risks and global fragmentation

As the countries leading the AI race place their bets on safety versus security, the resulting fragmentation of global AI governance will also affect two key areas: labour markets and environmental sustainability. Leaders like J.D. Vance optimistically argue for AI’s potential to boost productivity and job creation while paying little attention to job displacement, a phenomenon that Gilbert Houngbo, Director-General of the International Labour Organisation, considers inevitable. The 2025 International AI Safety Report prepared for PAIAS suggests that current general-purpose AI systems could affect 60 percent of jobs in advanced economies and 40 percent in emerging markets.


AI development is a rapidly growing problem for environmental sustainability goals and a contributor to greenhouse gas (GHG) emissions. As long as scaling laws hold, AI development will require investment in ever-larger data centres and more expansive infrastructure, with environmental impact rising in proportion to resource consumption. Microsoft, for instance, pledged in 2020 to become carbon-negative by the end of the decade; yet its carbon emissions grew 30 percent by 2023 due to its investments in AI development. The withering support for sustainability principles in favour of regulatory efficiency in countries leading the AI race has the potential to become a model for emerging players as well.

The remapping of the global AI governance landscape presents systemic risks and leaves crucial questions unanswered. As governments in the West shift from cooperation towards competition, what will remain of the declarations and pledges signed to promote responsible AI development? How effective will “pinky promises” like MoUs and non-binding agreements such as the PAIAS Pledge be in the face of geopolitical and economic pressures to scale AI investment? Such open questions add weight to UN Secretary-General António Guterres’s pressing statement: “Are we ready for the future? The answer is easy. No.”


Siddharth Yadav is a Fellow in Technology at ORF Middle East

The views expressed above belong to the author(s).

Author

Siddharth Yadav


Siddharth Yadav is a PhD scholar with a background in history, literature and cultural studies. He acquired BA (Hons) and MA in History from the ...
