World leaders, tech moguls, and AI advocates gathered in Paris on 10-11 February 2025 for the third Artificial Intelligence (AI) Summit. Compared to the inaugural 2023 “AI Safety Summit” at Bletchley (UK) and the 2024 “AI Seoul Summit” (South Korea), the rebranding as “AI Action” signals a shift from theoretical safety concerns to the nuances of implementation. While the focus on ‘action’ is imperative, the Paris AI Action Summit showed that the world’s current AI priorities may not be entirely aligned with optimal outcomes.
By placing ‘action’ front and centre among AI-related concerns, the summit focused on discussions of employment, investment, ethics, regulation, and public interest AI. The 2025 AI Action Summit secured a non-binding declaration entitled “Statement on Inclusive and Sustainable Artificial Intelligence,” with 61 signatories pledging open, ethical, safe, and secure use of AI. The event also launched a ‘public interest’ partnership called ‘Current AI’ with an initial investment worth US$400 million, aiming to raise US$2.5 billion over the next five years to give trusted AI actors open-source access to databases, software, and tools.
India’s co-chairing of the Paris Summit and its announcement as host of the 2026 edition signify broader global involvement, preventing mainstream AI discussions from being subsumed by the US-China tech rivalry. India has also proposed setting up an ‘AI foundation’ and a ‘Council for Sustainable AI’ to promote global cooperation in the development of AI technologies. However, it was the American and British refusal to sign the Paris AI Summit declaration that stole the spotlight. While the UK criticised the declaration for lacking substance and practical clarity on global governance, the US Vice President, JD Vance, dismissed the need for stringent AI regulation.
While Trump’s AI policy still hangs in the balance, his tech-savvy advisory board appears to have shaped the administration’s approach, cautioning against strict regulation that could slow innovation.
America’s loud call for innovation opportunities loosely mirrors Europe’s performance at the summit. French President Emmanuel Macron announced private investments in the AI industry worth US$109 billion. Surprisingly, the European Union (EU), which had a head start on data regulation with the General Data Protection Regulation (GDPR) and is gearing up for full implementation of its AI Act by 2027, now appears to be softening its stance on stringent regulation. European Commission President Ursula von der Leyen noted the need for balanced regulation that maintains public trust while fostering innovation.
China’s AI breakthrough, DeepSeek, demonstrates that smaller AI outfits can drive innovation equally well, reinforcing the energised global shift toward AI development. Such a catalytic effect is clearly visible in Europe’s renewed focus on investment in the private AI industry, as hopes of competing in the AI race have certainly heightened. Moreover, the moderate stance on regulation signals that European policymakers have finally recognised their lack of competitive advantage, adding to their wariness of falling behind their global peers.
While Western leaders looked intimidated by China’s recent AI progress, the latter has yet again harnessed a global platform to showcase its latest AI advancements, governance measures, and vision for international cooperation, during an event organised by the China AI Safety and Development Association on the sidelines of the summit. Although Xi Jinping sat this one out, his narrative of building “a community of shared future of mankind” echoed loudly in Chinese Premier Zhang Guoqing’s speech.
With AI scientists warning that Artificial General Intelligence (AGI), models harbouring super-human AI capabilities, could surface within the next five years, the focus on the AI race has shifted and an increasingly hesitant approach is emerging. Nevertheless, such concerns have been pushed to the back seat, as Vance made it clear that the Trump Administration “cannot and will not” accept foreign governments “tightening the screws on US tech companies”. Given this development, the prospects of ‘public interest AI’ also remain in flux, as the US is unlikely to support the initiative, fearing a possible diversion towards China.
The 2025 AI Action Summit fell short for three key reasons:
First, the summit represented a missed opportunity, with leaders failing to acknowledge how soon powerful and disruptive AI systems are bound to arrive. Instead of acting on guardrails, the summit became a forum for mere declarations and announcements of national projects and priorities.
Second, the US criticism of the EU’s stringent regulations highlights an emerging divide between nations favouring a laissez-faire approach, led by the US, and those advocating stricter regulatory frameworks. America’s U-turn on AI regulation suggests that the Trump administration cares more about leading the technological and innovation curve than about the future of stringent AI safety regulations.
Third, although renewed optimism for AI innovation is gaining ground, it also signals an aggressive turn in AI strategies, at the cost of concrete consensus on AI safety.
While the Paris AI Action Summit advances this discussion, certain critical questions remain: Are current safety tools sufficient to keep pace with the ongoing AI race? How is the broader AI ecosystem adapting to the storm stirred by the AI arms race? More importantly, will the same concerns be overshadowed by geopolitical rivalry at the next AI Summit?
Megha Shrivastava is a Doctoral Candidate at the Department of Geopolitics and International Relations, Manipal Academy of Higher Education (Institute of Eminence), India.
The views expressed above belong to the author(s).