Siddharth Yadav and Anirban Sarma, “U.S. AI Action Plan: Recommendations to the National Science Foundation and the Office of Science and Technology Policy,” ORF Special Report No. 257, April 2025, Observer Research Foundation and Observer Research Foundation Middle East.
Introduction
The Donald Trump administration is prioritising the enhancement of United States (US) leadership in artificial intelligence (AI) through deregulation, private-sector investment, and strategic policy development. To this end, President Trump announced the Stargate Project on 21 January 2025. The joint venture—whose initial equity funders are OpenAI, SoftBank, Oracle, and MGX—aims to invest up to US$500 billion in AI infrastructure across the US by 2029. Stargate will focus on constructing data centres, building new AI infrastructure, enhancing AI capabilities, and creating “hundreds of thousands of American jobs”.[1]
Shortly thereafter, on 23 January 2025, President Trump issued Executive Order 14179, marking a clear shift in American domestic and foreign policy on AI development. The Trump administration aims to “sustain and enhance America’s global AI dominance to promote human flourishing, economic competitiveness, and national security.”[2] To achieve this objective, the US will need to balance a domestic pro-innovation approach with a pro-security foreign policy.
This report responds to the White House’s call for public comment on the development of an AI Action Plan.[3] The authors recommend two sets of interventions for the proposed Action Plan: reprioritising domestic policy through the US AI Safety Institute (AISI) and adjusting export controls.[a]
Modifying the Mandate and Operations of US AISI
The US AI Safety Institute (AISI) was established in 2023 with a focus on AI risk mitigation, regulatory oversight, and responsible AI deployment. However, President Trump’s emphasis on the “removal of barriers to American AI innovation” and “global leadership in AI”[4] may necessitate a shift in AISI’s priorities towards minimising bureaucratic and regulatory hurdles, boosting AI competitiveness, and building a more pro-business AI ecosystem. The following actions could be considered.
Shifting Towards Self-Regulation and Market-Driven AI Governance
Adopting a More Pro-Business Approach to AISI’s Strategic Goals
Building R&D Capacities and Leveraging International Collaborations
Adding Flexibility to Export Controls to Foster Strategic Partnerships
Increasing Export Caps Through the ‘Validated End-User’ Authorisation Programme
On 15 January 2025, the Bureau of Industry and Security released the Framework for Artificial Intelligence Diffusion (FAID) as the latest amendment to US export controls regulating the diffusion of advanced chips and computing capacity. The FAID methodology divides countries into three tiers. Entities based in top-tier countries are eligible for ‘Universal Validated End-User’ (UVEU) authorisation, which allows them to manufacture, deploy, and export cutting-edge chips across top-tier countries with negligible restrictions.
While FAID has succeeded in setting clear standards for acquiring licences, a case-by-case approach towards middle-tier countries may be beneficial. Over time, increasing the export cap on compute capacity from UVEUs to ‘National Validated End-User’[c] (NVEU) entities in countries that have economic and military agreements with the US should be considered. For instance, the export limit on compute for NVEU-authorised entities, and one-time export licences for entities in middle-tier countries, could be increased if the host country implements adequate protocols for cybersecurity, supply-chain independence from embargoed countries, physical systems security, and model-weight security, among other safeguards. Increasing compute exports to large and strategic markets will ensure that favoured middle-tier countries, such as those participating in alliances like I2U2 (India, Israel, the United Arab Emirates, and the US), are better able to meet their compute requirements, deepening their integration into the US supply chain and advancing President Trump’s stated goal of sustaining American leadership in AI development.
Creating a Pathway to Acquire UVEU Authorisation
To cultivate a cooperative geopolitical and technological international order that disincentivises bad actors, US export controls should incentivise regulatory and governance alignment with strategic partners. Middle-tier countries may be more likely to establish supply-chain independence from embargoed nations if a pathway exists for acquiring UVEU status.[15] In the absence of such a pathway, private entities and countries in the middle tier may face perpetual uncertainty in developing local AI ecosystems, which will in turn hinder the consolidation of a US-led global AI ecosystem. The forthcoming AI Action Plan should consider formulating a path to a ‘Conditional Universal Validated End-User’ status for middle-tier countries based on government-to-government agreements, compliance verification, and regularly validated security protocols.
Exempting Open-Weight Frontier Models
The FAID exempts AI developers in middle-tier countries that develop open-source models from US export controls, even when those models exceed the 10^26 FLOPs threshold.[16] However, the recent release of open-weight models like DeepSeek R1 suggests that algorithmic distillation and optimisation techniques can be used to develop open models that are competitive with closed frontier AI models in the US. Since the release of R1, officials in South Korea, Australia, Italy, and Taiwan have called for selective bans on the use of Chinese frontier models,[17] while some US officials have demanded a complete ban on DeepSeek models in the US.[18] While it is the prerogative of governments to implement national security measures that prevent the malicious use of AI and the illicit cross-border transfer of data, an outright ban on open-weight models can stifle research and innovation in the global AI sector, given the innovations made by Chinese AI developers, which have also been highlighted by industry leaders in the US.[19]
To facilitate collaborative research and development in line with the 2024 G7 declaration,[20] the research use and selective adoption of open-weight Chinese frontier models should not trigger intensified restrictions on middle-tier countries under the FAID framework, provided all other security requirements for NVEU authorisation are met. Furthermore, narrowing US export controls to restrict the development of open models in middle-tier countries may run counter to US interests by allowing China to position itself as an alternative provider of open-source stacks.[21] To avoid this scenario, international partnerships, accelerator programmes, and research collaborations can be used to cultivate an open-source ecosystem that aligns with US interests. Building on the recommendations above, the establishment of an International Democratic Compute Cluster between the US and key partners should be considered.
Conclusion
The recommendations outlined in this report serve the Trump administration’s objective of sustaining and enhancing US leadership in AI development. They propose regulatory tools such as R&D initiatives, acceleration funds, and public-interest compute clusters while minimising the regulatory burden on US-based AI developers. However, given that the AI supply chain and consumer base are globally distributed, sustaining US leadership will involve incentivising countries to integrate into the US AI ecosystem.
To facilitate such integration, it can be useful to introduce gradations in the middle tier of FAID based on inter-governmental ties and agreements. Recent advances in open-weight frontier-model development in China present the possibility of an alternative AI ecosystem that is misaligned with the strategic and national security interests of the US and its allies. While narrow export controls may be effective in the short term, a balanced approach that accounts for the computing and infrastructure needs of geopolitically aligned countries may be more sustainable for ensuring US leadership.
Endnotes
[a] Executive Order 14179 tasked the National Science Foundation and the Office of Science and Technology Policy to develop the AI Action Plan. Accordingly, this report’s recommendations are directed at these two agencies.
[b] At the AI Action Summit in Paris in February 2025, US Vice President JD Vance emphasised that “the Trump administration will maintain a pro-worker growth path for AI so it can be a potent tool for job creation” and that it “will guarantee American workers a seat at the table”. See: https://www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france
[c] Countries in the middle tier of the FAID are not eligible for UVEU authorisation and are limited to NVEU authorisation, licensing options, and limited exemptions. The FAID emphasises the need to secure a government-to-government agreement between the US and the host country before an entity from a middle-tier country can apply for NVEU status. Furthermore, US companies are required to keep half of their AI compute within US borders, while companies in top-tier countries must keep 75 percent of their AI compute in top-tier countries, with no more than 7 percent in any one middle-tier country.
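The compute-geography caps described in this note can be expressed as a simple compliance check. The sketch below is purely illustrative: the 50, 75, and 7 percent thresholds come from the FAID as summarised above, but the function name, data model, and tier labels are assumptions, not part of any official tooling.

```python
def check_compute_geography(company_tier: str,
                            share_in_home_region: float,
                            max_share_in_any_middle_tier: float) -> bool:
    """Illustrative check of the FAID compute-geography caps described above.

    share_in_home_region: fraction of the company's AI compute kept in its
    home region (the US for US companies; top-tier countries collectively
    for top-tier companies).
    max_share_in_any_middle_tier: largest fraction deployed in any single
    middle-tier country.
    """
    if company_tier == "us":
        # US companies must keep at least half of their AI compute in the US.
        return share_in_home_region >= 0.50
    if company_tier == "top":
        # Top-tier companies: at least 75% in top-tier countries, and no
        # more than 7% in any one middle-tier country.
        return (share_in_home_region >= 0.75
                and max_share_in_any_middle_tier <= 0.07)
    # Middle-tier entities are governed by NVEU authorisation and licensing
    # rather than these geographic caps.
    return False


# Example: a top-tier company with 80% of compute in top-tier countries and
# at most 5% in any one middle-tier country satisfies the caps.
print(check_compute_geography("top", 0.80, 0.05))  # True
```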
[1] “Announcing the Stargate Project,” OpenAI, January 21, 2025, https://openai.com/index/announcing-the-stargate-project/
[2] “Removing Barriers to American Leadership in Artificial Intelligence,” The White House, January 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/
[3] “Public Comment Invited on Artificial Intelligence Action Plan,” The White House, February 25, 2025, https://www.whitehouse.gov/briefings-statements/2025/02/public-comment-invited-on-artificial-intelligence-action-plan/
[4] “Removing Barriers to American Leadership in Artificial Intelligence,” The White House
[5] Voluntary AI Safety Standard: Guiding Safe and Responsible Use of Artificial Intelligence in Australia, Department of Industry, Science and Resources, Australian Government, September 5, 2024, https://www.industry.gov.au/publications/voluntary-ai-safety-standard#:~:text=
[6] Melissa Heikkila, “AI Companies Promised to Self-regulate One Year Ago. What’s Changed?,” MIT Technology Review, July 22, 2024, https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/
[7] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White House, October 30, 2023, https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[8] Jason I. Epstein et al., “Utah Law Makes AI Subject to Consumer Protection Laws,” The National Law Review, March 21, 2024, https://natlawreview.com/article/utah-law-makes-ai-subject-consumer-protection-laws
[9] Titus Wu, “Law on AI Watermarks, Detection Tool Enacted in California,” Bloomberg Law, September 19, 2024, https://news.bloomberglaw.com/artificial-intelligence/law-on-ai-watermarks-detection-tool-enacted-in-california
[10] The United States Artificial Intelligence Safety Institute: Vision, Mission and Strategic Goals, NIST, May 21, 2024, https://www.nist.gov/system/files/documents/2024/05/21/AISI-vision-21May2024.pdf
[11] The United States Artificial Intelligence Safety Institute: Vision, Mission and Strategic Goals, NIST, May 21, 2024, https://www.nist.gov/system/files/documents/2024/05/21/AISI-vision-21May2024.pdf
[12] Jack Kelly, “Revitalizing the Job Market: Key Takeaways from President Trump’s Address,” Forbes, March 5, 2025, https://www.forbes.com/sites/jackkelly/2025/03/05/revitalizing-the-job-market-key-takeaways-from-president-trumps-address/
[13] “Senator Wiener Introduces Legislation to Protect AI Whistleblowers and Boost Responsible AI Development,” Scott Wiener, February 28, 2025, https://sd11.senate.ca.gov/news/senator-wiener-introduces-legislation-protect-ai-whistleblowers-boost-responsible-ai
[14] “Mission Statement,” International Network of AI Safety Institutes, November 20–21, 2024, https://www.nist.gov/system/files/documents/2024/11/20/Mission%20Statement%20-%20International%20Network%20of%20AISIs.pdf
[15] Barath Harithas, “The AI Diffusion Framework: Securing U.S. AI Leadership While Preempting Strategic Drift,” Center for Strategic and International Studies, February 18, 2025, https://www.csis.org/analysis/ai-diffusion-framework-securing-us-ai-leadership-while-preempting-strategic-drift
[16] Lennart Heim, “Understanding the Artificial Intelligence Diffusion Framework,” RAND, January 14, 2025, https://www.rand.org/pubs/perspectives/PEA3776-1.html
[17] “Which Countries Have Banned DeepSeek and Why?,” Al Jazeera, February 6, 2025, https://www.aljazeera.com/news/2025/2/6/which-countries-have-banned-deepseek-and-why
[18] Anthony Cuthbertson, “DeepSeek Users in the US Could Face Million-Dollar Fine and Prison Time under New Law,” The Independent, February 5, 2025, https://www.independent.co.uk/tech/deepseek-ai-us-ban-prison-b2692396.html
[19] Dario Amodei, “On DeepSeek and Export Controls,” January 2025, https://darioamodei.com/on-deepseek-and-export-controls; Jay Hilotin, “DeepSeek AI Is a ‘Gift to the World’: The Biggest Story out of China Right Now, Here’s Why,” Gulf News, January 28, 2025, https://gulfnews.com/special-reports/deepseek-ai-is-a-gift-to-the-world-the-biggest-story-out-of-china-right-now-heres-why-1.500023386
[20] “G7 Industry, Technology and Digital Ministerial Meeting,” G7 Italia, March 14–15, 2024, https://www.g7italy.it/wp-content/uploads/G7-Industry-Tech-and-Digital-Ministerial-Declaration-Annexes-1.pdf
[21] Gregory C. Allen, “DeepSeek, Huawei, Export Controls, and the Future of U.S.-China AI Race,” Center for Strategic and International Studies, March 7, 2025, https://www.csis.org/analysis/deepseek-huawei-export-controls-and-future-us-china-ai-race
The views expressed above belong to the author(s).
Siddharth Yadav is a PhD scholar with a background in history, literature and cultural studies. He acquired BA (Hons) and MA in History from the ...
Anirban Sarma is Director of the Digital Societies Initiative at Observer Research Foundation (ORF). He is presently a Lead Co-Chair of the Think20 Brazil Task ...