Author: Joon Baek

Published on Jan 31, 2024

The “Manhattan Project for AI” framing oversimplifies the challenges of AI governance. The dynamic nature of AI requires a fresh perspective, one that embraces collaboration and inclusivity.

The Manhattan Project for AI is a bad idea

This essay is part of the series: AI F4: Facts, Fiction, Fears and Fantasies.


Everybody talks about AI nowadays. Whether it be politicians, business leaders, labour activists, or security experts, AI is the talk of the town. As AI technology captivates the public's attention, much of the chatter has been about regulating AI to be responsible, safe, and fair. There is also valid discourse on the ramifications for digital privacy now that AI capabilities are so heavily commercialised. Questions about the copyright of training datasets, de-biasing large language models, and preventing the misuse of AI for nefarious purposes are challenges, both theoretical and practical, that we as the AI community need to work on.

Yet there is one idea that, whack-a-mole-like, keeps popping back up again and again.

Spurred in part by the release of the movie Oppenheimer this summer, there are growing calls for a “Manhattan Project for AI” or some variant. Just as the United States (US) military successfully harnessed the power of the atom through secretive projects in the middle of the New Mexico desert, maybe the same can be done for the latest AI technologies. Some opine about a “Manhattan Project for AI Safety” or a “Manhattan Project for Generative AI.” Some go further and call for a “Manhattan Project for Military AI,” arguing that geopolitical adversaries are already doing the same.


The proposers point to the sheer importance of AI and the effectiveness of the Manhattan Project. If a group of physicists managed to create a history-changing weapon through wartime mobilisation, imagine what AI we could build now with comparable resources and attention. These suggestions share a few threads: the government should commit a massive amount of money to bring together the world’s brightest minds in AI, under a top-down and coordinated structure, to create an AI system for a desirable purpose (military, generative, or safe AI, etc.). These proposals ask the US government (or any government, for that matter) to be the leading force in AI research and development (R&D), coordinating and directing technology research across universities and private enterprises.

However, AI is not nuclear technology, and it should not be regulated like nuclear technology. Worse, the “Manhattan Project for X AI” mindset is detrimental to building responsible, safe, and fair AI because it excludes various voices, such as the youth and other minorities.

AI does not work like nuclear weapons

It is crucial to recognise that AI, like other dual-use technologies, holds immense value in both civilian and military domains. Because significant R&D in AI has been driven by the private sector, the analogy to government-led nuclear technology development falls short. The contributions of major tech companies such as Google and well-funded startups like OpenAI have matched, and even exceeded, the scale of research once undertaken by the US government in the nuclear realm: adjusted for inflation, the Manhattan Project cost around US$30 billion, while Google’s AI research budget over the last decade has been about US$200 billion.


By contrast, in the domain of nuclear power, governments maintained tight control. Nuclear technologies were shrouded in secrecy and were never open source. Stringent import-export controls governed the transfer of nuclear material and knowledge. AI, on the other hand, has seen a significant portion of its advancements come not only from the closed code of the commercial sector but also from open-source projects. Open-source large language models like OpenLLaMA and Vicuna, image and video generative models like Stable Diffusion, and many other projects show that no one government, corporation, research institute, or other entity controls the field, for better or for worse. Anybody with adequate skills can deploy and fine-tune these models.

The Manhattan Project analogy

Governments lack a monopoly over AI development, and that is why the “Manhattan Project for AI” is such a detrimental framework: it posits that the government can act unilaterally. It signals to policymakers and the general public that the non-transparent state behaviours of the nuclear age should carry over. In reality, any meaningful progress in promoting responsible AI necessitates a multi-stakeholder approach that encompasses various perspectives to establish responsible norms. Divyansh Kaushik and Matt Korda of the Federation of American Scientists summarised it best: “If we draw lessons from the decades of nuclear arms control, it should be that transparency, nuance, and active dialogue matter most.” I would add inclusion to that list as well.


Lessons from the nuclear age must be repurposed to ensure that global AI governance includes a diverse array of stakeholders, particularly the youth, who are deeply embedded in this digital world; the focus should be on collaboration over unilateral control. With half of global internet users under 30, it is clear that the youth play a substantial role in shaping the digital landscape. They are also at the forefront of AI adoption and innovation. A UN report states that around 68 percent of young people in its global survey expressed “high trust” in AI, and another report shows that 44 percent of teens are likely to use AI for their homework. Developing policies without their input is not only misguided but counterproductive. Just as young people drove internet culture, expect the same for AI culture. Engaging young minds in AI policymaking ensures that AI policies are not only inclusive but also informed by those most impacted.

In conclusion, the “Manhattan Project for AI” oversimplifies the challenges associated with its governance. The dynamic and fast-evolving nature of AI requires a fresh perspective, one that embraces collaboration, inclusivity, and adaptability. To build responsible norms and harness the power of AI for the collective good, we must heed the lessons learned during the nuclear age and forge a path that empowers all stakeholders, particularly the youth.


Joon Baek works as a software engineer and is a member of the OECD's youth advisory group.

The views expressed above belong to the author(s).
