The paradox of innovation and trust in Artificial Intelligence

Author: Trisha Ray
Expert Speak | Digital Frontiers
Published on Feb 22, 2024

With Artificial Intelligence (AI) under increasing regulatory scrutiny, there is a concerted effort to dissect the dualistic nature of this technology.

This article is part of the series Raisina Edit 2024.


“All technology is dual use.” As Artificial Intelligence (AI) has become the subject of regulatory scrutiny, this refrain has found a new lease on life, including in the hallowed halls of global tech giants. What does “dual use” mean? In the context of export controls, the term refers to technologies that have both military and civilian/non-military applications. Increasingly, in the context of AI, the term is used to describe the technology’s potential to both benefit and harm humanity. This blurring of use, intent, and capability is emblematic of the Janus-like nature of AI technology itself.

Is AI dual use? 

The fact that technologies can have both peaceful and harmful uses is by no means a dilemma unique to AI. In 1953, at the UN General Assembly, President Dwight Eisenhower announced the “Atoms for Peace” programme, accompanied by the powerful refrain, “The United States knows that peaceful power from atomic energy is no dream of the future. It is here, now, today.” Yet that same programme arguably contributed to nuclear proliferation in the decades that followed. Less profound, but illustrative nevertheless, is the fact that the Austrian glass producer Swarovski also manufactures rifle scopes.

With AI, the dual-use dilemma is compounded by a global race to “lead” in AI even as there is very little understanding of how it will impact our lives.

Global private investment in AI and the proportion of companies adopting the technology have grown by leaps and bounds over the past half decade. In 2022, private investment stood at US$91.9 billion, and nearly 1,400 AI companies were newly funded, almost double the number of such companies in 2016. New policy measures, such as the Biden administration’s AI Executive Order, are also encouraging AI adoption by government agencies with the aim of improving the provision of essential services.

Yet trust in AI is on the decline.

The 2024 Edelman Trust Barometer finds that many are ambivalent about the impact of AI on their lives. A 2022 Ipsos survey notes that, on average, only 50 percent of respondents globally trust companies that use AI. Counterintuitively, perhaps, high-income countries in the developed West tend to be more sceptical of AI and report a lower perceived understanding of the technology. Paradoxically, then (with a few notable exceptions, such as China), people living in the countries with the most innovation capital are the ones rejecting AI. In other words, AI is missing a crucial ingredient that determines how successfully an innovation diffuses: simplicity. To Silicon Valley gurus and laypeople alike, AI is magic in code.

A world ordered by AI 

“The things we call ‘technologies’ are ways of building order in our world.” In his seminal essay, “Do Artifacts Have Politics?”, political theorist Langdon Winner observed that the design choices embedded in technologies, conscious or otherwise, dictate how those technologies can be used. In this sense, the dual-use nature of AI has already raised concerns regarding market concentration and potential harm to consumers. As United States (US) Federal Trade Commission Chair Lina Khan warned in an op-ed in the New York Times, “The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools.” However, in the world of AI, even seemingly innocuous algorithms have nefarious uses. On 31 January 2024, the Pentagon released its updated list of “Entities Identified as Chinese Military Companies Operating in the United States”. The list now includes Beijing Megvii Technology Co., Ltd. Megvii’s Face++ underpins popular beauty apps such as Camera360 and Meitu’s BeautyPlus, and Megvii also provides services for government surveillance projects.

It is, therefore, also time to consider the second part of Winner’s argument: that some technologies require—or at the very least are highly compatible with—certain kinds of social, political, and economic systems. Columnist Ezra Klein wrote last year, “I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.” The argument, then, is not just that cutting-edge AI is being created in this weird culture, but that it can only be created in such an environment. The troubling implication, of course, is that ethical innovation through self-regulation is a pipedream at best and dangerous at worst.

Conclusion: The false binary of innovation and regulation 

Distrust in AI is healthy. It is a natural consequence of the opacity of the technology’s inner workings and of the insular, homogeneous, “weird” cultures in which it is built. At the same time, AI is dual use: it can also help humanity make leaps in healthcare, education, and sustainable development at a pace that would not otherwise be possible. Regulation serves as the mediator, elevating uses that benefit all, curbing those that cause harm, and mandating meaningful checks and assessment requirements carried out by teams that are sufficiently empowered to do so.


Trisha Ray is an Associate Director and Resident Fellow at the Atlantic Council’s GeoTech Center.

The views expressed above belong to the author(s).
