Author: Siddharth Yadav

Expert Speak Digital Frontiers
Published on Dec 11, 2023

Whether or not AI leads to an age of radical abundance, the development of increasingly capable AI should be monitored due to the risks it poses to societies at large.

Beyond speculation: Shaping realistic practices in generative AI

In the past year, the landscape of technology has been dramatically reshaped by the advent of generative artificial intelligence (AI), marking a significant leap in machine learning capabilities. Following the release of OpenAI’s ChatGPT in late 2022, several other tech companies rushed to catch up in the AI race: Google demoed its Bard chatbot, DeepMind published its Chinchilla model, Meta released LLaMA, and Elon Musk-led xAI launched its generative AI-powered chatbot Grok to compete with OpenAI. Operational AI has moved beyond being a mere novelty and is gradually becoming a cornerstone of various sectors, including education, industry, and society. Based on the recent success of AI platforms and the exponential speed of their improvement, industry leaders are predicting an ever more impactful AI revolution within the coming decade. At the centre of this revolution is the concept of “Artificial Capable Intelligence” (ACI) or “AI-powered agents”: AIs that can analyse data to generate knowledge and use that knowledge to make decisions and participate in the global economy. This paper delves into the near-term future of generative AI and the risks of loose information practices surrounding its development.


Potential future of generative AI

The hype surrounding AI in the past year has led to serious consideration of and studies on its near-term impact and investment potential. Shortly after the release of ChatGPT, generative AI platforms fell into two broad categories: text generation (like ChatGPT, Bard, and LLaMA) and audiovisual media generation (like DALL-E and Midjourney). In recent months, however, OpenAI has released successively newer versions of its flagship platform that allow users to generate text and images and to perform data analytics. The consolidation of various functionalities into a user-friendly product with an accessible learning curve has allowed companies like OpenAI and Meta to pitch generative AI products designed for enterprises and institutions.

Currently, regardless of how powerful an AI platform is at workshopping ideas, analysing data, creating workflows, or generating images, it requires user prompts at every stage. In November 2023, Bill Gates stated that the next step in AI development will be to create AI agents that can make decisions and accomplish tasks autonomously. Beyond augmenting productivity, AI agents will “help you write a business plan, create a presentation for it, and even generate images of what your product might look like.” According to Gates, “white-collar personal assistants” will become ubiquitous.


Gates is not the only technologist making this prediction. Mustafa Suleyman, cofounder of DeepMind and current CEO of Inflection AI, has stated that ACI will arrive within the coming decade. He argues that given the current capabilities of large language model (LLM) AIs, conventional benchmarks like the famous conversation-based Turing test are inadequate for assessing AI innovation. Instead, he proposes a modern Turing test that would assess not what an AI is capable of but what it can achieve in the world. Suleyman’s test would require an AI to execute tasks like creating a business, lobbying, selling, manufacturing, hiring, and planning, and consequently securing profits on investment, with minimal human oversight. The overarching vision is that the exponential growth of AI will drastically “reduce the price of achieving any goal”: constant reinforcement learning on ever-expanding datasets will cause a “hyper-evolution” of “omni-use” AI, which in turn will lead to an age of “radical abundance.” As fantastical as such predictions might sound, there is value in understanding that the trajectory of technological progress generally falls somewhere between utopian and dystopian since it is more often than not unpredictable. Regardless of whether omni-use AI leads to an age of radical abundance, the development of increasingly capable AI should be monitored due to the risks it poses to societies at large.

Separating the wheat from the chaff

A core issue with discourses surrounding AI development, even when they come from industry leaders and experts, is separating speculation from realistic prediction. Scholars have pointed out how tech demos, science fiction stories, and press releases about emerging technologies are used by developers for fundraising. The problem is that the exponential pace of technological development makes it difficult to separate science fiction from scientific speculation. As easy as it is to disregard the AI-induced existential risk warnings issued by experts, the proliferation and large-scale adoption of AIs with astonishing capabilities makes every fantastic speculation appear less far-fetched. Therefore, it is reasonable to highlight possible problems that can arise from the ACIs or AI-powered agents of the near future even if such technologies do not lead to an era of “radical abundance.”


An obvious issue that emerges from a survey of the policy-related and predictive research on the implications of AI is a tendency toward either disbelief in, or complete alignment with, the predictions made in tech publications and by industry leaders. Furthermore, any disruption or chaos within the tech sector involving highly consequential technologies like generative AI can create unnecessary confusion for policymakers, researchers, and stakeholders. A good example of this confusion is the OpenAI fiasco of November 2023. OpenAI CEO Sam Altman was fired by the board for no clear reason and then reinstated several days later after the ensuing turmoil. The situation sent waves of confusion and speculation throughout the tech sector. The turmoil was compounded by the circulation of an internal leak suggesting that OpenAI had achieved a breakthrough in its foundational models that could potentially be a threat to humanity. Following Altman’s return to the company, no statement has been made regarding the veracity of the leak. This lack of transparency has implications similar to those of the utopian speculations presented above regarding the near-term future of AI platforms. To keep policymakers and societies at large from falling into either normalcy bias or catastrophizing, regulators should issue guidelines covering transparency, the explainability of products and services, and the marketing campaigns of AI companies.

Policymakers globally are already scrambling to draft regulations for a continuously evolving technological landscape encompassing generative AI, the metaverse, and synthetic media. If fantastic speculations and unverified leaks about the near-term future of products that have not even entered their development cycle, like AI-powered agents, are allowed to run amok, they will only add to the confusion and might take valuable time and resources away from more imminent issues. For instance, prior to the launch of ChatGPT, the conflation of machine learning (ML) or predictive analytics with AI led to the overestimation of the capabilities of enterprise-level products and services. Furthermore, the lack of reliable and tested expert knowledge about iteratively developing AI applications, compounded by media hype and sensationalism, aggravates the risk of misrepresentation by startups seeking to generate funds.


The way forward

The argument of this paper is not that technology is not evolving at an exponential pace, or that unregulated AI does not pose a large-scale economic and socio-political risk to societies around the world. Rather, it is precisely because of the rapid pace of technological advancement in recent years that a conservative attitude is necessary towards how the developmental trajectory of AI-related products is regulated. An impactful strategy in the wild west of frontier technologies could be stricter user-oriented guidelines for the companies developing them. There are four areas where such guidelines can be implemented:

Classification: In the policy paper “Governing AI: A Blueprint for India,” Microsoft recommended that the government create a classification for high-risk AI systems deployed to control critical infrastructure. A similar guideline could be issued to AI companies, requiring them to classify internal projects based on the risk they pose. Even though companies would understandably like to keep advancements under wraps to maintain a competitive edge, for a severely consequential technology like AI it is prudent for companies to keep policymakers and stakeholders informed about the nature and risk level of the products and services being developed.

Explainability: In a white paper, Google has argued for establishing explainability standards for AI products and services so that industry and general users can understand the benefits, constraints, and potential biases of advanced AI platforms. As acknowledged in the National Strategy for Artificial Intelligence published by NITI Aayog, this step can be crucial for India and other developing countries, where general tech literacy is not at the level of developed countries. Establishing explainability standards can pave the way for clear country-specific and sector-specific liability frameworks that encourage tech development that is socially and ethically responsible.

Transparency: As mentioned in the mission statement of the AI Safety Summit hosted by the United Kingdom at Bletchley Park in November 2023, a key risk factor of frontier tech like generative AI is the uncertain and accelerated pace of its growth. Sudden management changes and internal leaks of high-risk projects at leading companies like OpenAI are illustrative of this risk. While some leeway should be afforded to for-profit companies to keep their trade secrets, policymakers and regulators internationally should request closed and secure briefings with AI companies in the event of sudden developments, as a way to avoid regulatory and investor panic.


Marketing: The US government has taken useful steps in this direction through the Federal Trade Commission (FTC), which issued a warning to companies in March 2023 not to overstate claims about AI-related products that cannot be substantiated. Similar guidelines could be implemented in the Indian tech sector, as they would prevent companies from overstating the capabilities of their AI products, thus mitigating the risk of misinformation and unrealistic expectations among the public and investors. Key actions would include mandating disclosures about the developmental stage of an AI product (e.g., prototype, beta, or commercial release).

In conclusion, the rapid evolution of AI platforms necessitates a balanced and cautious approach to policymaking. As we stand at the cusp of disruptive technological advancements, it is crucial to distinguish between speculative hype and realistic potential. Establishing guidelines on the classification, explainability, transparency, and marketing of AI technologies will be pivotal. This approach will not only foster responsible tech development but also mitigate risks, ensuring that AI serves as a boon rather than a bane for global societies. The challenge lies in crafting policies that navigate the fine line between stifling innovation and safeguarding societal interests in an era increasingly dominated by digital technologies.


Siddharth Yadav is a PhD scholar with a background in history, literature and cultural studies. He acquired BA (Hons) and MA in History from the University of Delhi followed by an MA in Cultural Studies of Asia, Africa, and the Middle East from SOAS, University of London.

The views expressed above belong to the author(s).
