Artificial Intelligence (AI) is surpassing human cognition and shaping our world. However, AI’s inherent risks and concerns have created an ever-widening trust gap. Regulations will fail as long as AI algorithms and training data remain a black box, because decisions made by AI stay invisible to users and elude understanding. It is time, therefore, to look beyond the opacity of the AI black box.
Foundational AI models trained on enormous volumes of raw data, built mainly by tech giants based in the United States (US), the United Kingdom (UK), and China, such as GPT-4o, GPT-4 & 5, Phi-2 & 3, Gemini 1.0 Ultra, Llama 2, Granite, Titan Text, Claude 3.5, Fuyu-8B, Jurassic-2, Luminous, StarCoder, Mistral 7B, Stable Diffusion 3, and Palmyra, are getting closer to achieving PhD-level intelligence. While these powerful AI models deeply affect human behaviour and society, they score poorly on transparency. Transparency in data access declined from 20 percent in October 2023 to 7 percent in May 2024. Researchers attribute this decline to the legal risks of data disclosure, especially when dealing with copyrighted, private, or illegal content.
Exploring the real cost of data
The race for AI market dominance, coupled with lax laws, is making things more challenging. Nvidia dominates the AI value chain and holds about 80 percent of the graphics processing unit (GPU) market, even as competitors aim to break its dominance. The prevailing GPU monopoly, high costs, long queues to buy GPUs, and the emergence of new chips may hurt India, which is considering buying 10,000 GPUs to boost its AI mission. Interestingly, India generates about 20 percent of the world’s data (the raw material used to train AI) and accounts for 19 percent of AI projects.
Global AI giants tap India as a large market and a test bed for validating new technologies through use cases offered free of cost. Harnessing a goldmine of diverse data and dialects from 1.4 billion citizens, AI leaders are moving closer to superintelligence. Yet, India is far from creating indigenous AI models.
Fortifying national security
Significantly, the first code-breaking computer, Colossus, helped crack Nazi codes during the Second World War, 80 years ago. Today, AI is disrupting warfare and posing serious national security risks. Notably, US military AI spending tripled from 2022 to 2023, and the US Senate sought US$32 billion to keep American AI ahead of China. According to the AI Index Report, in 2023, the US invested US$67.2 billion in AI, 8.7 times more than China and 48 times more than India. China’s US$47.5-billion chip fund, too, focuses on AI amid US export curbs.
Semiconductors, quantum computing, and AI are critical to military innovation and carry security risks. While the US moves closer to curbing targeted tech investments in China, Russia is also upping its AI game to prevent monopolies. Such sanctions raise questions about the ethics, security, sovereignty, and policies that govern AI. OpenAI’s decision to restrict access and shut the door on China could further widen the trust deficit in global AI geopolitics. As Nvidia’s Blackwell and CUDA-Q software platforms boost computing, India must weigh issues like CUDA lock-in and the need to develop indigenous AI. It is time for India, a US$4-trillion economy, to assess its AI readiness as trillion-dollar Big Tech firms grow bigger.
Strengthening the AI Mission
India’s AI Mission envisages boosting domestic innovation to ensure tech sovereignty. As the government reviews the mission’s progress in building an AI innovation ecosystem, it must focus on forging strategic partnerships, democratising access, developing indigenous AI, acquiring top talent, nurturing start-ups, and building inclusive AI.
To protect national security, India must strengthen its AI know-how, privacy protections, regulatory frameworks, copyright law, and antitrust inquiries, covering even previously created AI systems. It must define who owns the rights to the ‘inputs’ and the ‘outputs’ of AI models. To achieve chip self-reliance and tech sovereignty, India must boost its moonshot investments.
While closed large language models (LLMs) outperform open-source models, how ‘open’ is open source? The raw material for AI tools, access to the models’ training data, their computations, terms, and deals remain opaque and controlled by tech giants. Such opacity concerns have driven the US House of Representatives to ban Copilot and ChatGPT for staff. AI firms claim to support India’s mission by focusing on India, and learning from it, to tap the large Indian-language market through affordable ChatGPT access. To succeed, however, India should focus on R&D in indigenous AI, quantum computing, and the digital trade-offs that can push GDP growth to 10 percent.
Making workforce AI-ready
As AI challenges law enforcement, India must explore appointing a Chief AI Officer, as the US Justice Department has done. A hands-off approach might prove detrimental to privacy, security, fair competition, and consumer protection. Models trained on unlawfully acquired data must be scrutinised as much as the data itself, so that AI’s full potential is harnessed broadly, rather than a few players extracting value from creators and locking out small and medium businesses.
The International Monetary Fund (IMF) warns that over 40 percent of jobs in emerging economies are exposed to AI. Even elite engineering graduates find it tough to secure employment. India must urgently improve its AI preparedness, upskill talent, and foster innovation aligned with its economic and national security. As AI gets smarter at answering questions, we must question our purpose, values, behaviour, and actions to build a better future: How do we know, what do we know, and how do we solve what we do not know?
Nurturing fair competition
The chief of the US Federal Trade Commission (FTC) stresses that harnessing the opportunities and managing the risks of AI call for scrutinising the full AI stack for unfair opacity, be it chips, cloud, models, compute, raw materials, partnerships, terms, apps, or access to inputs. None of these layers should stifle the market, gobble up proprietary data, breach privacy, or enable undue concentration of power. The European Union (EU) has intensified antitrust scrutiny of AI deals by dominant technology players.
Law enforcers must hire scientists, AI experts, economists, and policymakers to deal with systemic monopolies. The legal system must have the right talent and skills to understand how AI really works and what it implies for social, economic, and national security. Establishing an AI Safety Agency can complement law enforcers and promote responsible AI development and adoption by nurturing fair competition and securing the AI supply chain.
Acting with responsibility
Despite the hype, the promise of AI comes with responsibility. AI firms must not govern themselves, as self-regulation cannot withstand the pressure of profit. Antitrust lawsuits could prove costly, eroding hard-earned reputations. The IMF warns that AI can amplify the next economic downturn unless we address its risks. It is time to regulate AI, upskill the workforce, and equip the legal system to adapt to the AI shift. AI does not respect borders; thus, we should make our labour and tax policies human-centric.
As committed at the recent Global IndiaAI Summit, India must assess the ‘real risks and opportunities’ to democratise AI. Innovations must be rooted in trust, transparency, and transformation. Making AI parochial or coddling monopolies will only harm innovation. The UN General Assembly’s adoption of a resolution to boost AI cooperation and bridge the digital divide between nations is a welcome step. India can offer global lessons in solving AI’s trust problem and harnessing AI that is safer, accessible, inclusive, trusted, and impactful.
Kiran Yellupula is an expert in strategic communications and has managed communications for IBM, Accenture, Visa, Infosys, JLL, and Adfactors.
The views expressed above belong to the author(s).