As AI develops rapidly, a robust government-led industry self-regulation regime should be put in place
This article analyses the inadequacy of CAC regulations against the backdrop of breakneck innovation in artificial intelligence (AI)-based technologies and the dynamically expanding universe of risks and benefits associated with their accelerated adoption. Government-led industry self-regulation is then proposed as an alternative regulatory approach to harness opportunities for AI-enabled productivity gains and public benefit, while also addressing critical public safety concerns around AI adoption.
CAC regulators must demonstrate an accurate understanding of what they are seeking to regulate, and the competence to deliver prescriptions that can adapt to temporal variations in the regulatory subject and scope. AI-based technologies are evolving at a breakneck pace, with "an above average number of technologies on the Hype Cycle (see Figure 1) reaching mainstream adoption within two to five years," according to Gartner. This extremely fast-evolving innovation landscape strongly disfavours any timely and accurate anticipatory recognition of a finite set of distinct use cases and the universe of risks and benefits associated with their adoption. The gap between advancements in AI and optimal regulatory responses to steer them in the right direction becomes even wider in low- and middle-income countries, where regulatory institutions continue to grapple with resource crunches.

Figure 1: Hype Cycle for Artificial Intelligence, 2021
However, this classical critique of industry self-regulation, though valid per se, cannot be tenably levelled against the proposed GIS arrangement for responsible AI adoption, for three principal reasons.

First, the proposed regulatory approach is fundamentally predicated on the adoption of a clear-cut framework for reporting industry practices to the government, with a view to demonstrating the AI industry's due diligence in pursuing self-regulation in alignment with the regulatory goals and principles for responsible AI adoption laid down by the government. This is unlike a pure self-regulation arrangement without any form of government involvement. Put simply, in the proposed self-regulation arrangement, the fox does not remain in full charge of the hen house. Apprehensions about an inherent misalignment between private incentives and the public interest are therefore allayed, especially so long as the government remains committed to facilitating public engagement on critical regulatory questions concerning the ethics of AI adoption.

Second, strong market incentives for responsible AI adoption are becoming apparent. AI-led enterprises are recognising the medium- to long-term value for their shareholders of strategic investments in risk assessment and mitigation tools and resources, and of strengthening corporate governance structures for the end-to-end adoption of responsible AI best practices that prioritise user trust and safety. This value is expected to accrue in the form of competitive advantage through enhanced product or service quality, better talent and customer acquisition and retention, and an upper hand in competitive bidding.

Third, a good number of civil society organisations and interdisciplinary think tanks globally have hit the ground running to track and report emergent risks from AI adoption, propose rigorous measures for their mitigation, and ensure that fairness, transparency, and accountability remain the highest corporate priorities for all AI-led enterprises. As a result, the AI industry has become increasingly aware of the existential risks from widespread public censure of unethical applications of AI in today's hyper-connected world. It realises that any instance of irresponsible behaviour, whether proven or perceived, could decimate the government's confidence in industry self-regulation of AI in the blink of an eye, forcing external enforcement measures by the government that would likely be prescriptive and sub-optimal for the industry's growth. This constant, implicit threat of facing unfavourable regulatory obligations, dubbed by scholars the 'shadow of authority', is arguably a formidable motivating factor in promoting desirable firm behaviour. A case in point is Meta's recent announcement of a 'Personal Boundary' feature on its virtual reality platform, Horizon Worlds, in response to public outcry on social media over claims of sexual harassment on the platform.
However, even then, detailed inputs from industry stakeholders must be sought proactively to guide the regulatory design of any such intervention, to ensure, inter alia, precision and intelligibility of the regulatory subject and scope, and the practicability of mandated compliance procedures. As previously discussed, these inputs will remain critical to hedge the risk of negative fallouts for AI-led markets and the public from prescriptive regulation of AI adoption. No wonder even outspoken sceptics of industry self-regulation, like Margaret Mitchell, former Lead of Ethical AI at Google, have emphasised the indispensable value of industry insights in designing an optimal regulatory regime for responsible AI adoption: "…it's possible to do really meaningful research on AI ethics when you can be there in the company, understanding the ins and outs of how products are created. If you ever want to create some sort of auditing procedure, then really understanding — from end to end — how machine learning systems are built is really important."
The views expressed above belong to the author(s).
Raj Shekhar is Lead, Responsible AI at NASSCOM, driving NASSCOM's efforts at defining a roadmap for an extensive roll-out and adoption of responsible AI in …