Authors: Var Shankar | Phil Dawson

Published on Jan 30, 2024

Given the rapid pace of change in AI systems, the development of AI standards and certification programmes has become more urgent

AI standards and certification programmes in a competitive global landscape

This essay is part of the series: AI F4: Facts, Fiction, Fears and Fantasies.


Artificial Intelligence (AI) standards and certification programmes are generally developed by international groups of technocratic experts, through processes that are often not well understood by the public. As a result, these ‘soft law’ AI governance mechanisms have evoked mixed reactions—hope, confusion, scepticism—among AI experts in public policy, civil society, academia, and industry. In this piece, we examine the role of AI standards and certification programmes in AI governance and address some common arguments against them.

Given the rapid pace of change in AI systems, demonstrated most recently by generative AI advancements and adoption, the development of AI standards and certification programmes has become more urgent. AI standards can help translate responsible AI principles like explainability and accountability into specific governance, process, and performance requirements. AI certification programmes can help verify, document, and audit claims about specific AI implementations in use cases as diverse as automated employment decision tools, automated consumer lending, and skin disease screening by smartphone.


Developing AI standards that are international in scope can more consistently protect people from being impacted by AI systems, promote the participation of smaller players in the AI space, and reduce barriers to international trade. However, the development of international AI standards and certification programmes requires a significant investment in people, time, and resources by governments, practitioners, standards development organisations, auditing firms, and civil society.

It is, therefore, heartening that governments in the United States (US), home to the world’s leading AI companies, and the European Union (EU), which has put forth the most robust regulatory proposal, have recognised the need for international AI standards and certification programmes. The National Institute of Standards and Technology (NIST), the leading US standards organisation, published its AI Risk Management Framework (AI RMF) in January 2023. The White House’s Executive Order of 30 October 2023 elevated the importance of the AI RMF across the US government and industry. The EU’s proposed AI Act anticipates and relies heavily upon the development of technical AI standards, conformity assessments, and certifications. Additionally, through the EU-US Trade and Technology Council, the two governments are working to align taxonomies and cooperate in AI standards development.

These efforts are being further internationalised among leading democracies. The US Secretaries of State and Commerce wrote in a July op-ed that they would use the Hiroshima AI Process—a G7 ministerial-level effort to coordinate national approaches to AI governance, including “the development and adoption of international technical standards in standards development organisations”—to develop a shared understanding of AI risks with democratic partners. In May, the newly formed EU-India Trade and Technology Council agreed to collaborate on responsible AI efforts. In October, major governments from around the world sent delegates to the United Kingdom (UK) AI Safety Summit and emphasised that international AI standardisation efforts are necessary.


Some commentators oppose AI standards and certification programmes on democratic grounds. They advance four primary arguments. First, they argue that since these mechanisms are primarily voluntary, they distract policymakers by offering an alternative to robust democratic legislation. Second, they argue that because standards development is a resource-intensive process, large multinational companies will dominate the AI standardisation process. Third, they argue that because the AI standards landscape is fragmented, it is cumbersome for smaller actors to navigate. Fourth, they argue that given China’s efforts to shape the development of international AI standards, it is unrealistic to expect these mechanisms to reflect democratic values. In the remainder of this article, we address each of these arguments with a policy lens.

1. Policymakers should use AI standards and certification programmes to support robust AI laws and governance

The AI standards being developed at organisations such as the International Organization for Standardization (ISO) and the AI certification programmes being developed at organisations such as the Responsible AI Institute are important steps. However, policymakers should think of AI standards and certification programmes as mechanisms that support legislation and regulatory objectives, rather than as alternatives to legislation.

Thoughtful and effective AI legislation should emerge from the democratic process and account for the interests of key stakeholder groups. Policymakers can use these mechanisms to apply regulatory objectives to specific AI uses, bring the dynamism of practitioners, auditors, and civil society organisations into the service of regulatory compliance, and provide taxonomies, industry benchmarks, and certification marks that can be incorporated into legislation. As Peter Cihon has discussed, “AI-related [certification] programmes could include both voluntary certification to ethics principles and mandatory conformity assessment to regulatory requirements.”


Even as policymakers create demand for AI standards by leveraging them in various policy instruments, they should also consider priming the supply side. For example, they should invest in AI research programmes focused on advancing new methods and tools for assessing the impacts and risks of AI systems, which form the foundation for the development of the AI standards and certification programmes themselves.

This is especially important with respect to systems with advanced or even so-called ‘frontier’ capabilities, for which best practices for measuring and evaluating outcomes and risks require applied research. The White House’s announcement of voluntary testing and external auditing commitments by seven leading generative AI companies and support of a public ‘red teaming’ event for major Large Language Models at DEF CON 31 are welcome steps. However, government-funded research is necessary to develop and validate the assessments, methods, roles, responsibilities, and tools required for effective generative AI standards and certification programmes.

Different democratic countries will, and should, use different approaches when incorporating international AI assurance mechanisms into laws, policies, and other strategic investments. 

2. Policymakers should fund the participation of smaller actors in international AI standardisation and certification efforts 

Participating in the development of AI standards and certification programmes requires technical knowledge, time, and resources—all of which multinational corporations have at their disposal. However, leaving the development of AI standards and certification programmes to multinational corporations would exclude the perspectives of small businesses, indigenous groups, researchers, and activists.


The American and British governments are already seeking to provide cutting-edge AI resources, data, and tools to researchers, students, and civil society organisations. Governments should similarly fund and encourage the participation of small businesses, indigenous groups, researchers, and activists in the development of international AI standards and certification programmes.

In addition to supporting smaller actors in the development of international AI standards and certification programmes, democratic countries should play a leading role in demonstrating their adoption, by funding pilots and by publishing implementation guidance for smaller actors.

3. Policymakers should communicate which international AI standardisation and certification efforts they consider important

Despite recent alignment efforts, the global AI assurance landscape is fragmented, with organisations in the US, EU, and China defaulting to different AI standards. Even for motivated smaller actors seeking to contribute to international AI standardisation efforts, this fragmentation causes confusion. Additionally, since the number of AI assurance efforts is large and growing, smaller actors find it difficult to keep up with—let alone implement—these AI assurance mechanisms.

Given these challenges, innovative companies—including Armilla and other Responsible Artificial Intelligence Institute members—are developing technologies that make it easier for industry to operationalise international AI standards, certification programmes, and best practices to enable AI governance at scale.


Democratic governments, too, have an important role to play in helping smaller actors navigate the AI assurance landscape. They should publicly communicate which international AI standards and certification programmes they consider important, including by referencing these mechanisms in government guidelines and incorporating them into government procurement documents. In addition to jumpstarting the formation of AI assurance ecosystems around key AI assurance efforts, this kind of signalling will help align AI assurance efforts with those of other democratic governments.

4. Policymakers should not let international tensions undermine common efforts to manage AI risks

American policymakers are alarmed by reports alleging that China seeks to improperly influence international standard setting. The reports allege that the Chinese government is installing sympathetic individuals in key standardisation positions, funding delegates from ‘national champions’ like Huawei, and sometimes requiring Chinese organisations to vote as a bloc rather than individually on the basis of technical merit. In its May 2023 National Standards Strategy for Critical and Emerging Technology, the Biden administration asserts that China, “through proxy companies, promotes prescriptive standards, irrespective of technical merit, designed solely to entrench market dominance.” American lawmakers find this particularly troubling because China is a major supplier of digital infrastructure to emerging markets. Experts such as Matthew Erie and Thomas Streinz have suggested that digital sovereignty in these emerging markets may be ‘illusory’ as the Chinese government retains significant control of the organisations providing this digital infrastructure.

Though China’s ambitions in AI standardisation are significant, international standardisation has always been political. For example, the US and EU have well-publicised differences in their standardisation approaches, with the former preferring business-led approaches and the latter preferring public-private partnerships.


Rather than despair over China’s increasing involvement in the development of international AI standards, democratic countries should significantly bolster their efforts to ensure that international AI standards reflect democratic values. Participation in standards development by democracies in emerging markets should be emphasised. An important player in this camp is India, which published its own standardisation strategy in 2018 and which is hosting this year’s Global Partnership on AI Summit.

Overreacting to China’s influence could be counterproductive, contribute to a fragmented global AI economy, and cause engineers and researchers—who often collaborate internationally despite geopolitical tensions—to stop contributing to standardisation efforts. A better approach is to carefully monitor China’s activities in international standardisation and address them when necessary, while collaborating on particular standardisation initiatives when it is feasible, is of mutual interest, and contributes to interoperability and openness.

The path forward

AI is advancing and being adopted at a rapid pace. Lawmakers around the world are responding by putting in place general and industry-specific regulatory measures that reflect their national priorities and concerns, while collaborating internationally on shared norms. In this landscape, AI standards and certification programmes have a key role to play. They can provide common and authoritative requirements and definitions, extend the application of regulatory and assurance objectives to a wide range of contexts and use cases, and be adopted on a voluntary or mandatory basis. Yet AI experts cannot avoid prioritising their objectives and making hard choices simply by relying upon these ‘soft law’ mechanisms. As they grapple with how to regulate AI, policymakers should better understand the characteristics, benefits, and limitations of AI standards and certification programmes.


Var Shankar is Director of Policy at the Responsible AI Institute.  

Philip Dawson is a lawyer and public policy adviser specialising in the governance of digital technologies and AI. 

The views expressed above belong to the author(s).
