Expert Speak Raisina Debates
Published on Apr 27, 2022
Government Regulation and the Path to an AI-Driven Future
This article is part of the series Raisina Edit 2022.
Russia’s invasion of Ukraine has brought to the fore the role of governments in regulating technology, especially platforms, in ways few had fully imagined before. Images of violence, the spread or blocking of news or ‘news’, the actions of widely used global platforms, and journalists’ role in reporting on and documenting the conflict have been highlighted or reframed in a period unique in Europe since the Second World War. This reframing takes place during an ongoing debate about Internet privacy and security, with differing approaches being adopted by the European Union (EU), the US, and China. The EU’s General Data Protection Regulation (GDPR) sets strict rules about data protection for individuals, and imposes large fines for non-compliance, irrespective of the domicile or headquarters of the offending platform or organisation. The EU’s new Digital Markets Act, passed into law in late March 2022, “aim[s] to create a safer digital space where the fundamental rights of users are protected and to establish a level playing field for businesses.” The GDPR’s “single legal interface to the entire EU digital market” “has been an economic boon, not only to Big Tech but to startups and the digital economies of many European member states. Even smaller countries like Austria and Greece have seen their digital economies” flourish.
Extending the GDPR’s success, the Digital Markets Act’s (DMA) broad definition of “digital services” includes everything from simple websites to “internet infrastructure services and online platforms.” Notably, it covers certain gatekeeper platforms, “digital platforms with a systemic role in the internal market that function as bottlenecks between businesses and consumers for important digital services”. The Digital Services Act (DSA) recognises the benefits of this transformation of EU internal markets in making them more efficient and transparent, while helping them expand and access new markets. But it also recognises the problems platforms create, notably by enabling trade and exchange of illegal goods, services, and content online, and their misuse “by manipulative algorithmic systems to amplify the spread of disinformation, and for other harmful purposes”. These new challenges, and the way platforms address them, have a significant impact on the fundamental rights enshrined in the EU’s basic documents.

China and the US have similar concerns, expressed in different ways and with different impact. China’s goals in regulating its Internet focus on social protection and social harmony, goals China’s leaders have pursued since imperial times. The Great Firewall encloses significant and unique platforms, including Weibo, a widely used and heavily monitored microblogging and social media platform that has expanded to include commercial marketing, and Ant Financial’s Alipay, an e-wallet app that lets users, inside and outside China, store financial details to make digital online and in-store purchases and payments, similar to Apple Pay. Both platforms have attracted the Chinese authorities’ interest and active intervention. Alipay retains records of users’ spending and investments, and of the assets stored on the platform, and until recently it was increasingly accepted abroad.
Content on Weibo is monitored, curated, and censored; Tencent’s WeChat, similarly monitored, also operates an online payment system, WeChat Pay. Both payment services are supervised by China’s financial authorities, given their 55 percent share of all payments in China and the loans offered to users of the platforms. Together, more than 90 percent of China’s online payments run through Alipay and WeChat Pay.
Under widely accepted academic and popular definitions, China is not a democracy. Yet, its two dominant apps, and the way they are regulated, offer a framework by which US platforms and internet infrastructure providers can be assessed in the context of protecting and promoting democracies and democratic values. Recent events related to the Russian invasion of Ukraine illustrate these points. American tech companies and platforms embrace the same principles of protection of personal data, but without a formal data protection regime like the EU’s, or the active direct involvement of government in platforms or infrastructure. Piecemeal regulations are in place, consistent with Americans’ attachment to freedom from government interference and to the rights of individuals. These include the obligation of companies whose databases are hacked to inform users and the government, with some facing fines from the US Federal Trade Commission, the federal government’s consumer protection agency. Individual health and financial records face stricter regulation, mainly to prevent unauthorised sharing of personal medical and financial data with third parties. Only California has state data and personal privacy legislation approaching EU standards, and its application does not extend beyond the state. In times of conflict, governments are challenged to intervene in what appears on social media. Many members of the EU, the US, Canada, and other democracies have decided to limit what Russian users may post on social media that appears in their jurisdictions, just as, for example, Germany and Canada have long prohibited Holocaust denial. Meta has also added Russian- and Ukrainian-speaking staff to monitor war-related content on Facebook, has barred Russian state media from posting or advertising, and has identified and removed accounts targeting Ukrainian users who post about the conflict.
On the other hand, Russia has ordered its media not to publish interviews with Ukrainian President Volodymyr Zelenskyy, a prior restraint action that would be unconstitutional in the US, and one that Russian users of virtual private networks can circumvent if VPNs and VPN apps remain available and legal.
After widespread controversy about campaign interference in multiple elections, the platforms themselves raised their game to identify and remove troll farms, bots, and fake accounts. But as long as their proprietary algorithms for suggesting content remain opaque to users, there will always be questions about how effective platforms’ efforts to block nefarious or illegal activity have been or can be. As long as there are questions, governments will have to consider what the role of the state is in regulating online content offered by Internet platforms. In China, this has been relatively easy. In the EU, the GDPR and the new DSA provide a comprehensive EU-wide framework that supplements the European Commission’s role in encouraging competition and policing monopolies, including Meta, other social media platforms, and Google.

Politics and Social Media: Principles-Driven Improvisation?

In times of war, the platforms’ role becomes even more problematic. In Ukraine, social media has collected and disseminated raw, graphic evidence of atrocities against civilians and possible war crimes. In previous wars, military sources prevailed in the information space, with journalists embedded with the combatants and their reporting censored. Some Twitter users have become hobbyist intelligence analysts, using landmarks and geolocation tags to validate video postings and provide open source intelligence. Today, “the people’s history of the war is being written in real time, through thousands of social media posts,” not only because every social media post is potentially evidence in an eventual war crimes trial, but because media organisations transmit this testimony and visual evidence very quickly to their viewers and users worldwide. As long as internet connections are possible, the information space remains open and continues to shape public opinion. Short of geofencing to shut down the web entirely within their territory, or elaborate real-time content moderation and fact-checking, democratic governments have to rely on the platforms to manage information and, thereby, to influence public opinion and government response.
Twitter’s permanent ban on former US President Donald Trump after the attempted violent overthrow of the US government showed the risk and controversy associated with leaving it to the platform and app owners to set their own standards. Twitter’s new safety tool will make it easier for users to tune out harassment, including by automatically blocking “uninvited” messages. “But the social network is still grappling with whether those protections should extend to politicians, who free speech advocates fear could use the features to silence critics.” Platforms already face a tricky trade-off: the same tools that could shield public officials from violent or hateful posts could serve to muzzle dissenting viewpoints, as when, for example, elected officials seek to block individuals from criticising their stands or votes on matters of broad public interest.

Regulating AI-based Platforms and Apps

Looking ahead, governments must anticipate the arrival and widespread use of apps and platforms driven by machine learning (ML) and artificial intelligence (AI). It is hard to conceive of a domain for which AI will not be important, or whose current ecology it will not disrupt. Education, labour and skills in the digital economy, digital standards for finance and transportation, data protection and exchange across jurisdictions, including for ‘big data’ research drawing on ‘anonymised’ data, and responsible social media will all likely drive regulatory responses. Even now, we can see glimpses of two overarching challenges ahead. First, because global platforms want users to trust them, they will want easy operability across borders. In this context, existing data protection and privacy approaches and laws provide the launchpad for AI-driven filtering and recommendations; jurisdictional approaches to protection of personal data will need more harmonisation. A common minimum standard, easily communicated and reliably applied, with due attention to data localisation mandates, will be essential. The government fora for agreeing on such transnational standards are not obvious, while asking the owners of apps and platforms to set them raises significant anti-trust issues.
Second, apps that rely on biometric data are hard to imagine being easily reconciled across jurisdictions. The US collects massive amounts of biometric data for everything from passports and driving permits, to trusted-traveller programmes for airline passengers, to corporate and public sector employment background checks and records. Yet, the US Internal Revenue Service (IRS) recently had to abandon requiring biometric facial recognition for taxpayers to access their records: Data capture and retention would have been done by a contractor, not the IRS itself, but the public uproar seemed to reflect broader privacy and civil liberties concerns beyond the fear that the contractor’s database would be hacked. “Biometrics aside, the most important step toward implementing a system for verifying identity that protects privacy, boosts usability, and provides meaningful security is to eliminate the use of password-based authentication.” That era is far away. After academic researchers showed racial and gender bias in facial recognition software, dozens of American jurisdictions banned its use by their local law enforcement agencies. By contrast, China uses facial recognition technologies massively to track its citizens (including Uyghur minorities), to identify and shame jaywalkers in Hangzhou and Shanghai, and to grant foreign visitors access to museums by comparing their images at the security check/ticket window to those taken at passport control when they entered the country. The EU, meanwhile, is moving towards a legislated bright line prohibiting the use of facial recognition, with its parliament debating a ban on police use of facial recognition in public places as part of its AI Act. The globalisation of digital commerce will eventually require convergence across jurisdictions, because ease of business requires it, and users want to trust and use platforms.
Even before AI/ML-based platforms, services, or apps are widespread, the ongoing debate on data protection and personal protection has not been easy and is far from settled. The next debates—whether framed to protect individual rights or to expand the role of the state—are unlikely to be less fraught.
The views expressed above belong to the author(s).

Contributor

Paul M Cadario


Paul M Cadario is Distinguished Fellow in Global Innovation at the Munk School of Global Affairs and Public Policy, University of Toronto.
