As technology grows by leaps and bounds and its interplay with democracy magnifies, is it time to place regulations on digital democracy?
In the early 2000s, ‘digital democracy’ or ‘e-democracy’ signified how digital technologies could affect democracy, potentially leading to deeper civic engagement in governance. This idea seemed to be actualised during the Arab Spring, where social media became an important tool for mobilising citizens. The Washington Times, for instance, carried the term “Twitter Revolution” to describe mass protests following the 2009 presidential elections in Iran: “The spirit of liberty,” declared the editorial, “finally arrived at Tehran’s Freedom Square”.
This techno-optimism has since dulled. Even as Iran’s Supreme Leader led a brutal crackdown on the protests, killing dozens, the same digital tools that facilitated these revolutions began to be used in ways antithetical to democratic values, by authoritarian leaders and democratic ones alike.
In 2021, digital democracy resurged in another avatar: a narrative tool that pits democratic nations in an existential battle against ‘digital authoritarianism’. Digital authoritarianism describes a model of the internet that allows states to censor online speech on arbitrary grounds, using nebulous justifications like national security and social harmony. It enables widespread surveillance of citizens, as well as other forms of censorship, including the use of “cyber troops”, organised state-sponsored groups that mass report accounts and harass individuals and communities. This model also has an outward-facing element to complement the inward-facing one described above, seeking to weaken democratic systems. Increasingly, these authoritarian threats are said to extend beyond online spaces to the basic physical infrastructure the internet operates on. In 5G, for instance, there are growing calls to partner with trusted telecom suppliers from like-minded countries. The 5G Clean Network announced by the Trump administration and PM Modi’s call for trustworthy supply chains are examples of this shift.
In the past year, President Biden’s Summit for Democracy, existing arrangements like the G7 and the Quad, as well as forums like the Copenhagen Democracy Summit, Forum 2000, the Freedom Online Conference, and many others, saw new efforts to rally democracies to “defend” democracy against emerging digital authoritarian threats. Discussions captured a wide range of issues, including data protection and privacy, media freedoms, digital literacy, and securing connectivity infrastructure.
That said, the practice of democracy varies considerably based on historical and socio-cultural contexts. Take free speech, for example: South Korea’s Public Official Election Act 1994 (공직선거법) is a remnant of the country’s democratic transition and the turbulent political environment of the time. Over the decades, the Act has been amended to add guidelines on online election coverage, and it prohibits content that may sway the outcome of an election. The Supreme Court of South Korea also considers a range of offensive or misleading speech acts on social media during an election to be crimes.
Furthermore, each state, as President Biden delicately put it in his opening statement at the summit, “faces unique challenges and many of the specific circumstances are different”. This, in turn, has led to a curious problem: There is a clear conceptualisation of digital authoritarianism, but a relatively weak unifying thread for digital democracies. This is further complicated by the authoritarian tinge in how many democracies operate in and regulate online spaces. When do “reasonable restrictions” become undemocratic? When does “self-reliance” become isolationism?
Chart: Facebook and Twitter account takedown requests by country, Source: Oxford Internet Institute
The chart above outlines the countries of origin as well as the target countries of cyber troop accounts taken down by Twitter and Facebook between January 2019 and November 2020. Non-democracies like Russia, Iran, Saudi Arabia, and China show massive outward-facing cyber troop activity, but there are many countries—including democracies like Spain and India—that show domestic cyber troop activity.
These differences, however, are unlikely to impede cooperation, at least in the short term, because the new face of democratic cooperation is not just about common beliefs and values: It is a strategic tool. The guest list for Biden’s Summit for Democracy was curated based on the US’ own national security, economic, and strategic interests, and the government of Pakistan, for instance, declined the invitation due to its own geopolitical interests vis-à-vis China. Moving into 2022, forums for democratic cooperation in the digital sphere must tackle some open issues and questions.
First, developing Asia, Africa, and Latin America remain underrepresented at many of these democratic initiatives, including in terms of civil society participation. While there are efforts to address this, having a seat at the table is only the first step. Diversified sources of funding for research on the practice of, and challenges to, digital democracy in these underrepresented geographies are crucial as well.
Second, the nature of digital spaces is such that states are not the only major actors in terms of agenda-setting, influence, and power. Data flows as they exist benefit the incumbents: Data flows out, enriching a handful of powerful players, mostly tech giants, based in China and the US. This has led to a justified fear that developing and underdeveloped nations will become data suppliers and technology consumers, perpetuating the patterns of previous industrial revolutions. The creation of technologies in the public interest that are adapted to the unique interests of nations and communities, therefore, requires respecting local data ownership frameworks.
Third, there are two sets of concerns about platform moderation (or the lack thereof). One, many platforms dedicate inadequate resources to local language moderation—automated or manual—in a way that accounts for cultural nuances. Two, content moderation is often based on standards that do not align with the laws of the country the user is located in, giving rise to tough questions regarding sovereignty and free speech. Should Big Tech platforms be able to take decisions that challenge sovereign laws? What should they do when said laws conflict with their own policies on free speech? Are Big Tech companies complicit in promoting the rise of restrictive online regimes? Any conversation or set of policy choices regarding digital democracy, therefore, necessarily includes Big Tech.
Finally, what should constitute a common baseline for the practice of digital democracy? The widespread use of Pegasus spyware by democratic states demonstrates the need not only for clear parameters for surveillance operations, but also for ensuring that surveillance products built by private companies are subject to oversight. In the coming years, without the necessary speed breakers, these operations and products will be supercharged by emerging technologies such as quantum computing and AI. Is it time, for instance, to establish a global code of conduct for ethical digital surveillance?
The views expressed above belong to the author(s).