Expert Speak Digital Frontiers
Published on Oct 21, 2019
Digital democracy: Old problems on new devices?

There is a widespread belief that the internet and social media strengthen individuals’ voices and have a democratizing effect. This paper looks at four potential threats to liberal democracy in the context of digitization. First, while the question posed for this paper suggests that digital technologies and the pervasiveness of corporate algorithms have shifted power away from states, many states are in fact leveraging digital tools to exert increasing control. Second, as large technology companies gain unprecedented market and political power, they are becoming dominant conduits for the flow of information while bearing little to no responsibility for the content that they host. Third, the echo chambers produced by platform structures threaten to deepen social fissures, replicating and creating self-affirming communities shielded from opposing views. Finally, the algorithms increasingly used in public- and private-sector decision making are opaque, with little transparency about their inner workings or accountability for their outcomes.

A rising number of countries are exercising extreme control over the flow of information. This takes place through bans and the disruption of internet and website access, the denial of digital anonymity, restrictions on and the manipulation of content, and the spread of disinformation and propaganda. Ensuring a free and open internet is critical for realizing its democratizing and emancipatory benefits. In 2018, there were an estimated 196 internet shutdowns in 25 countries, a number that is on the rise, up from 75 in 2016 and 106 in 2017.<1> The official justifications for shutdowns in 2018 overwhelmingly cited public safety, followed by national security, action against fake news and hate speech, and school exams.<2> The countries that used this measure most in 2018 include India, Pakistan, Yemen, Iraq and Ethiopia.
India’s shutdown of the internet in Kashmir in August 2019 was the country’s 51st shutdown of the year.<3> Beyond their significant social impacts, such internet blackouts are estimated to have cost the Indian economy US$3 billion between 2012 and 2017.<4>

In addition to access to the internet, anonymity online is critical for protecting individual freedom of expression and the right to privacy. Globally, states are implementing measures that weaken anonymity, including bans on the use and dissemination of encryption technologies. Pakistan, for instance, enacted the Prevention of Electronic Crimes Act in 2016, which prohibits the use of encryption tools that provide anonymity.<5> Some countries are introducing licensing and registration requirements. Vietnam, for example, established the Law on Network Information Security in 2015, requiring companies trading in civil encryption goods to obtain special business licenses.<6> Similarly, Malawi introduced a registration requirement for companies providing encryption services, along with a requirement to disclose the technical details of the encryption technologies used.<7> Further, several countries, including the United States, the United Kingdom and Australia, are attempting to weaken encryption tools through the creation of ‘backdoors’, and several mandate the localization of personal data and the local storage of encryption keys.<8> The debate around encryption, and the dichotomy it poses between privacy and security, remains unresolved. Encryption policies must strike a balance between national security and individual freedoms.

Disinformation campaigns and content manipulation by state and non-state actors are also increasing.
State propaganda is often fabricated and disseminated using paid content contributors and bots.<9> Of the countries studied in the Freedom House 2018 report, 32 were found to have pro-government commentators manipulating online discussions.<10> China is believed to have hired nearly two million ‘pseudo-writers’ to contribute deceptive content to social media sites; a recent study estimates that these authors fabricate and publish nearly 500 million comments a year.<11> Their main objective is to strategically distract social media users from contentious topics.<12>

Cross-border influence campaigns by both state and non-state actors are threatening the legitimacy of, and trust in, democratic systems. State-sanctioned influence campaigns include efforts such as defamation (delegitimizing public figures), public persuasion (trying to influence public opinion), and polarization (leveraging social and political divides, and undermining confidence in democratic institutions).<13> Recent research identified 53 foreign influence efforts (FIEs) in 24 target countries between 2013 and 2018.<14> More than half of the identified efforts were by Russia; the Russian online influence campaign during the 2016 American presidential election is one example. Most of the remaining efforts were by China, Iran and Saudi Arabia.<15> Popular social media platforms such as Facebook and Twitter have repeatedly been used in such efforts.

While technology equips states with new levers of control, technology companies such as Facebook, Google, Apple and Amazon are also gaining political and market power, and regulators are struggling to keep pace. Large technology companies have become prominent arbiters of the flow of information: two-thirds of Americans get at least part of their news from social media platforms such as Facebook.<16> Yet technology companies have largely been able to eschew liability for the content that they host.
The business models of big technology companies rely on targeted advertising, which requires collecting unprecedented amounts of information about their users. This model favours content that spreads quickly, which in many instances is malicious, false and harmful content. A recent study of 126,000 news stories posted on Twitter between 2006 and 2017 found that true tweets took six times as long as false tweets to reach 1,500 people.<17> The study found that human behaviour was the leading cause of the spread of false information.<18>

New policies aimed at holding platforms liable for the content on their sites are a step in the right direction. France’s rapid-response law requires technology platforms to cooperate with law enforcement in the removal of false information. Germany’s Network Enforcement Law mandates that companies with two million or more users remove content deemed to be against German law within 24 hours.<19> Moreover, the U.K.’s proposed mandatory duty of care will hold firms accountable for hosting harmful content.<20> At the same time, fake-news legislation introduced in a number of countries, including Malaysia, has been used to silence dissent. Care must be taken that new legislation improves the flow of true information online while protecting individual freedoms. Such efforts must also address the human-behaviour aspects of the spread of misinformation online.

Individualized advertising and the network structure of social media risk creating echo chambers. Technology companies and social media platforms filter out content that they believe a user does not want to see, so users are exposed primarily to opinions that they already agree with.<21> While this keeps users engaged on a site, it also poses the risk of polarization, particularly around political issues.
Individuals are organizing around like-minded people online, shielded from opposing perspectives.<22> This undermines the open exchange of differing opinions that lies at the heart of democracy. It remains unknown, however, to what extent these echo chambers merely replicate offline communities. If we want to break through these virtual echo chambers, we need greater awareness of how to engage with opposing viewpoints online.

The internet is believed to have empowered individuals by creating more avenues for political participation and political voice. But a critical part of political voice is being heard.<23> The complex network of links and search-engine algorithms means that online traffic coalesces around a few dominant sources, not unlike traditional media.<24> People can write blogs to express their political views online, but that does not mean they are being read.<25> Importantly, not everyone has the skills needed to participate in online discussions, let alone shape democratic discourse.

The space for free speech, online and offline, is under threat everywhere from both the left and the right. Alarmingly, 61 percent of college students in the United States report that their campus climate prevents people from speaking freely. Further, 37 percent of respondents think it is acceptable to shout down people with opposing views, and, even more worryingly, 10 percent consider using violence to do so acceptable.<26> The tendency of the extreme left and right in the United States to prevent voices that they find offensive from being heard is counterproductive.
In many instances, differing views are not only seen as wrong but, increasingly, as ‘evil’.<27> Open dialogue and debate are needed, and a minimalist approach to regulating speech should be taken, with the exception of the incitement of violence.<28>

As individuals generate ever-increasing amounts of data online, machine learning is enabling the processing of vast amounts of information. Algorithms are permeating new areas of our lives and are increasingly being used in decision-making processes. In the public sector, algorithms are used in decisions on tuition and financial aid, criminal justice, and public-housing eligibility. In the private sector, examples of algorithmic decision making include assessments of insurance and loan eligibility. The outcomes of such decisions have significant implications for individuals, organizations and communities.

Algorithmic decision making is often favoured for its supposed objectivity, efficiency and reliability. Yet the knowledge fed into these systems, the assumptions and values embedded in the data through its collection, and the models themselves risk replicating human bias.<29> Machine-learning decision systems modelled on historic data also risk reinforcing discriminatory biases.<30> Greater transparency and accountability are needed in the application of algorithms.<31>

A pertinent example is the use of algorithm-based risk-assessment tools in the United States criminal justice system. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) has been used to assess the risk of criminal recidivism and thus to determine eligibility for parole. Research shows that COMPAS correctly predicted recidivism just 61 percent of the time.<32> Researchers also found that COMPAS produced a higher false-positive rate of re-offence for black people.
The opposite was true for white people, who were more likely to be labelled low risk and then go on to commit another crime.<33> The implications of algorithmic bias for individual lives and society are significant.

The use of algorithms and machine learning is on the rise in the private and public sectors. This means that our lives, our opportunities and our risks are increasingly shaped by algorithms that the general population does not understand. Greater transparency and accountability for the bases of algorithmic decision making are therefore crucial. This will require greater explainability, validation and monitoring, legislative change, and increased public debate.<34> It might mean greater disclosure of human involvement in algorithmic design to expose inbuilt assumptions, as well as to create more individual accountability. Transparency and monitoring of data would mean providing information on the accuracy, completeness, timeliness, representativeness, uncertainty and limitations of the data used.<35> Finally, inferences drawn from the outcomes, such as the margin of error, the rates of false positives and false negatives, and the confidence values, can and should be disclosed.<36>

Democracy is not only about individual voice or decision making by majority; it is just as much about the rule of law, representative institutions, limits on the power of individuals, and the protection of minority rights. With that in mind, we must continue to assess the impacts of digital transformations on multiple aspects of democracy and democratic processes. This paper looked at four such challenges: the exploitation of digital tools by states, the rising power of technology companies, the isolating effects of individualized social media and news feeds, and the applications of algorithmic decision making. Finally, we must consider the challenges that the digital domain presents for liberal democracy as both unique and as extensions and replications of existing issues.
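A brief illustrative note on the error-rate disclosures discussed above: the false positive and false negative rates that auditors computed for COMPAS, and that this paper argues should routinely be disclosed, are simple quantities to calculate once predictions and outcomes are available by group. The short Python sketch below shows one way such an audit could be performed. All data, group labels and the function name are hypothetical illustrations; this does not reproduce COMPAS or any real system’s results.

```python
# Illustrative only: group-wise false positive and false negative rates
# for a binary risk classifier. All data here is hypothetical.

def group_error_rates(records):
    """records: iterable of (group, predicted_high_risk, reoffended).
    Returns {group: (false_positive_rate, false_negative_rate)}."""
    stats = {}
    for group, predicted, actual in records:
        counts = stats.setdefault(group, [0, 0, 0, 0])  # [fp, fn, negatives, positives]
        if actual:
            counts[3] += 1      # person re-offended (a "positive")
            if not predicted:
                counts[1] += 1  # false negative: labelled low risk, re-offended
        else:
            counts[2] += 1      # person did not re-offend (a "negative")
            if predicted:
                counts[0] += 1  # false positive: labelled high risk, did not re-offend
    return {g: (fp / neg if neg else 0.0, fn / pos if pos else 0.0)
            for g, (fp, fn, neg, pos) in stats.items()}

# Hypothetical predictions and outcomes for two groups, A and B.
sample = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, True), ("B", False, False), ("B", True, True), ("B", False, True),
]
for group, (fpr, fnr) in sorted(group_error_rates(sample).items()):
    print(f"group {group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

In this constructed example, group A bears a high false positive rate (wrongly labelled high risk) while group B bears a high false negative rate, mirroring the kind of asymmetry researchers reported for COMPAS. The point of the sketch is that such disclosure requires no special machinery, only the will to publish the figures.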
This essay originally appeared in Digital Debates — CyFy Journal 2019.
Endnotes <1> “The State of Internet Shutdowns”. Access Now. July 2019. <2> Ibid <3> Ibid <4> Rajat Kathuria, Mansi Kedia, Gangesh Varma, Kaushambi Bagchi and Richa Sekhani. “The Anatomy of a Blackout: Measuring the Economic Impact of Internet Shutdowns in India”. Indian Council for Research on International Economic Relations. 2018. 10. <5> “Encryption and Anonymity Follow-up Report”. United Nations Human Rights Special Procedures. Research Paper 1. 2018. <6> Ibid <7> Ibid <8> Ibid <9> Susan Morgan. “Fake News, Disinformation, Manipulation Tactics to Undermine Democracy”. Journal of Cyber Policy 3, No. 1. 2018. <10> Freedom on the Net 2018: The Rise of Digital Authoritarianism. Freedom House. 2018. 22. <11> Gary King, Jennifer Pan, and Margaret E. Roberts. “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument.” Harvard University. April 2017. <12> Ibid <13> Diego A. Martin and Jacob N. Shapiro. “Trends in Online Foreign Influence Efforts”. Princeton University. 2019. <14> Ibid, 3. <15> Ibid <16> Elisa Shearer and Jeffrey Gottfried. “News Use Across Social Media Platforms 2017”. Pew Research Center. September 7, 2017. <17> Soroush Vosoughi, Deb Roy and Sinan Aral. “The Spread of True and False News Online.” Science 359. March 2018. 3. DOI: 10.1126/science.aap9559 <18> Ibid <19> Soraya Sarhaddi Nelson. “With Huge Fines, German Law Pushes Social Networks to Delete Abusive Posts”. Morning Edition: National Public Radio. Oct 2017. <20> Press Release. “UK to Introduce World First Online Safety Laws.” Department for Digital, Culture, Media & Sport, Home Office. April 2019.
<21> Kiran Garimella, Aristides Gionis, Gianmarco De Francisci Morales and Michael Mathioudakis. “Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship.” 2018. 913. DOI: 10.1145/3178876.3186139. <22> Nick Funnell. “Bubble Trouble: How Internet Echo Chambers Disrupt Society”. The Economist. Year Unknown. <23> Matthew Hindman. The Myth of Digital Democracy. Princeton University Press. 2009. 13. <24> Ibid, 17. <25> Ibid, 16. <26> Jeffrey M. Jones. “More U.S. College Students Say Campus Climate Deters Speech”. Gallup. March 2018. <27> “The Global Gag on Free Speech is Tightening”. The Economist. August 17, 2019. <28> Ibid <29> Reuben Binns. “Algorithmic Accountability and Public Reason.” Philosophy and Technology 31, Issue 4. 2018. 546. <30> Ibid <31> Ansgar Koene et al. “A Governance Framework for Algorithmic Accountability and Transparency”. European Parliamentary Research Service. PE 625.262. April 2019. 1. <32> Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner. “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals and it’s Biased Against Blacks.” ProPublica. May 23, 2016. <33> Tom Douglas. “Biased Algorithms: Here’s a More Radical Approach to Creating Fairness”. The Conversation. 2019. <34> “Understanding Algorithmic Decision-Making: Opportunities and Challenges”. Panel for the Future of Science and Technology, European Parliamentary Research Service. March 2019. <35> Nicholas Diakopoulos. “Accountability in Algorithmic Decision Making.” Communications of the ACM 59, No. 2. February 2016. 60. DOI: 10.1145/2844110 <36> Ibid
The views expressed above belong to the author(s).

Contributor

Terri Chapman

Terri Chapman is Programme Manager at ORF America. Her research interests include urban and regional development, welfare analysis, and inequality.
