Four years ago, I stood in the darkened operations center in front of a wall of blinking screens, arms crossed and squinting at video footage on one of them.
“Is this the guy?” the commander asked me for the second time, gesturing toward the figure on the screen. “Kara, is it him?” I looked over and reviewed a mental checklist of the individual’s pattern of life over more than a decade. I weighed this against his latest movements, reflected on the screen in real time. The commander took a step toward me and started again, “Kara. We are running out of time. Is this our guy?”
I had a decision to make.
Using a machine to determine the validity of the target and take lethal action is, to me, a nonstarter. But not everyone agrees on the details. Though the machines I dealt with that day were only semi-autonomous, it is not difficult to imagine a world where fully autonomous weapons are programmed to make a lethal decision. Institutions, countries, industry, and society must choose when and how to govern this technology in today’s world, where semi-autonomous weapons systems are no longer cutting-edge instruments of violence. Central to this is the reality that technologies related to lethal autonomous weapons systems (LAWS) are outpacing a cumbersome, incremental diplomatic process, causing non-traditional actors, such as the private sector, to play an increasingly significant role in norms building. Against the backdrop of an era of strategic competition, the international community must account for these new stakeholders and their potential impact on international peace and security.
United Nations GGE: Current state of play, key actors, and objectives
The constantly evolving nature of foundational LAWS technology will inform attempts to govern its use. Technologies that underpin these systems –– like artificial intelligence (AI) and robotics –– are advancing rapidly. From automatic parallel parking to the US Navy’s Aegis combat system, uses of autonomy span the breadth of civilian and military life. This reality shows no signs of slowing down. Corporations and governments alike are pouring more and more resources into these technologies for a variety of uses. For instance, Russia’s private sector AI investments will likely hit US$500 million by 2020, and just last year corporations like Google snapped up AI start-ups for hundreds of millions of US dollars, with a total of US$21.3 billion spent in AI-related mergers and acquisitions in the United States.<1> Similarly, China is aiming for its AI industry to be worth US$150 billion by 2030.<2> In explicit defence-related investments, the US Defense Advanced Research Projects Agency announced a US$2 billion effort to develop the “next wave” of AI technologies in 2018.<3>
While this technology is diffuse across a range of industries, it can also be purposed for war. In April of last year, former Alphabet Executive Chairman Eric Schmidt testified before the US Congress that AI will “profoundly affect military strategy.”<4> Furthermore, as former US Vice Chairman of the Joint Chiefs of Staff General Paul Selva told a DC-based think tank in August 2016, the “notion of a completely robotic system that can make a decision about whether or not to inflict harm on an adversary is here. It’s not terribly refined. It’s not terribly good. But it’s here...”<5> The superhuman reaction time of such decision-making machines could also increase the risk of accidents and prove deadly on a catastrophic scale, especially if the pace of future battle exceeds human decision-making capability.<6>
In light of such a future, attempts at governance under the framework of the 1980 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW) are underway. At the United Nations Office in Geneva in November 2013, states parties anticipated a need to examine the potentially destructive effects of this technology during a meeting of the High Contracting Parties to the CCW.<7> A few years later, the CCW instituted a series of Group of Governmental Experts (GGE) meetings for further substantive discussions, beginning in 2017.<8> This “expert subsidiary body of the CCW” signaled a rise in priority of the “questions related to emerging technologies in the area of LAWS.”<9> The latest CCW meeting of High Contracting Parties convened again in Geneva in November this year, when they decided to hold seven more days of meetings in 2019.<10>
Diplomatic progress is thus open-ended, with a plurality of actors taking up various mantles of LAWS governance. Key among these are states, non-governmental organisations, international organisations and consortiums, industry, and academics. As a general overview meant only to demonstrate potential mechanisms of governance as they exist today, these players can be broadly categorised into states and other actors:
States: States include all permanent members of the UN Security Council and additional countries, for a total of 125 member states (High Contracting Parties to the CCW), with each country typically acting in its own security interests. States, and groupings of states like the Non-Aligned Movement (NAM) and the African Group, coalesce around multiple positions. Some hold that no new legally binding measures on LAWS are needed because current international law suffices. (Notably, Russia, the United States, Israel, France, and the United Kingdom all reject a ban.)<11> Other states, like Germany, support a non-legally binding political resolution, exemplified in their joint call with France for a “political declaration” at the April 2018 GGE meetings in Geneva.<12> Then there are those countries that advocate for a legally binding treaty to ban fully autonomous weapons, like Pakistan, Ecuador, Egypt, the Holy See, Cuba, Ghana, Bolivia, Palestine, Zimbabwe, Algeria, Costa Rica, Mexico, Chile, Nicaragua, Panama, Peru, Argentina, Venezuela, Guatemala, Brazil, Iraq, Uganda, Austria, Colombia, and Djibouti.<13> Still other states require more information to make definitive judgements on LAWS governance.
Other actors: The other broad set of participants comprises non-governmental organisations and activists who approach potential LAWS governance from a humanitarian perspective, along with international organisations, entities, and independent actors with a variety of corresponding and contrasting motives. For example, the NGO Campaign to Stop Killer Robots monitors policy developments on the call for a ban and helms the general movement.<14> Agencies such as the International Committee of the Red Cross, the UN Institute for Disarmament Research, and the UN Office for Disarmament Affairs, along with entrepreneurs, “industry associations,” academics there to observe, ethicists attending to inform, and roboticists, all represent a plurality of agendas and interests.<15>
Uniters or dividers: Principles and norms
While certain principles unite actors in attempts to establish norms (or “standards of appropriateness”) –– like involving a human in lethal decision-making –– key impediments to this cohesion will frustrate any process to codify mutual restraint into law.<16>
One of these uniting principles is that international humanitarian law (IHL) applies to the use of LAWS. This effectively provides the overarching framework for CCW meetings.
More explicitly, ten possible “guiding principles” were enumerated in the consensus document of recommendations from the August 2018 GGE meetings. Key among these is the affirmation of the importance of human responsibility “for decisions on the use of weapons systems,”<17> articulated as follows: “Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system.”<18>
This consensus on some form of human involvement in the decision to take a life can encourage countries to collaborate on the more specific questions it raises. France and Germany, for instance, aim to work together on the imperative to define meaningful levels of human control and to “explore” the field of human-machine interaction.<19> Dueling ideas of “meaningful level of human control” versus “appropriate human judgement” versus “appropriate human involvement”<20> may dictate the agendas of future meetings, given contested gradations of human involvement, but virtually no one in the diplomatic process argues that a human somewhere “in/on the loop” is unnecessary.
Therefore, the human factor, along with the overarching principles of IHL,<21> can serve as starting points to refine and unite future collaborative efforts on governing this space.
Conversely, there are areas of contention that divide actors involved in the GGE meetings and thus throw up formidable obstacles to effective governance. Even commonly recognised objectives can act as divisive forces in attempts to govern, such as the need to establish a common definition of LAWS in order to begin thinking about implementation and restraint. While not prohibitive, this has been a glaring hurdle to the development of the normative process from which legal analysis and procedures can flow. The Center for a New American Security’s Paul Scharre contends that people fall into multiple schools of thought on the technical sophistication of autonomous weapons. Some think of autonomous weapons as agile and adaptive systems that use machine learning, others see them as any weapon that uses any type of autonomy, and yet others imagine machines with the same cognitive abilities and intelligence as humans.<22> Further, some GGE participants will not even agree on the existence or the utility of a definition. According to Scharre’s account of an interview with Human Rights Watch Arms Division Director Steve Goose, a working definition would actually hinder efforts to restrict autonomous weapons by jumpstarting a “conversation of potential exceptions.”<23> By this logic, creating too limiting a definition may inadvertently exclude possible subjects or components of legitimate regulation.
A second speed bump for the CCW is disagreement on the morality of using or not using such weapons. For example, a general moral polarity is most visible concerning one element of LAWS safety: individuals and groups cannot agree whether these weapons will make warfare safer (due to their precision) or more dangerous. For instance, United States Army Training and Doctrine Command’s Tony Cerri asked an audience in 2018: “Is it immoral not to rely on certain robots to execute on their own … given that a smart weapon can potentially limit collateral damage?”<24>
A third factor that inhibits the potential for effective governance through the existing process is the fact that the CCW’s hallmark is its status as a consensus-based organisation.<25> The High Contracting Parties have to agree to fund the endeavour and to participate in the next meeting itself, which allows any one player to impede the process. While the benefits of a consensus on LAWS governance are legion, this structure also creates a higher bar for agreement and implementation. Something as small as Russia’s insistence on reducing the 2019 GGE meetings from ten to seven days at the November 2018 Meeting of the High Contracting Parties to the CCW highlights this hurdle.<26>
Where do we go from here? New players in the state-led system
Given this backdrop, elements of the private sector can be leveraged strategically in the norms-building conversation –– not as new leaders, but as an option for maintaining relevant governance frameworks despite a shifting technological landscape.
As with most emerging tech, the technology that underpins LAWS is far outpacing efforts to govern it. In certain instances, uneven rates of progress between new developments and accompanying guidelines can result in an outdated policy apparatus failing to keep up with malicious use. Recent examples include the reported use of facial recognition at music venues without individual consent and commercial drone near-misses with both military and civilian aircraft.<27> With LAWS, the uniquely revolutionary nature of AI may alter the institution-building processes brought to bear so far. Its myriad components and applications extend well beyond the military to social, economic, humanitarian, diplomatic, and medical uses. The potential wide-ranging and salutary societal effects of continued AI development are widely acknowledged beyond national borders, as articulated in the Russian Federation’s position paper to the CCW in 2017:
“The difficulty of making a clear distinction between civilian and military developments of autonomous systems based on the same technologies is still an essential obstacle in the discussion on LAWS. It is hardly acceptable for the work on LAWS to restrict the freedom to enjoy the benefits of autonomous technologies being the future of humankind.”<28>
This diffuse nature of AI, the growing concentration of engineering talent in the private sector, and that sector’s demonstrated technical breakthroughs and influence change the nature and impact of stakeholders, placing a greater emphasis on today’s purveyors of AI development –– civilian private industry.<29>
For instance, in 2017 AI company DeepMind’s AlphaZero algorithm learned three different strategy games, including chess, without any human training data.<30> In 2018, Chinese company Tencent made similar progress when its FineArt software defeated the human champion of Go, another strategy game.<31> Private company SpaceX achieved breakthroughs with its reusable rockets, with implications for blending commercial and defence technologies. Owing to its role in developing the world’s next transformative technology faster than governments can regulate it, the civilian private sector has already begun to articulate standards for the use of these technologies, absent any overarching strictures.
Such a change also reflects a deeper trend of private actors gaining more influence in national security discourses, as demonstrated from the United States to South Korea in 2018. Notably, a cadre of Google engineers voicing concerns over the use of object recognition algorithms for the Pentagon’s Project Maven, and Google’s subsequent decision to discontinue its contract, demonstrates the pull of these non-traditional actors. AI researchers’ effective boycott of the South Korean university KAIST, lifted only once it pledged not to develop “killer robots,” is a related demonstration of influence.<32> The Future of Life Institute’s open letters to the CCW, consolidated by University of New South Wales Professor Toby Walsh, also highlight this heightened involvement.<33> The signatures of influential AI leaders like DeepMind’s Mustafa Suleyman and Tesla CEO Elon Musk on the 2017 open letter to the CCW, imploring the High Contracting Parties to “find a way to protect us all” from the “dangers” of autonomous weapons, portend future private sector engagement.<34> As such, the independent establishment of AI principles and ethics by companies like Google and Microsoft could offer standard-setting measures or lay the groundwork for new principles to work into LAWS governance.<35> Even if these principles are not directly applicable to the use of weapons, or their sources are dubious (given, for example, the poor track record of US tech company self-regulation), giving these companies a seat at the table in the norms-building process would leverage the lessons learned from their technological breakthroughs and attendant standard-setting measures.
The combination of the disruptive impact of AI and the rise of private sector engagement can further influence the norms-building process, and even the common framework CCW parties take for granted, through “morally responsible engineering” and non-traditional private partnerships and consortiums.<36>
In the first case, civilian industry’s human capital standouts can volunteer to contribute to the design stage of normative frameworks. As demonstrated above, civilian industry employees are increasingly seeking involvement in global ethics debates, and using their expertise to form the “technical architecture” of algorithms that underpin autonomous weapons systems is one way to keep up with the accelerating pace of unregulated technological advancement. As Harvard researchers noted in 2016:
“Programmers, engineers, and others involved in the design, development, and use of war algorithms might take diverse measures to embed normative principles into those systems. The background idea is that code and technical architectures can function like a kind of law.”<37>
Civilian technical talent can help uniformed researchers inject “ethics” early on in the algorithms’ design phase to help preserve norms as the technology develops –– like the importance of human control in lethal decision-making. In other words, civilian engineers can work side-by-side with AI scientists and government personnel to promote Georgia Tech roboticist Ronald Arkin’s concept of “embedding ethics” into the design of the system to potentially constrain lethal action.<38> Notable in this regard are the ethics guidelines being drafted by the EU’s high-level expert group on AI, due for release in March 2019, which will use this “ethics by design” principle as a test case for the rest of the international community.<39> The CCW’s 2018 Guiding Principles already offer a general blueprint for these efforts, which could benefit from private sector engagement: “Risk assessments and mitigation measures should be part of the design, development, testing and deployment cycle of emerging technologies in any weapons systems.”<40>
In short, the engineering cadre responsible for the design can use the CCW’s two key principles of IHL and human responsibility to determine the best “programming policies” as early as possible in the algorithm’s life cycle, prior to fielding. Speculation that developers can programme certain narrow uses of AI to comply with an agreed-upon code of ethics –– whether IHL, the Geneva Conventions, or a set of practices or standards agreed upon by a group of states operating under a common normative framework (e.g. NATO) –– merits further research in both defence industry and civilian labs.
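To make the idea of “embedding ethics” at the design stage more concrete, the sketch below is a purely illustrative toy in Python, loosely in the spirit of Arkin’s embedded-ethics concept cited above. Every name in it (EngagementRequest, RULES, evaluate) is hypothetical, the thresholds are arbitrary, and it describes no fielded system and no CCW requirement; it simply shows how veto-style constraints, including retained human responsibility, could be checked before any action is permitted.

```python
# Illustrative sketch only: a toy constraint check in the spirit of "embedding
# ethics" at design time. All names and thresholds are hypothetical; real
# constraints under IHL are far more complex than this.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class EngagementRequest:
    target_identified_as_combatant: bool   # positive identification (distinction)
    expected_collateral_harm: float        # 0.0-1.0 proxy score (proportionality)
    human_authorisation: bool              # a human remains responsible for the decision


# Each rule maps a request to pass/fail; any failed rule vetoes the action.
RULES: List[Tuple[str, Callable[[EngagementRequest], bool]]] = [
    ("human responsibility retained", lambda r: r.human_authorisation),
    ("target positively identified", lambda r: r.target_identified_as_combatant),
    ("collateral harm within threshold", lambda r: r.expected_collateral_harm < 0.2),
]


def evaluate(request: EngagementRequest) -> Tuple[bool, List[str]]:
    """Return (permitted, failed_rules). Any failed rule blocks the action."""
    failed = [name for name, rule in RULES if not rule(request)]
    return (not failed, failed)


if __name__ == "__main__":
    permitted, failed = evaluate(
        EngagementRequest(
            target_identified_as_combatant=True,
            expected_collateral_harm=0.5,
            human_authorisation=True,
        )
    )
    print("Permitted:" if permitted else "Blocked by:", failed or "all rules passed")
```

Real “programming policies” would require far richer representations of distinction and proportionality than a handful of boolean checks, but even a toy like this illustrates why such constraints are easier to build in at the design phase than to retrofit after fielding.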
Secondly, the private sector can also help governance keep pace with LAWS-related technology through non-traditional partnerships between private companies, whose efforts states can examine and perhaps even replicate. Such interactions and forums are on the rise and, if successful, have the potential to influence states’ own approaches to developing regulations. For example, Microsoft President Brad Smith’s proposal for a Digital Geneva Convention urges governments to create “new international rules to protect the public from nation state threats in cyberspace.”<41> In early 2018, groups of companies like Siemens, IBM, and T-Mobile devised and signed a charter to set standards for countering hacking attacks, with calls for governments to take responsibility for digital security as well.<42> The rise of digital terrorist propaganda across platforms, and the consortium major technology companies formed to counter it, gave way to a similar effort against computational propaganda in Europe, with companies like Facebook and Twitter signing a voluntary code of conduct drawn up by the European Union.<43> In some cases, these companies brought in and trained smaller tech companies, international organisations, and NGOs to disrupt threats with new technology.<44> Already, different forms of potentially harmful technologies –– like the gene-editing tool CRISPR and facial recognition –– are evolving more quickly than agreements on their ethical use, spurring similar self-policing efforts to both stem the tide and set new standards of “appropriateness” with government buy-in.<45>
The LAWS norms-building process must make room for the rise of such agile, unconventional policy responses. It should account for increasingly significant players, particularly in a crowded milieu of great power competition and multipolarity, where some actors harness emerging technologies more quickly than others and aggravate security calculi.
While the private sector can offer blueprints to help govern global spaces, the current GGE process is working to acknowledge this impact in constructive ways. According to the Campaign to Stop Killer Robots’ Mary Wareham, AI scientists, experts, engineers, roboticists, and computer scientists are already “deeply involved” in the UN GGE meetings.<46> Outgoing Chair of the GGE meetings, Amandeep Gill, rightly highlighted the necessity of keeping policy “tech-neutral” and in line with technical developments to avoid constant revision, and of opening the conversation to multiple stakeholders, including industry.<47>
LAWS technology is evolving so fast, and with such broad implications for general society, that a serious recalibration of current institution-building procedures is warranted. The existing state-led system must make allowances for the impact of a formidable engine of these advances –– the civilian private sector. Its role in the design and development of technology used in or adapted for armed conflict will matter increasingly in wartime scenarios, such as the one I experienced on deployment four years ago. In fact, the key to a new framework may well rest with the private sector developers who deliver the next steps in AI progress. The earlier we realise this, the better.
This essay originally appeared in The Raisina Files
Endnotes
<1> Michael Horowitz, Elsa B. Kania, Gregory C. Allen, and Paul Scharre, “Strategic Competition in an Era of Artificial Intelligence”, CNAS; and “Google Leads in the Race to Dominate Artificial Intelligence”, The Economist, December 7, 2017.
<2> Ibid.
<3> “DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies”, Defense Advanced Research Projects Agency.
<4> “Statement of Dr. Eric Schmidt to the US House Armed Services Committee on April 17, 2018”, US Congress, April 17, 2018.
<5> Aaron Mehta, “No Terminators, but Autonomous Systems Vital to DoD Future”, Defense News, August 8, 2017.
<6> Paul Scharre, “A Million Mistakes a Second”, Foreign Policy, September 12, 2018.
<7> Meeting of the High Contracting Parties to the CCW, “Report of the 2014 informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS)”, CCW/MSP/2014/3, November 13-14, 2014. Organisations interested in the humanitarian impact of these weapons, like the International Committee of the Red Cross (ICRC), attempt to define these systems with an eye toward regulation. The ICRC categorises LAWS as “weapons that can independently select and attack targets, i.e. with autonomy in the ‘critical functions’ of acquiring, tracking, selecting and attacking targets.”
<8> Fifth Review Conference of the High Contracting Parties to the CCW, “Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS)”, CCW/CONF.V/2, December 12-16, 2016.
<9> Patrick Tucker, “Russia to the United Nations: Don’t Try to Stop Us From Building Killer Robots”, Defense One, November 22, 2017.
<10> Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, “Final Report”, CCW/MSP/2018/CRP.1, November 21-23, 2018.
<11> Elsa B. Kania, “China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems”, Lawfare, April 20, 2018.
<12> “Statement of France and Germany to the Meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems Geneva, 9 to 13 April 2018”.
<13> “Country Views on Killer Robots”, Campaign to Stop Killer Robots, April 13, 2018.
<14> Ibid.
<15> Sono Motoyama, “Inside the United Nations’ Effort to Regulate Autonomous Killer Robots”, The Verge, August 27, 2018. This is a non-exhaustive list meant to illustrate the variety of participants in the CCW framework.
<16> Martha Finnemore and Kathryn Sikkink, “International Norm Dynamics and Political Change,” International Organization 52, no. 4 (1998): 887-917. doi:10.1162/002081898550789
<17> GGE of the High Contracting Parties to the CCW, “Report of the 2018 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems”, CCW/GGE.1/2018/3, April 9-13 and August 27-31, 2018.
<18> Ibid.
<19> See note 12.
<20> Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton, 2018), 347 and 358.
<21> See note 17. This paper uses the recommendations for Guiding Principle #1, that IHL applies as a default consensus position for the CCW meetings.
<22> See note 20, 346-347.
<23> Ibid, 349.
<24> Haiyah Kofler, “Exploring the Humanity of Unmanned Weapons,” C4ISRNET, October 10, 2018.
<25> See note 17. “The Group shall conduct its work and adopt its report by consensus which shall be submitted to the 2018 Meeting of the High Contracting Parties to the Convention.”
<26> See note 10.
<27> Sopan Deb and Natasha Singer, “Taylor Swift Said to Use Facial Recognition to Identify Stalkers,” The New York Times, December 13, 2018; Craig Whitlock, “FAA Records Detail Hundreds of Close Calls between Airplanes and Drones,” The Washington Post, August 20, 2015.
<28> GGE of the High Contracting Parties to the CCW, “Examination of various dimensions of emerging technologies in the area of lethal autonomous weapons systems, in the context of the objectives and purposes of the Convention”, CCW/GGE.1/2017/WP.8, November 13-17, 2017.
<29> Rachel Feintzeig, “U.S. Struggles to Draw Young, Savvy Staff,” The Wall Street Journal, June 11, 2014.
<30> Paul Scharre and Michael Horowitz, “Artificial Intelligence: What Every Policymaker Needs to Know”, CNAS.
<31> Tom Simonite, “Tencent Software Beats Go Champ, Showing China’s AI Gains,” Wired, January 23, 2018.
<32> Andrea Shalal and John Stonestreet, “AI Researchers End Ban after South Korean University Says No to ‘Killer Robots,’” Reuters, April 9, 2018.
<33> “Killer robots: World’s top AI and robotics companies urge United Nations to ban lethal autonomous weapons”, The Future of Life Institute, August 20, 2017.
<34> “An Open Letter to the United Nations Convention on Certain Conventional Weapons”, The Future of Life Institute.
<35> Sundar Pichai, “AI at Google: Our Principles”, Google, June 7, 2018, and “The Future Computed: Artificial Intelligence and Its Role in Society”, Microsoft Green Blog, March 8, 2018.
<36> “Autonomous Weapon Systems: The Need for Meaningful Human Control”, Adviesraad Internationale Vraagstukken (Advisory Council on International Affairs).
<37> Dustin A. Lewis, Gabriella Blum, and Naz K. Modirzadeh, “War-Algorithm Accountability”, Harvard Law School Program on International Law and Armed Conflict, 95.
<38> Ronald C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture”, Proceedings of the 3rd International Conference on Human Robot Interaction - HRI 08, 2008.
<39> “European Commission Coordinated Plan on Artificial Intelligence”, European Commission, December 7, 2018.
<40> See note 17.
<41> “Advancing a Digital Geneva Convention to protect cyberspace”, Microsoft Policy Papers.
<42> Oliver Sachgau and Jackie Simmons, “Siemens Teams With Airbus to IBM in Cyberattack Defense Plan,” Bloomberg Business, February 16, 2018.
<43> “EU Steps up Fight against ‘Fake News’ Ahead of Elections,” AP News, December 5, 2018.
<44> “Progress on Hash-Sharing and our Partnership Structure”, Global Internet Forum to Counter Terrorism.
<45> See note 27 and Antonio Regalado, “Exclusive: Chinese scientists are creating CRISPR babies”, MIT Technology Review, November 25, 2018.
<46> Mary Wareham, Paul Scharre, Elsa B. Kania and Kara Frederick, “Analysis on UN Certain Conventional Weapons Convention”, CNAS, June 1, 2018.
<47> See note 15.
The views expressed above belong to the author(s).