The Moral and Political Dangers of Autonomous Weapons

Ryan Jenkins

The state of the art in robotics and artificial intelligence continues to advance at an accelerating clip, surprising experts and futurists. Increasingly complex and intelligent machines are changing the texture of human life as they are insinuated into more spheres of activity, from manufacturing to law enforcement to stock trading.

Military applications have historically been one of the greatest drivers of innovation. We can trace the history of warfighting—and especially its recent history—as a long arc of removing the warfighter more and more from harm’s way.1 The so-called “drones” in America’s arsenal represent the latest in this progression, though, importantly, they still require a human to make the potentially lethal decision to engage a target.

However, robotics and artificial intelligence could soon combine in lethal autonomous weapons: robots that are deployed to identify and engage humans without direct human oversight. This could represent the culmination of the historical trend: removing warfighters not just from the field of battle but from having a hand in lethal decisions altogether. If autonomous weapons become feasible, the militaries of the world will probably find their strategic and economic benefits irresistible.

It is high time that we ask ourselves: Is there something fundamentally wrong with delegating the task of killing to a machine? While autonomous weapons have been the stuff of science fiction for decades, they have only recently come under intense philosophical scrutiny. Thankfully, much of that discussion has spilled from the academy into public fora. These are questions that demand a public hearing, before we outsource what is perhaps the most significant decision a human being can make: taking a life. Abdicating our responsibility to investigate the arguments on either side of the question would be unforgivable.

Making progress in complicated moral debates depends on untangling the controversial principles at issue and the human costs at stake. This is hindered by the fact that the basic concepts involved are often conflated or sloppily applied. Here is a discussion where the expertise of society’s professional ethicists—in this case, philosophers of technology, morality, and war—is called upon. We can begin to plumb the depths of our consciences, to interrogate and clarify the moral commitments we hold dear, to determine which course of action is congruent with the hopes and goals of the human race. Among the most valuable discoveries we can make is that we are wrong about what we thought we believed. We discover this when it turns out that our beliefs have unacceptable implications. And when the implications of a belief are unacceptable, we must extirpate the original commitment that generated them. Only then can we hope to make progress and reach some agreement on which course of action is acceptable and which world we want to bring about.

Could a Killer Robot Ever Behave as Well as a Human Soldier?

The idea that machines can be conscious has captivated the public imagination recently, with talk of artificial agents,2 the prospects of duplicating a human brain in a machine3 or uploading our minds into mainframes,4 or the claim that we could be living in a computer simulation.5 All of these fantastic ideas rely on the claim that machines themselves could be conscious. Unfortunately for the starry-eyed dreamers who entertain these ideas, the majority of philosophers who focus on the nature of the mind are skeptical about these possibilities.

If they are right, then robots lack the ability to appreciate, feel, respect, discern, intuit, and the like. (It would be appropriate to use these words to describe the behavior of robots only in a metaphorical sense.) We may be comfortable using these words in this context, but surely it cannot mean the same thing for a robot to detect an object in front of it as it means for a human to do the same, considering the way humans spontaneously associate objects with the rich constellation of concepts, meanings, uses, opportunities, and so on that they represent.6

The moral domain is a paradigm of an area of life where judgment, emotion, and intuition are necessary.7 Could robots reliably navigate this domain, as controversial and subtle as it is?

Brian Orend, a military ethicist, has said, “Though war is hell, we try to make it at least a rule-bound hell.”8 Violence in war is governed by three moral principles: violence must be inflicted (1) only when it is necessary; (2) only in ways that are proportionate to the value of the end being sought; and (3) only in ways that discriminate between those who are liable to be harmed and those who are not (roughly, combatants and civilians).

Could robots ever reliably stay within these constraints? Take, for example, the requirement to discriminate between those who are liable to be harmed and those who are not: this is not as straightforward as distinguishing between people wearing a uniform and people who are not. Whether someone is a legitimate target depends on whether he or she is an active part of hostilities or retreating or surrendering. It depends on complex causal factors, such as whether he or she poses a threat, and it requires the ability, it seems, to read the intentions of humans. This is difficult enough for human soldiers—and it seems like exactly the kind of task that computers will never be able to perform.

On a similar note, many philosophers have long held that acting morally requires a kind of knowledge,9 but not the knowledge that might be found in a book or a list of simple instructions. Instead, navigating the gnarled moral dilemmas of everyday life—not to mention matters of life and death, such as those faced in war—is akin to a craft. It involves knowledge of certain basic principles, sure—a woodworker has to know how his or her tools work and how to maintain them, and a poet has to know the grammar and syntax of his or her language. But these crafts also involve an essential element of artistry, intuition, and feeling. It is the feeling an artist has of being in “flow” or the intuition a native speaker has when a word sounds wrong in a sentence, even though he or she cannot articulate why.10

This view is traceable to Socrates, the progenitor of the entire project of moral philosophy. And it is now accepted by ethicists who belong to many otherwise disparate and fiercely warring factions. It is called the “non-codifiability thesis,” after the idea that morality cannot be codified in an exhaustive list of rules that an intelligent person could follow in any context. If machines can only follow programmed lists of instructions and are bereft of the capacity to feel or intuit, then it’s altogether mysterious how they could behave like a careful and ethically sensitive human would. If that’s true, then autonomous weapons probably should not be trusted to perform as well as human soldiers in battle.11

Does this imply that we could never trust robots to wage war on our behalf? It still seems possible that robots could at least go through the motions of morality. Computers have already demonstrated an ability to outperform humans in tasks that are sophisticated and contextual. For example, consider AlphaGo or Watson, the artificial intelligences created by Google and IBM, respectively, which were able to outperform humans in domains where we once thought we held an “indomitable superiority”—and they have done this much sooner than was anticipated.12 If what ultimately matters is how well robots behave, then concerns about whether they can really understand or appreciate the significance of what they are doing are beside the point.

The history of artificial intelligence is littered with failed predictions about what computers will never be able to do: read human faces or emotions, recognize objects, trade stocks, drive cars, play chess, and so on. It is plausible that computers will one day be able to pass a moral Turing test, impersonating the ethical judgments and reasoning of humans.13 But even if robots are well-behaved, say, by killing only the “right” people, might there be something wrong with the motives of these killings or the reasons why they’re carried out?

Are Autonomous Weapons ‘Disrespectful’?

A robot’s lack of a mental life is a double-edged sword. For one, robots could not become jealous, angry, confused, fatigued, racist, and so on.14 These emotions cloud a human’s judgment and corrupt his or her actions. If we could strip a human of these emotions, it would improve his or her ability to wage wars in ways that are admirable and just. Since robots are without any of these capacities to begin with, they start with a certain innate advantage over humans. Some writers have found this to be a cause for optimism.

However, philosophers have found this emotionlessness problematic at the same time. Aside from lacking the emotion or intuition discussed above, robots cannot express mental attitudes that we think are important in our moral lives. We commonly think that whether some action is right depends on the attitudes that we hold while performing it. There is a big difference between giving a woman flowers to cheer her up and giving her flowers in order to make her boyfriend jealous, although both actions appear identical from the outside.15 One of these actions shows an attitude of respect or sympathy; the other, covetousness.

Take an example more clearly related to the conduct of war: robots cannot express respect for a person, which has been a hallmark of one strand of moral philosophy for hundreds of years. Some philosophers have argued persuasively that the most important moral principles of waging a just war derive from respect.16

The philosopher Thomas Nagel set the stage for decades’ worth of discussion and introspection about the ethics of war with his seminal paper, “War and Massacre.” Nagel argued that the difference between legitimate warfare and massacre depends on our ability to identify our adversaries and justify to them why they have been targeted. Nagel writes, “whatever one does to another person intentionally must be aimed at him as a subject . . . [and] should manifest an attitude to him rather than just to the situation.”17 This is the difference between aiming at a soldier because he or she is a soldier and spraying bullets in the general direction of an enemy, killing whomever you might hit. Deploying autonomous weapons might be “disrespectful” in this way, since they cannot manifest the appropriate attitude toward our enemies.18 After all, they have no attitudes at all.

On the other hand, it may strike some as incredible that soldiers “treat their enemies with respect,” even as they line them up in their crosshairs. It is even less plausible that we treat our enemies with respect when launching cruise missiles at unknown individuals from hundreds of miles away. Here, there seems to be no acknowledgment of the humanity of our adversaries, and yet war has routinely been fought this way for decades (or centuries, if we go back further to the advent of long-range artillery). It is difficult to see how we could respect our adversary without knowing who, in particular, he or she is.19 And yet, most of us think that these methods of waging war are morally acceptable. If that is true, and if autonomous weapons are no less respectful, then this argument based on respect can’t ground a serious moral objection to them.

Moreover, imagine that robots could one day carry out wars in ways that satisfy the rules of war better than typical human soldiers. In that case, it seems positively disrespectful to deploy humans instead of robots to do our fighting. How could it be considered respectful to wage war in a way that will predictably lead, for example, to greater carnage or more civilian deaths? Opponents of autonomous weapons may find themselves in the uncomfortable position of defending inferior human soldiers if robotics and artificial intelligence advance far enough.

There is no way for these discussions to advance without the international law community confronting its moral foundations. Is the tradition of military ethics and military law ultimately motivated by reducing the body counts in war and protecting civilians from harm? Or is its allegiance to “respect for one’s adversary” stronger? If the law is a project of structuring interactions in a community so that people with different beliefs and values can coexist and flourish, then it is not obvious where to strike the balance between respectful attitudes and total social well-being (for instance, in terms of lives lost to war). People value both, and they can clearly conflict.

The Political Dangers of Autonomous Weapons

The worries discussed above are intuitive and may sound uncannily familiar even to non-philosophers. This is evidence that, in their discussions, philosophers have done a decent job of capturing the widespread but often nebulous repulsion felt when contemplating “killer robots.” But it remains difficult to pin the worries down, and it’s ultimately not obvious that they are justified. They each face serious challenges and rely on adjudicating more fundamental disagreements. But before settling these disagreements, we should consider another possibility: that the most insidious danger of developing and deploying autonomous weapons might be something else altogether, what we might call the political dangers of autonomous weapons.

It may be a distraction to consider whether there is something wrong with the act of killing by machine, as if that act were performed in a vacuum. In fact, it is not. The most worrisome consideration could be autonomous weapons’ larger place in the sociotechnical world we have built. These weapons allow—or demand—that decision-making power be concentrated into fewer hands as part of rigid command and control. They require that new secrets be kept from the public to safeguard their operation and implementation—including, for example, kill lists whose contents and criteria are classified. The deployment of autonomous weapons could enlarge the sphere of choices that are insulated from public scrutiny, including matters of national security. But the public has a very strong claim to understanding whom the government is killing and why—in our name and with our tax dollars.

Each invention is a vote for a new kind of world. In our technophilic society, we often focus too closely on costs and benefits in terms of simple economic efficiency or, in this case, lives lost versus lives saved. But innovations cannot be fully justified by such an austere calculation. We should direct our attention equally to the implications that such technologies have for “the form and quality of human associations,”20 including the distribution of power in a society, how our institutions serve or hobble their people, and whether freedom and flourishing are enhanced at home and abroad.

Nor do you have to be a paranoid liberal, bleeding-heart pacifist, or moral philosopher to appreciate the implications of ordering life in America around the pursuit of more efficient ways to inflict harm. You could, for example, be World War II hero and Republican president Dwight D. Eisenhower: “This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence—economic, political, even spiritual—is felt in every city, every statehouse, every office of the Federal government. . . . Our toil, resources, and livelihood are all involved. So is the very structure of our society.”21

It is often said that there have been two revolutions in warfare so far.22 First, there was gunpowder, which made warfare a less personal and intimate affair and also democratized it, forcing leaders to rely on hundreds of thousands of people to wage war. By World War I, war had become depersonalized mass killing.

World War II was punctuated by the introduction of the second revolution in warfare, the atom bomb. This invention transformed war into a threat to human existence itself and dictated the path of the international community for the next fifty years, during which every human being lived under the nightmare of mutually assured destruction. (Then again, why were we so worried, with fifty thousand nuclear weapons protecting us?)

Autonomous weapons could usher in the third revolution in warfare, centralizing state power while anonymizing and spreading the capability to inflict harm. Weapons that make lethal violence targetable, unaccountable, relatively costless, and widely available could be among the worst inventions in the history of humanity. We have good reason to think their use will make war more likely.23 And they could radically destabilize the political order and introduce wanton violence and random terror in ways that are unprecedented.

Developing and deploying technology is an endorsement of a particular future. We have seen how, once such weapons are invented, they are appropriated by larger forces and used in ways we did not intend or anticipate. (Whatever your party affiliation, you do not have to reach far into America’s past to find a questionable use of military force.) The use of new weapons sets precedents that fill a normative void before robust and carefully considered laws can arrive on the scene. Technological choices become fixed in capital and material costs, and our institutions and policies are reshaped around a commitment to their retention and use. In this way, technological choices can have an inertia that is formidable, if not unstoppable. For this reason, the time to deliberate about adopting a technology is before it is introduced. There is rarely any going back. Are we sure we want this future?

Notes

  1. Bradley Strawser, “Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles,” Journal of Military Ethics 9, No. 4 (2010): 342–68.
  2. Luciano Floridi and Jeff W. Sanders, “On the Morality of Artificial Agents,” Minds and Machines 14, No. 3 (2004): 349–79.
  3. Stan Franklin, Artificial Minds (Cambridge, Mass.: MIT Press, 1997).
  4. Nick Bostrom, “The Future of Humanity,” in New Waves in Philosophy of Technology, eds. Jan Kyrre Berg Olsen, Evan Selinger, and Søren Riis (London: Palgrave Macmillan, 2009).
  5. ———, “Are We Living in a Computer Simulation?” The Philosophical Quarterly 53, No. 211 (2003): 243–55.
  6. Robert Hanna, “Kant and Nonconceptual Content,” European Journal of Philosophy (2005); Maurice Merleau-Ponty, Phenomenology of Perception, trans. Donald Landes (London: Routledge, 2013).
  7. Peter Asaro, “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-making,” International Review of the Red Cross 94 (2012): 687–709.
  8. Brian Orend, The Morality of War (Basingstoke, UK: Broadview Press, 2013).
  9. John McDowell, “Virtue and Reason,” The Monist 62, No. 3 (1979): 331–50.
  10. Hubert Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason (Cambridge, Mass.: MIT Press, 1992).
  11. Duncan Purves, Ryan Jenkins, and Bradley Strawser, “Autonomous Machines, Moral Judgment, and Acting for the Right Reasons,” Ethical Theory and Moral Practice 18, No. 4 (2015): 851–72.
  12. Ryan Jenkins and Duncan Purves, “Robots and Respect: A Response to Robert Sparrow,” Ethics & International Affairs 30, No. 3 (2016): 391–400.
  13. George Lucas, “Engineering, Ethics, and Industry: The Moral Challenges of Lethal Autonomy,” in Killing by Remote Control: The Ethics of an Unmanned Military, ed. Bradley Strawser (Oxford, UK: Oxford University Press, 2013).
  14. Ronald Arkin, “The Case for Ethical Autonomy in Unmanned Systems,” Journal of Military Ethics (2010).
  15. Agnieszka Jaworska and Julie Tannenbaum, “Person-Rearing Relationships as a Key to Higher Moral Status,” Ethics (2014).
  16. Rob Sparrow, “Robotic Weapons and the Future of War,” in New Wars and New Soldiers: Military Ethics in the Contemporary World, eds. Paolo Tripodi and Jessica Wolfendale (Basingstoke, UK: Ashgate, 2011); Thomas Aquinas, Summa Theologica, 2nd ed., trans. Fathers of the English Dominican Province (1920).
  17. Thomas Nagel, “War and Massacre,” Philosophy and Public Affairs 1, No. 2 (1972): 123–44.
  18. Rob Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, No. 1 (2007): 62–77.
  19. Jenkins and Purves, “Robots and Respect.”
  20. Langdon Winner, “Do Artifacts Have Politics?” Daedalus (1980): 121–36.
  21. Dwight D. Eisenhower, “Farewell Radio and Television Address to the American People,” January 17, 1961, delivered from the President’s Office at 8:30 p.m., in Interventionism, Information Warfare, and the Military-Academic Complex (2011).
  22. This idea is not originally mine, but I can’t trace the phrase to its original source. The claim that autonomous weapons are the third revolution in warfare was used perhaps most famously in an open letter from the Future of Life Institute and signed by Stephen Hawking, Elon Musk, Steve Wozniak, and others (Autonomous Weapons: An Open Letter from AI & Robotics Researchers, Future of Life Institute, July 28, 2015, http://futureoflife.org/open-letter-autonomous-weapons/).
  23. Leonard Kahn, “Military Robots and the Likelihood of Armed Combat,” in Robot Ethics 2.0, eds. Patrick Lin, Ryan Jenkins, Keith Abney, and George Bekey (Oxford, UK: Oxford University Press, forthcoming).

Ryan Jenkins

Ryan Jenkins is an assistant professor in philosophy at California Polytechnic State University at San Luis Obispo. His work focuses on applied ethics and normative ethics.

