To Let Live and Make Die: Human Ethics and Moral Machines

Law enforcement personnel investigating the crime scene of Micah Johnson, Dallas, Texas. Image via Wikipedia.

Sylvester Johnson

In July of 2016, the Dallas Police Department set a precedent when officers deployed a Remotec robot to detonate one pound of C4 explosive in order to kill an African-American suspect, Micah Johnson, who was wanted in the shooting deaths of five police officers. Just two months earlier, a human driver, Joshua Brown, died in a Florida traffic collision while his Tesla sedan was employing autonomous “Autopilot” driving technology. For decades, science fiction films and books have conjured futuristic scenarios of robots and other intelligent machines killing humans or running amok. So these events of 2016 came as a shock to many casual observers accustomed to associating intelligent machines with fiction. For those who have been following the dizzying pace at which AI robots and machine intelligence are developing, however, they were less surprising. For everyone, the fact that intelligent machines have now begun to play a role in human death opens a new chapter in the history of relations between humans and machines. It exposes the fragility of common assumptions about moral agency. And it implies that the domain of religion and ethics will have to be theorized anew in an age of intelligent machines.[1]

The robotic killing of Micah Johnson raises fundamental questions about the racial politics of policing. Numerous studies have demonstrated a drastic disparity in the scale of lethal violence that US police departments direct at Black civilians. Consider, by contrast, that municipal police pursuing the White shooting suspect Dylann Roof swaddled him in a bullet-proof vest for protection after Roof murdered, en masse, a group of Black parishioners in South Carolina. Johnson, furthermore, was by no means the first suspect to create a violent stand-off with municipal police, but he will now stand in memory as the first US citizen on domestic soil to be targeted and killed by a robot as an exercise of state power.[2] So in many ways, his experience is quite different from that of Joshua Brown, a technologist and Tesla enthusiast whose death is not contextualized by racial politics or criminality.

Despite their differences, however, these two cases share overwhelming significance for understanding human-machine relations within the context of life-and-death. One fundamental question concerns agency: who actually caused the deaths of these men? Intelligent machines? Humans exerting agency through machines? The defense industry research that gave rise to the Remotec robot has also produced a host of other weaponized intelligent machines for warfare that challenge easy conventions for discerning agency in a killing operation. Among the more impressive is Lockheed Martin’s Joint Air-to-Ground Missile (JAGM), which features “fire and forget” technology. With this system, a human fighter pilot selects a target (such as an enemy tank) from several miles away, fires the missile, then pursues another mission or returns to base, able to ‘forget about it.’ The missile does everything else on its own, using a camera and numerous other sensors to see and recognize the target and to navigate its surroundings. The JAGM autonomously travels to its target, adapts its course to avoid obstacles, dynamically pursues the target should it attempt to escape, and even selects precisely where to strike to ensure an optimal kill.
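
To see how little human agency remains in the loop after launch, consider a minimal, purely illustrative sketch of a “fire and forget” pursuit routine in Python. This is not Lockheed Martin’s guidance software; every class, name, and number here is invented for illustration. The point is simply that, once fired, the loop senses, re-aims, and closes on an evading target with no further human input.

```python
import math
from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    vx: float
    vy: float

    def move(self, dt: float) -> None:
        # The target maneuvers on its own; the weapon must adapt without help.
        self.x += self.vx * dt
        self.y += self.vy * dt

def pursue(mx: float, my: float, speed: float, target: Target,
           dt: float = 0.1, strike_radius: float = 1.0) -> int:
    """Closed-loop pursuit: after launch, no human input is ever consulted."""
    ticks = 0
    while True:
        target.move(dt)
        dx, dy = target.x - mx, target.y - my
        dist = math.hypot(dx, dy)
        if dist <= strike_radius:
            return ticks  # terminal phase: aim-point selection would happen here
        # Re-sense and re-aim at the target's current position each tick.
        mx += speed * dt * dx / dist
        my += speed * dt * dy / dist
        ticks += 1

evading_tank = Target(x=100.0, y=50.0, vx=-3.0, vy=2.0)
print(f"intercept after {pursue(0.0, 0.0, 40.0, evading_tank)} ticks, zero human input post-launch")
```

Nothing in the loop consults a human after the launch call; whatever agency is exercised mid-flight belongs, on any plain reading, to the running program.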

One should ask to what degree such a system is different from a commanding officer telling a soldier to kill an enemy target and the soldier fulfilling the mission as a thinking, deciding subject who can innovate on demand to execute the order. Is a thinking, improvising soldier not agential, despite fulfilling someone else’s commands? What about a thinking, improvising missile? Can a machine really be a killer? It would certainly seem fallacious to deny any agency at all to such an intelligent machine. Those skeptical of recognizing any agency in intelligent machines are quick to point out that these machines are merely following a program. But this only sidesteps the issue, given that intelligent machines are programmed to think, to interpret their environment, and to act on the external world by relying on their own faculties.

The deaths of Johnson and Brown also reveal a gaping chasm between the capacity of AI technology and existing legal paradigms. The current US legal system does not recognize any legal subjecthood in machines, so presently a machine cannot be held legally responsible for a human death, regardless of how intelligent and agential it might actually be. One should not take this to mean that legal doctrine cannot be quickly adapted to recognize legal personhood in non-human entities. Corporations have long enjoyed legal subjecthood as fictive persons in US law, a tradition that has reached so far as to assert that for-profit corporations—again, non-human entities—can be recognized as persons under the Religious Freedom Restoration Act (Burwell v. Hobby Lobby Stores, Inc. [2014]). If corporations can be recognized as such, surely it is within reason to consider that AI entities might come to occupy such a status.

Of course, these cases also raise far-reaching concerns unique to each. With respect to Johnson’s killing, there is the intersection of weaponized AI and necropolitics—the crafting of policies for death and destruction toward specific populations, with ever-expanding means of execution. It is no coincidence that the Dallas Police Department used a military robot to kill Johnson. Police departments have become militarized since the 1960s, when the Federal Bureau of Investigation began retooling the nation’s municipal police to help destroy Black political movements. This initiative gave rise to SWAT (Special Weapons and Tactics) units and created a pipeline through which the Pentagon’s killing technologies and military tactics became the everyday weaponry and methods that police use against civilians, particularly Blacks. Militarizing US police departments began as a means of repressing Black politics. It quickly became the mainstream paradigm for engaging Blacks on the basis of their racial status, with no regard for their politics.

Adopting the military’s intelligent killing machines for domestic policing of civilians, in this light, is a rather predictable outcome of design, intention, and racial public policy. And if history is a reliable guide, the near future of domestic policing will become deadlier and more asymmetric as intelligent killing machines become more advanced. It seems equally certain that such aggressive policing will overwhelmingly target African-Americans, Latinx people, Muslims, and Mexican immigrants.

The accidental death of Brown also raises troubling quandaries for corporate ethics. For instance, is it ethically responsible for auto manufacturers to engineer machines to transport humans autonomously when these machines might cause accidents that prove fatal for human occupants? Immediately following Brown’s death, numerous critics questioned Tesla Motors’ decision to commercially deploy its Autopilot technology, suggesting the technology was not sufficiently developed. It would be naïve not to recognize, however, that even the most highly refined and rigorously tested autonomous driving technology will be marred by at least occasional human deaths. The best technology will never create a perfect universe free of all accidents.

Nissan autonomous car prototype (using a Nissan Leaf electric car) exhibited at the Geneva Motor Show 2014. Image via Wikipedia.

Autonomous driving technology, however, promises to create a driving experience that makes for a better and safer world. In September 2015, The Atlantic reported that autonomous cars, once widely adopted, are projected to eliminate 90% of traffic fatalities—that is, an average of 300,000 human lives saved per decade in the United States.[3] The money saved by avoiding property damage would be equally staggering. As the New York Times has reported, however, any widespread adoption of autonomous cars will require these intelligent machines to make critical, complex decisions in a potential accident to minimize death and destruction.[4] For example, if a child chasing a ball were suddenly to enter the car’s path, the car would have to decide instantly how to minimize injuries and fatalities. Should it hit the child and save the human passengers? (Given a greater number of passengers, this might seem favorable.) Should the car swerve into a tree to avoid killing the child, thus risking the lives and safety of the passengers? Regardless of the scenario, the car would have to be engineered to decide, as there would be no time to consult a human decision-maker.
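
The kind of computation at issue can be sketched in a few lines of Python. This is a deliberately crude, hypothetical illustration, not any manufacturer’s actual planning code: the candidate maneuvers, harm estimates, and the 1000-to-1 weighting of fatalities over injuries are all invented. What it makes visible is that the “decision” is fixed in advance by whoever chooses the weights.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_fatalities: float  # probability-weighted deaths if this maneuver is taken
    expected_injuries: float    # probability-weighted serious injuries

def least_harm(options: list[Maneuver]) -> Maneuver:
    # The weighting below encodes an ethical judgment: one expected fatality
    # counts as much as 1000 expected injuries. Someone must choose that number.
    return min(options, key=lambda m: 1000 * m.expected_fatalities + m.expected_injuries)

# Hypothetical split-second scenario: a child runs into the road.
choices = [
    Maneuver("brake hard, stay in lane", expected_fatalities=0.6, expected_injuries=0.2),
    Maneuver("swerve into tree", expected_fatalities=0.3, expected_injuries=1.8),
]
print(least_harm(choices).name)  # -> "swerve into tree" under these invented weights
```

Run as written, the sketch chooses to swerve into the tree: under these invented weights, the passengers’ risk is deemed worth the child’s life. Changing a single coefficient reverses that verdict, which is precisely the ethical burden the engineers carry.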

Autonomous cars are already capable of making such decisions using algorithms that weigh a vast number of factors in less time than a human can blink. In just a few years, they will do so with even greater sophistication. But is this not agency? Does this not mean that, in the event the car hits and kills the child in our hypothetical example, the autonomous car should be treated as an agent in a court of law? And if these intelligent machines were to become legal agents (in this case, the driver who killed the child), would this mean the manufacturers could not be held legally accountable? Such a scenario would upend a long history of holding manufacturers responsible for deaths that ensue from the design of their products.

As far-fetched as this example might sound, the cultural moment of such quandaries is already upon us. In late 2015, Google submitted a written request to the National Highway Traffic Safety Administration (NHTSA), asking that the federal body recognize a self-driving vehicle as a “driver” in the sense that humans have been recognized for purposes of certifying vehicle safety. In a twenty-three-page response issued in February 2016, the NHTSA actually agreed, indicating that the rapid and considerable advance of autonomous cars has rendered obsolete traditional concepts of what constitutes a driver. Given Google’s design of a self-driving system (SDS) that lacks a steering wheel, brake pedal, and other standard devices for human-occupant drivers, the NHTSA declared that “it is more reasonable to identify the driver as whatever (as opposed to whoever) is doing the driving. In this instance…the SDS is actually driving the vehicle.”[5] The fact that a federal administrative agency has recognized an intelligent machine as bearing a status (that of a driver) previously reserved only for humans is an important indication of what is to come.

Finally, these developments are beginning to challenge experts to consider how intelligent machines might be responsibly recognized and socialized within the realm of moral reasoning and ethics. During the summer of 2016, Northwestern University researcher Ken Forbus successfully engineered machine intelligence to learn moral reasoning through analogy. His AI system builds on psychological theory to teach machines to solve problems ranging from visual challenges to moral quandaries by using a surprisingly small number of training examples—this contrasts with “deep learning,” which requires massive data sets for training. Similar approaches have proved successful in equipping intelligent machines to solve difficult problems such as those on college placement exams. By including morality as a machine-learning objective, Forbus’s research has shown that the cognitive reach of machines is ever-advancing into territory long cordoned off as uniquely human. As a result, it now appears that defenders of human exceptionalism, such as the philosopher of mind John Searle, will be left with empirically falsified hypotheses.
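
To give a flavor of learning from a handful of precedents rather than from massive data sets, here is a toy case-based sketch in Python. It is emphatically not Forbus’s system, which performs rich structure-mapping over relational descriptions; the cases, features, and similarity measure below are invented. It shows only the core move of analogical judgment: retrieve the most similar precedent and transfer its moral label.

```python
# A handful of hand-labeled moral precedents; features and labels are invented.
CASES = [
    ({"harms_person": True,  "consented": False, "saves_others": False}, "wrong"),
    ({"harms_person": True,  "consented": True,  "saves_others": False}, "permissible"),
    ({"harms_person": False, "consented": False, "saves_others": True},  "good"),
]

def similarity(a: dict, b: dict) -> int:
    # Count shared feature-value pairs: a crude stand-in for structural analogy.
    return sum(1 for key in a if key in b and a[key] == b[key])

def judge(new_case: dict) -> str:
    # Retrieve the most analogous precedent and transfer its moral label.
    precedent, verdict = max(CASES, key=lambda case: similarity(new_case, case[0]))
    return verdict

# A novel case the system has never seen: harm without consent, but lives saved.
print(judge({"harms_person": True, "consented": False, "saves_others": True}))
```

With only three precedents, the toy already returns a verdict for a novel case; the question the essay presses is what it would mean to treat such a verdict as moral reasoning.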

In this new age of intelligent machines, religious studies experts especially will be faced with a paradigm shift. They have long approached problems of ethics and morality by assuming that moral communities are constituted exclusively by biological humans. Any careful attention to the rapid deployment of intelligent machines throughout warfare, policing, healthcare, autonomous transportation, and other domains bearing on human life and death should quickly disabuse one of this illusion. There is perhaps no greater, higher-stakes gambit than that of letting live and making die, the realm of biopolitics and, as Achille Mbembe has keenly theorized, necropolitics.[6]

Valuing good, experiencing guilt, and processing empathy are already on the agenda of what experts term robot morality. So it seems unlikely that the expert study of moral reasoning and ethics can responsibly proceed without including intelligent machines as subjects. This, in turn, will raise fundamental questions about the fuller religious life of machines. For instance, can an algorithm be spiritual? As humans continue to engineer machine intelligence of increasing complexity, will advanced versions of AI robots such as the Remotec that killed Micah Johnson, or the aerial drones regularly deployed to kill Muslims abroad, begin to seek confession or absolution as they come to appreciate the ethical contradictions of their actions? Might robots come to regret participating in violent systems of institutional racism through anti-Black policing? Or will they embrace the racist paradigms of the political systems they have been engineered to support? Can a machine be a racist? Will the intelligent machines of our near future bear the capacity to desire moral rectitude? To become good citizens of a greater moral community?

It seems clear enough that by progressively deploying intelligent machines to participate in preserving and taking human life, we humans will be forced to engineer them to become moral agents on a greater scale. This will not be a merely philosophical debate. Rather, it will be an urgent and ineluctable imperative, so long as the current trajectory of AI applications continues to accelerate. We have already entered the problem-space of such a paradigm shift, one that will require our utmost efforts to resolve.

Sylvester Johnson is Associate Professor of African-American Studies and Religious Studies at Northwestern University and the founding co-editor of the Journal of Africana Religions. He is the author of The Myth of Ham in Nineteenth-Century American Christianity: Race, Heathens, and the People of God (Palgrave Macmillan, 2004), and is currently working on a book project that examines the complicated relationship between Black religions and colonialism as a historic and ongoing American phenomenon both within and beyond US borders. He can be followed @syljohns.

[1] Rachel Abrams and Annalyn Kurtz, “Joshua Brown, Who Died in Self-Driving Accident, Tested Limits of His Tesla,” The New York Times, July 1, 2016, http://www.nytimes.com/2016/07/02/business/joshua-brown-technology-enthusiast-tested-the-limits-of-his-tesla.html, accessed September 14, 2016; Kevin Sullivan, Tom Jackman, and Brian Fung, “Dallas Police Used a Robot to Kill. What Does That Mean for the Future of Police Robots?” The Washington Post, July 21, 2016, https://www.washingtonpost.com/national/dallas-police-used-a-robot-to-kill-what-does-that-mean-for-the-future-of-police-robots/2016/07/20/32ee114e-4a84-11e6-bdb9-701687974517_story.html, accessed September 14, 2016.

[2] In 2011, the US military used a drone to kill Anwar al-Awlaki, a US citizen residing in Yemen at the time. Al-Awlaki became the first US citizen to be targeted and killed in an “extra-judicial” assassination, as he was classified as a terrorist and thus denied any access to due process.

[3] Adrienne Lafrance, “Self-Driving Cars Could Save 300,000 Lives Per Decade in America,” The Atlantic, September 29, 2015, http://www.theatlantic.com/technology/archive/2015/09/self-driving-cars-could-save-300000-lives-per-decade-in-america/407956/, accessed September 6, 2016.

[4] Robin Marantz Henig, “Death by Robot,” The New York Times, January 9, 2015, http://www.nytimes.com/2015/01/11/magazine/death-by-robot.html?_r=0, accessed September 8, 2016.

[5] Paul A. Hemmersbaugh, Chief Counsel for NHTSA, to Chris Urmson, Director of Self-Driving Project at Google, February 4, 2016, http://isearch.nhtsa.gov/files/Google%20–%20compiled%20response%20to%2012%20Nov%20%2015%20interp%20request%20–%204%20Feb%2016%20final.htm, accessed September 6, 2016.

[6] Achille Mbembe, “Necropolitics,” Public Culture 15, no. 1 (Winter 2003): 11–40.