
The troublesome debate over the future of killer robots

There’s a movement to pull the plug on the machines before they advance any further.

 

Dylan Love


Posted on Sep 21, 2015   Updated on May 27, 2021, 10:55 pm CDT

How much jail time would Arnold Schwarzenegger’s robotic character serve for the 37 deaths in Terminator?

The question of whether robots should be allowed to kill people—and what the potential consequences might be—isn’t reserved for far-off post-apocalyptic regimes or 1980s sci-fi. It’s actively being discussed in the United Nations, and there’s a movement to pull the plug on the machines before they advance any further.

Like it or not, lethal autonomous weapons—robotic systems that identify and engage enemy targets upon activation without human control or intervention—are already here. In 2010, South Korea deployed a vicious self-targeting machine gun sentry turret called a Super aEgis 2 at its North Korean border. It’s capable of automatically locking onto a human-sized target in total darkness from 1.3 miles away, and while it has a manual fire mode that requires human permission to do any damage, it also has an autonomous mode for fully independent firing. Likewise, Russia dispatched a roaming apex predator—often described as a cross between Robocop and a Jeep—last year to patrol the country’s missile bases. It can sweep its territory for 10 hours at a time, travel up to 28 miles per hour, and “sleep” in standby mode for up to a week. When it identifies a target, it can fire its own 12.7-millimeter machine gun at any unwelcome visitor.

Like it or not, lethal autonomous weapons are already here.

Precise, unthinking, and unfeeling, such weapons boast a number of distinct advantages. They’re convenient yet complex instruments of death, capable of drastically reducing human error and casualties among the forces that deploy them. But they also raise heady political and philosophical questions that policymakers and the general public are only beginning to grapple with.

“The way the law is designed, the way we think of morality, we depend on human agents making choices and taking responsibility for them,” said Peter Asaro. He’s part of the Campaign to Stop Killer Robots, a nongovernmental effort to preemptively ban autonomous weapons; it comprises nine similarly aligned nongovernmental organizations, including the International Committee for Robot Arms Control, Human Rights Watch, and Mines Action Canada, a social justice organization for victims of automated weapons. “There’s definitely a line that’s crossed when you automate the targeting and firing decision process,” Asaro continued.

Anyone who’s lost valuable data due to computer error knows that a machine is not always acting in its owner’s best interests. Asaro says there’s simply no way for a computer to reliably judge when circumstances warrant taking a life. Consider driverless cars, which face similar life-or-death dilemmas: Is it better for a driverless car to swerve to avoid an accident even if doing so endangers its passenger, or should the machine do whatever is best for those in the vehicle? How should a computer weigh one loss against another?

That hypothetical has been played out countless times since the introduction of “the trolley problem” in 1967. It’s an ethical thought experiment that asks whether one should let a runaway trolley continue unimpeded and kill five people on a track, or intervene and divert it onto a different track where it would kill only one person. It’s fraught territory, one without a definitive answer.

“In military terms, you have an objective, perhaps to take a city or build a bridge, and that’s subsumed by the overall strategy,” Asaro said. “The act of dropping a bomb is framed in the context of overall strategy, and you have to be able to evaluate what actions serve it best. ‘OK, there’s a school bus full of kids on the bridge, do I still take it out? Or can I just wait for kids to pass and then blow it up?’ To think that [a programmer] in an engineering suite has figured out every possible contingency is absurd.”

For Dr. Ron Arkin of the Georgia Institute of Technology, the question isn’t “Should robots be allowed to kill people?” but rather “Should robots be tasked with killing people?” Human involvement is crucial.

“Robots don’t have free will; they don’t make decisions as to whether or not to kill something,” Dr. Arkin, well known in this debate as a proponent of using such robots, told the Daily Dot over the phone. “A robot is just a tool; there’s no moral agency to it.”

Arkin suggests we ought to consider lethal autonomous weapons as a wholly new class of weapon—one that is more precise and designed to act in accordance with the rules of warfare. “Why should this surprise us?” he writes in a paper titled “Ethical Robots in Warfare.” “Do we believe that human warfighters exhibit the best of humanity in battlefield situations?”

“A robot is just a tool; there’s no moral agency to it.”

The paper outlines Arkin’s case at length. Such robots are stronger than humans, he argues, can go places humans can’t, and are “smarter” than people in certain circumstances; consider IBM’s Deep Blue and Watson, which beat human champions at chess and Jeopardy, respectively. Robots also sidestep the liabilities of human judgment under stress and in conflict. The way he sees it, armed robots are comparable to drones: strategic devices that are controlled (or have defined parameters set) by human actors.

Arkin blames pop culture for making a political issue out of a plot element. “Lethal autonomous weapons are getting conflated with the vision of Terminator-like robots, but that’s not the reality of the situation and it’s not anytime in the near future,” he said. “These are machines acting at the behest of human beings.”

Even so, there’s a growing concern that the use of armed robots (again, not unlike drones) would dull us to the casualties of war, removing the attackers, both emotionally and physically, from the action.

“I think [lethal autonomous weapons] make war too easy, at least for those conducting the remote war, since they aren’t confronted with its horror and hence may be more likely to engage in it,” said John Messerly, affiliate scholar for the Institute for Ethics and Emerging Technologies.

“To elevate machines capable of performing our ‘killing’ for us dehumanizes us far more than it makes the machine human.”

“There don’t seem to be any moral implications for the machine, but there are of course moral implications for the designers and users of the machines,” added Clark Summers, a retired U.S. Army colonel and current doctoral student in the humanities at Salve Regina University. “To elevate machines capable of performing our ‘killing’ for us dehumanizes us far more than it makes the machine human.”

Debate on this topic among political leaders and policymakers is far from settled, as illustrated by the United Nations’ second annual Meeting of Experts on lethal autonomous weapons systems (LAWS) in April. Austria’s ambassador to the U.N., Thomas Hajnoczi, raised the issue of how a robot, using only software and electronic sensors, might reliably distinguish friend from foe. This excerpt from his statement encapsulates the current state of the debate on killer robots:

“Today, clearly only humans are capable to distinguish reliably between civilians and combatants in a real combat situation, thereby ensuring observance of [International Humanitarian Law]. Whether technology will be able to create at some future point machines with an equivalent capability seems to be a matter of speculation at this stage. In any case, the blurring of the fundamental distinction between the military and civilian spheres, between front and rear, as an ever more prominent feature of modern warfare, does not make this an easy task.”

Here’s hoping the debate can keep pace with the development of these lethal autonomous weapons moving forward.

Illustration by Tiffany Pai 
