
Self-flying planes won’t save us from another Germanwings disaster

Don't let a robot take the wheel.


Greg Stevens


Posted on Mar 31, 2015   Updated on May 29, 2021, 4:46 am CDT

Whenever a human does something particularly stupid or atrocious, it doesn’t take long for people to speculate that computers could come to the rescue. It happened recently, after a Germanwings co-pilot flew an airplane full of passengers into a mountain because he was depressed.

Stories emerged about the need for background checks, medical history checks, and tighter security in the cockpit, but technophiles everywhere were drawn to a more fully technological solution, as Peter Garrison mused in the Los Angeles Times: “If cars can be made self-driving—as supposedly they can—then airplanes too could perhaps be freed of the last layer of unreliability: the crew.”

Garrison rejects the idea of a fully automated passenger airplane—at least for now—simply because it would make people uncomfortable. “Would you not rather see a human crew?” he asks, rhetorically, concluding: “Most people would.” Yet full automation is the direction that futurist bloggers and technophiles have been eagerly predicting since before this recent airline tragedy. Even in the mainstream press, when the first Jetstream aircraft made its way across British airspace with no pilot on board in 2013, the story produced eager headlines: “Pilotless Drones Might Be The Future Of Commercial Flying!”

So why shouldn’t we think of artificial intelligence as a superhero coming to our rescue? Machines can be programmed to make lightning-fast decisions without the flaws and unpredictability of human pilots. I mean, a computer program would never deliberately fly an airplane into a mountain because it was depressed, would it?

“I wouldn’t do that if I were you, Dave.”

On the other hand, the storyline of HAL 9000, the psychopathic computer in the 1968 movie 2001: A Space Odyssey, is so deeply rooted in American culture that you have almost certainly heard of HAL, even if you’ve never seen the movie.

A computer program would never deliberately fly an airplane into a mountain because it was depressed, would it?

In that movie, HAL was an artificially intelligent program designed to control the spacecraft and interact with the crew on a manned mission to Jupiter. HAL had some bugs in its programming; over time, its reasoning deteriorated to the point where it thought that killing off the crew was a really good idea. The notion that a “super intelligent machine” could use cold reasoning to decide to eliminate humans from the equation is a popular trope in science fiction, and it has now become a real concern raised by scientists and politicians when they talk about artificial intelligence.

When the Machine Intelligence Research Institute put out a survey of “Catastrophic Risks of AI,” one of the serious proposals listed was that autonomous intelligent machines might just decide to snuff us out on the grounds that we’re a waste of space. Recently, a group of artificial intelligence researchers wrote an open letter warning of things that we need to consider when developing A.I., and concerns about the value of human life were again at the top of the list.

“How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost?” cautions the letter. That question is more complicated than whether or not A.I. might decide we’re all too stupid to live, but the underlying dilemma is the same: How can we be sure that autonomous machines will value human life and won’t behave like murderous psychopaths?

When people speculate about the wonders that will come when airline travel is freed from the flaws of human pilots, I can’t help but think of HAL. No, I’m not taking seriously the idea that someday Marvin the Robot Pilot will decide to fly himself into a mountain because he just can’t bear life. But it is interesting, from a psychological standpoint, to see how quickly we humans can flip back and forth between viewing computers as the superhero come to rescue us and the supervillain out to destroy us, depending on our mood and circumstances.

Failing like humans do

Artificial intelligence isn’t yet advanced enough to be trusted to pilot our planes completely. Cockpits are highly automated already, and much of the basic mechanics of takeoff, flight, and landing are already guided almost entirely by computers. But pilots are there to make sure that everything happens when it should, and to take control when special circumstances arise, whether it’s the need to avoid an unusual weather pattern or a last-minute change in scheduling. We are reassured to know that a human being can take the reins at any moment if the situation calls for it.

But science progresses, and the day will come when an artificially intelligent autopilot can handle any kind of last-minute change or extreme circumstance. So it’s worth taking a moment to think seriously about what that technology will look like.

How can we be sure that autonomous machines will value human life and won’t behave like murderous psychopaths?

We know that computers have faster reaction times than humans. This is one of the big selling points that Volvo has used to promote the idea of self-driving cars: In an emergency, a self-driving car may be able to avoid an accident that a human driver would simply be too slow to prevent.

We would also like to think that computers will not be distracted by things like fear and fatigue, but here I’m not so sure. Our technology hasn’t advanced far enough for us to know exactly how such systems will work, and if we don’t know how they will work, we don’t know how they will fail.

But there is a stronger reason that I think tired or scared robots are a real possibility. Let’s start by looking at an example of artificial intelligence that we have basically already solved: retrieving memories based on partial examples. When people first began seriously attacking the question of artificial intelligence, in the 1950s and 1960s, one of the biggest problems to solve was how to deal with distorted or incomplete information.

This is something that the human mind does effortlessly. When we look across a room, we can recognize a friend even if her face is turned partly away from us, and even if it is partially hidden behind other people in a crowd. When we see an object, we don’t have to go hunting through our memory to try to figure out what it is: The memory is triggered almost instantly, just by the image itself.

Computer scientists call this “Content Addressable Memory” (CAM), meaning that a memory is triggered by a partial match between what we see and the memory itself. We see part of the face of our cat peeking around the corner, and that image triggers the thought of our cat. It is very different from traditional computer memory, Random Access Memory (RAM), where the computer has to know the exact location, or address, where information is stored in order to get to it.

In the early days of computing, programs were very brittle: If you wanted to call up something from memory, you needed to know exactly what to type in, and you needed to type it in correctly. A single mistake, and the computer system would simply fail.

One of the early successes of artificial intelligence was learning how to program computers for Content Addressable Memory. We found ways to write programs that could take partial or distorted inputs—say, for example, a partially hidden face, or an image of a face that appears on a noisy CCTV screen—and find the closest match based on similarity. These systems could learn new patterns and try to figure out the rules to apply to new inputs based on previous examples. They are amazingly flexible, and they now form the core of much of the technology that we take for granted, from basic computer network operation to voice and facial recognition.
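If you want to see the idea in code, here is a minimal sketch using a Hopfield-style network, one classic textbook model of content-addressable memory. (The specific model, sizes, and numbers here are purely illustrative assumptions on my part, not a description of any particular system.) You store a few patterns, then hand the program a corrupted cue, and it settles on the closest stored memory:

```python
# A toy content-addressable memory: a Hopfield-style network.
# Store a few patterns, then recover one from a corrupted, partial cue.
import numpy as np

def train(patterns):
    """Build the weight matrix with the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no unit connects to itself
    return W / len(patterns)

def recall(W, cue, steps=10):
    """Update the state repeatedly until it settles near a stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1       # break ties
    return state

rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(3, 100))    # three 100-bit "memories"
W = train(memories)

cue = memories[0].copy()
cue[:30] = -cue[:30]                # corrupt 30 of 100 bits: a "partial view"
print(np.array_equal(recall(W, cue), memories[0]))   # usually True
```

The point is the lookup: you never tell the program where the memory lives. You hand it a degraded version of the thing itself, and it falls into the closest match, which is roughly what happens when you glimpse your cat’s ear around the corner.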

If we don’t know how they will work, we don’t know how they will fail.

But the academic researchers who studied the properties of these CAMs discovered another interesting thing about them: When they fail, they fail in ways that are similar to the ways human memory fails. The more memories a system tries to store, the more likely it is to make a random mistake from time to time. When it tries to remember too many things that are similar to one another, it gets confused. It can even come up with “false memories” that somewhat resemble inputs it was exposed to, but that it was never actually trained to learn.

The very characteristics that make CAMs wonderful—their adaptability and capacity to learn new things—are the same characteristics that make them get confused and make mistakes. Content Addressable Memory is just one of many examples where the history of artificial intelligence has shown this pattern. As a general rule, the tricks and processes that make the human mind flexible and fast also make it fallible.
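The failure mode falls straight out of the same sketch. Keep the network the same size but ask it to hold too many memories, and recall starts returning wrong answers and spurious patterns it was never taught. (Again, the model and the numbers are illustrative assumptions, nothing more.)

```python
# Overloading the same toy memory: past a certain number of stored patterns
# (roughly 0.14 * n units for this kind of network), even an exact cue can
# drift to a wrong or "spurious" state that was never in the training set.
import numpy as np

def train(patterns):
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0)
    return W

def recall(W, cue, steps=20):
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

rng = np.random.default_rng(1)
n = 100
for count in (5, 15, 40):            # well under, near, and over capacity
    memories = rng.choice([-1, 1], size=(count, n)).astype(float)
    W = train(memories)
    ok = sum(np.array_equal(recall(W, m), m) for m in memories)
    print(f"stored {count:2d} patterns, recalled {ok} perfectly")
```

Nothing breaks in the dramatic, error-message sense. The system just quietly starts misremembering, which is exactly the kind of failure we recognize in ourselves.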

This is why I’m skeptical of the robot superhero, undistracted and indefatigable, coming to save us from our tired and distracted human pilots. After all, a robot pilot will need something that simulates human “attention” in order to prioritize its inputs. When the day comes that we build that machine, why should we expect that it won’t have an “attention” that can fail in exactly the same ways that human attention can fail?

All of this is far in the future, to be sure. I’ve spoken, informally, to several pilots in the last few days about the Germanwings news story, and the fallibility of humans. All of them believe that the way to improve the system, for now at least, is to find some “perfect mixture of human and machine” that doesn’t rely too heavily on one or the other.

But we can still imagine the day when we may move to total automation, and when we do we should be realistic about what that will look like. Our robot pilots will be neither superheroes nor supervillains. Given what we know about the way artificial intelligence works, our robot pilots will be different from human pilots but will be destined to fail, from time to time, in ways that seem all too human.

Photo via Dean Hochman/Flickr (CC BY 2.0)
