
How ‘Terminator’ defines our fear of robots after 30 years

Thirty years later, we’re still afraid of Skynet. 


Gillian Branstetter


Thirty years ago this week, the first Terminator film gave us a handful of catchphrases (“I’ll be back”), a dozen action film clichés, and an unlikely kickstart to the indomitable careers of Arnold Schwarzenegger and director James Cameron.


But aside from the film’s explosive visuals and cornball acting, The Terminator also endowed us with the popular language necessary to discuss the real and approaching prospect of artificial intelligence. As we entrust our devices and software with more and more aspects of our daily lives, science fiction serves the perhaps unlikely dual roles of bellwether and doomsayer.

The war between man and machine that forms the basis of the Terminator series is neither imminent nor certain. But popular fiction like The Terminator arms us with the ability to grasp the prospect of superintelligent AI and to consider how citizens, scientists, and policymakers can best approach the problem.

Before The Matrix, The Terminator was perhaps the most fully realized vision in the popular psyche of machines turning on their creators. Most world-domination blockbusters up to 1984 looked either inward (zombies if you’re George Romero, nukes if you’re Stanley Kubrick) or outward (alien invasion a la Invasion of the Body Snatchers).


In the storyline of the Terminator universe, a future mankind develops Skynet, a superintelligent AI charged with overseeing the nuclear missile arsenal of the United States. Concluding that mankind is a threat to its existence, the program launches the entire arsenal in an attempt to destroy life on Earth.

Yet a human resistance forms, headed by John Connor. In an attempt to defeat the resistance, Skynet sends a T-800 robotic assassin (played with a perfectly robotic lack of personality by Schwarzenegger) back in time to kill Connor’s mother before he is even born.

This vision of man’s creations rising against him is not at all original to Terminator. The first science fiction program ever broadcast on television was a 1938 BBC production of the Czech play Rossum’s Universal Robots (which, incidentally, coined the term “robot,” from the Czech for forced labor, robota); it tells the story of humanity being overrun by a manufactured race of organic automatons. Even in Mary Shelley’s Frankenstein (published in 1818), the monster’s creator dwells on his fear that, were he to build a mate for his fiendish creation, the pair might breed and their offspring come to threaten human existence.

Promethean mistakes aside, this fear exists outside of sci-fi as well. Earlier this week, SpaceX and Tesla CEO Elon Musk voiced honest, unironic fears that superintelligent AI could be “the biggest existential threat” to humans and called its development “summoning the demon.”


In fact, Terminator director James Cameron himself swears “Skynet has already won,” adding anyone with a trackable smartphone is “toast.” Cameron is backed up by Light Reading CEO Stephen Saunders, who borrowed the Skynet analogy to forewarn of the coming “drone comm” wars between Google and Facebook.

And it’s more than chatter about surveillance. The Department of Defense’s recent research into building morality and ethics into AI envisions a future in which robots could be making decisions about whom to kill and whom to spare. While that might seem farfetched, the issue might already be here: earlier this month, the U.S. Navy unveiled unmanned vessels that can swarm enemy ships—all without a human operator.

Perhaps more troubling is BigDog, and not just visually. The developers of the load-bearing quadruped hope it will one day carry injured soldiers from the battlefield to a medic. But what happens when it encounters two casualties, one with a bullet in the chest and another with shrapnel in the skull? Human battlefield medics have to make quick, morally complex decisions. We could soon be trusting robots with that responsibility, so it’s somewhat plausible that they, like Skynet, might one day watch over our most powerful weapons.

To some extent, they already do. Back in 2007, Wired ran a report about “the Doomsday device” the Soviet Union developed during the Cold War: an automated system that would launch ICBMs in response to a nuclear attack on the U.S.S.R. The early-warning software feeding such systems was famously overridden in 1983 by Stanislav Petrov, an officer of the Soviet Air Defense Forces who correctly judged its warnings to be a false alarm, preventing faulty sensors from starting a nuclear war. By most accounts, the Doomsday device is still running in Russia to this day.


However, as we in the States have learned in the past year, not every soldier is a Petrov. Earlier this year, 20 percent of the officers in charge of the Air Force’s nuclear arsenal were implicated in a widespread cheating scandal. In fact, even their superior officer was removed amid reports of heavy drinking, drug use, and cavorting with a “suspect woman.”

With such ineptitude and corruption surrounding the tools to end the world—not to mention that the only man who can launch those weapons currently enjoys the approval of roughly two-fifths of Americans—one can easily see why the public might welcome the rational stillness of software over the flawed unpredictability of humans.

Fortunately, however, several serious obstacles stand between our current technology and anything resembling Skynet. Like most fiction about AI, the film never quite explains how Skynet went from servant of man to self-aware being, simply because no one knows how that could happen. While we can theoretically train computers to behave in any way we’d like them to, we cannot train them to be aware of anything (the subjective experience philosophers of mind call “qualia”), meaning we certainly can’t program judgment.

We can’t yet code morality into a drone because we don’t really understand what morality is in the language of our own brains. Until we understand how we make decisions—which is hardly a perfected art as it is—we’ll never be able to build a machine to do it for us. Or, as Jaron Lanier put it more simply: “We don’t yet understand how brains work, so we can’t build one.”


But that doesn’t mean Skynet or T-800s are never coming, and they might arrive earlier than you’d assume. Ray Kurzweil, director of engineering at Google and one of the preeminent thinkers on AI, predicts the 2020s will see the rise of artificial intelligence, even culminating in a “robot rights” movement by 2029.

According to the Terminator wiki, the fictional Skynet actually lines up with Kurzweil’s predictions, achieving sentience by 2021.

Skynet serves as an easy and obvious metaphor for our fear of technology as a whole, highlighting both the cold-hearted order a superintelligent AI might try to impose on humans and our own fragile ability to defend ourselves against our own creation. But so does our current robotic reality. We hand over more and more of our lives to software to fill the gaps we’re either too busy or unable to fill ourselves, whether that’s responding to our email or shooting straight.

Works like Terminator not only give voice to these fears but highlight exactly what they could mean for humanity’s future. You don’t need to stock your bomb shelter quite yet, but this is a conundrum that researchers and even the military take seriously. And it seems increasingly inevitable that we will have to answer the questions the fictional creators of Skynet never even bothered to ask.


Photo via Max Keisler/Flickr (CC BY 2.0)

 