Stephen Hawking did not just predict the end of civilization

The professor's fear of a robot takeover has been greatly exaggerated.


Miles Klee


Posted on May 4, 2014   Updated on May 31, 2021, 9:16 am CDT

We’ve been cracking jokes and making movies about artificially intelligent machine overlords for decades at this point, and yet all it takes is one measured column on the subject co-authored by theoretical physicist Stephen Hawking to send the global populace into a tailspin of abject fear.

With all due respect to io9 and others who likewise reported on the op-ed, “Stephen Hawking Says A.I. Could Be Our ‘Worst Mistake In History’” is a virally eye-catching headline—but the word “could” may as well be italicized. The actual essay in the Independent, with a byline that includes the prestigious names Stuart Russell, Max Tegmark, and Frank Wilczek in addition to Hawking’s, is about how we can’t anticipate the future of AI tech, let alone some robocalypse:

“The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list,” the scientists write with the cool, rational optimism common to society’s finest innovators, adding that the creation of true AI “would be the biggest event in human history.”

Then comes the perfectly reasonable warning:

Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.

In other words, artificial intelligence will have unpredictable results, and probably—as with most epoch-defining new technologies—amount to a double-edged sword. But it’s quite a leap to get from “hey, maybe we shouldn’t let computers be in charge of the nukes” to “we’re all gonna die, and soon,” don’t you think? Especially when the authors here aren’t advocating for an absolute end to AI research, but rather an increased focus on its potential long-term outcomes.

What am I saying, though? This is the Internet, where, for reasons it might pain us to explore, everyone would rather believe the sky is falling. Trending topics, ahoy!

Photo by Chris Isherwood/Flickr (CC BY-SA 2.0)

First Published: May 4, 2014, 1:43 pm CDT