Stephen Hawking

He usually isn’t wrong…

None other than world-renowned theoretical physicist Stephen Hawking has chimed in on artificial intelligence’s potential to wreak havoc on humankind.

“The primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race,” he told the BBC. “Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

This is the second time this year that Hawking has earnestly addressed what laypeople might quickly dismiss as a story arc from a sci-fi film. In a Guardian op-ed from May, Hawking wrote that “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

Hawking’s most recent comments were precipitated by questions about a recent upgrade to his communication setup. A basic artificial intelligence system by SwiftKey (which already runs on many smartphones) helps Hawking write and speak more quickly by making educated guesses about the next word he intends to type. If Hawking and other AI doomsayers (Tesla CEO Elon Musk is becoming notorious for sharing this point of view) are correct, then predictive text is only the beginning.

Andrew Stroup is an entrepreneur and roboticist who cofounded MegaBots, Inc., which is quite literally a fighting league for giant human-controlled robots. “We think Hawking, along with others, have a valid point regarding the potential downside of sophisticated, autonomous computing systems that are able to adapt and make decisions based on perceived optimal choices,” he said. “We’d say that since Hawking is considered the ‘smartest person on the planet,’ his statement is credible, but it really comes down to how companies and people utilize the technology that will dictate its future.”

Let’s assume a worst-case scenario: Hawking is correct and it’s only a matter of time until machines wipe us all out. While this may manifest itself in a number of different ways, Stroup tells us that it will all start from the same place. 

“The most likely path we’ve identified for AI systems wiping out mankind is when they’re given the job to optimize a task or function and they decide that humans are not the appropriate choice, are inefficient, or significantly hinder the optimization process and are deemed unnecessary and categorized as an expendable good or waste.” Whether robots blatantly kick in your front door and round up your family, or subtle, malicious software quietly takes control of every single plane in the sky, will depend entirely on how we develop the technology we already have. It’s easy to see this as a choose-your-own-nightmare scenario.

Robotics academics generally don’t like to offer professional opinions on Terminator-type scenarios, but some already have. “I do not believe machines will surpass human intelligence,” robo-ethicist Ryan Calo told Business Insider. “Even if processing power continues to advance, we would need an achievement in software on par with the work of Mozart to reproduce consciousness.” A reassuring but surprising paper by Northwestern professor John McGinnis suggests that robots will get along just fine with humans, but only because we will model them to think and get along like us: “Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming.”

Unfortunately it seems that the jury is far too divided to definitively answer questions about the murderous tendencies of the machines of the future. They’ll either wipe us out as Hawking suggests, live peaceably among us per McGinnis’s paper, or remain the dumb machines we know them as today.

Steve Cousins, CEO of robotics company Savioke, is highly skeptical of any such robo-apocalypse scenarios and was quick to put them to bed for us. “Malevolence implies a conscious intention to cause harm,” he said. “We’re a long, long way from artificial intentions, much less consciousness.” 

Photo via lwpkommunikacio/Flickr (CC BY 2.0)
