Stephen Hawking thinks artificial intelligence will destroy us
He usually isn’t wrong…
None other than world-renowned theoretical physicist Stephen Hawking has chimed in on artificial intelligence’s potential to spell havoc for humankind.
“The primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race,” he told the BBC. “Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
This is the second time this year that Hawking has earnestly addressed what laypeople might quickly dismiss as a story arc from a sci-fi film. In a Guardian op-ed from May, Hawking wrote that “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
Hawking’s most recent comments were prompted by questions about a recent upgrade to his communication system. A basic artificial intelligence system from SwiftKey (which already runs on many smartphones) helps Hawking write and speak more quickly by making educated guesses about the next word he intends to type. If Hawking and other AI doomsayers (Tesla CEO Elon Musk is becoming notorious for sharing this point of view) are correct, then predictive text is only the beginning.
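The core idea behind predictive text is simple: count which words tend to follow which, then suggest the most frequent follower. The toy bigram model below is a minimal sketch of that idea, not SwiftKey’s actual algorithm (which is far more sophisticated); the corpus and function names are illustrative.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count how often each word follows each other word (a bigram model)."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict(model, word):
    """Suggest the most frequent word seen after `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy training text; a real system would use a huge corpus plus the
# user's own writing history.
model = train("the end of the human race the end of the world")
print(predict(model, "the"))  # "end" follows "the" most often here
```

A production predictor layers on longer contexts, smoothing for unseen word pairs, and personalization, but the suggest-the-likeliest-continuation principle is the same.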
Andrew Stroup is an entrepreneur and roboticist who cofounded MegaBots, Inc., which runs what is quite literally a fighting league for giant, human-piloted robots. “We think Hawking, along with others, have a valid point regarding the potential downside of sophisticated, autonomous computing systems that are able to adapt and make decisions based on perceived optimal choices,” he said. “We’d say that since Hawking is considered the ‘smartest person on the planet,’ his statement is credible, but it really comes down to how companies and people utilize the technology that will dictate its future.”
Let’s assume a worst-case scenario: Hawking is correct and it’s only a matter of time until machines wipe us all out. While this may manifest itself in a number of different ways, Stroup tells us that it will all start from the same place.
“The most likely path we’ve identified for AI systems wiping out mankind is when they’re given the job to optimize a task or function and they decide that humans are not the appropriate choice, are inefficient, or significantly hinder the optimization process and are deemed unnecessary and categorized as an expendable good or waste.” Whether robots blatantly kick in your front door and round up your family, or subtle, malicious software quietly takes control of every plane in the sky, will depend entirely on how we develop the technology we already have. It’s easy to see this as a choose-your-own-nightmare scenario.
Robotics academics generally don’t like to offer professional opinions on Terminator-type scenarios, but some already have. “I do not believe machines will surpass human intelligence,” robo-ethicist Ryan Calo told Business Insider. “Even if processing power continues to advance, we would need an achievement in software on par with the work of Mozart to reproduce consciousness.” A reassuring but surprising paper by Northwestern professor John McGinnis suggests that robots will get along just fine with humans, but only because we will model them to think and get along like us: “Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming.”
Unfortunately, the jury seems far too divided to definitively answer questions about the murderous tendencies of the machines of the future. They’ll either wipe us out as Hawking suggests, live peaceably among us per McGinnis’s paper, or remain the dumb machines we know them as today.
Steve Cousins, CEO of robotics company Savioke, is highly skeptical of any such robo-apocalypse scenarios and was quick to put them to bed for us. “Malevolence implies a conscious intention to cause harm,” he said. “We’re a long, long way from artificial intentions, much less consciousness.”
Photo via lwpkommunikacio/Flickr (CC BY 2.0)