
Tech titans call for 6-month AI pause to figure out what kind of monster they unleashed

They want it to be deemed safe beyond a 'reasonable doubt.'

 

Mikael Thalen


Posted on Mar 29, 2023. Updated on Mar 31, 2023, 4:18 pm CDT

An open letter signed by some of the biggest names in the tech world is calling for a pause on the development of artificial intelligence (AI) due to fears that the technology could pose “profound risks to society and humanity.”

Published by the Future of Life Institute, an organization whose stated goal is to “steer transformative technologies away from extreme, large-scale risks,” the letter has attracted more than 1,100 signatures, including those of Twitter CEO Elon Musk and Apple co-founder Steve Wozniak.

Specifically, the letter calls on all AI laboratories across the globe “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the large multimodal model recently released by OpenAI.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads. “This confidence must be well justified and increase with the magnitude of a system’s potential effect.”

If such a pause cannot be achieved, the letter continues, “governments should step in and institute a moratorium.”

Other prominent signatories include former presidential candidate Andrew Yang and Skype co-founder Jaan Tallinn, as well as numerous experts in the field of AI research. Because the letter is publicly open to signatures, however, readers are urged to treat the list of names with caution.

As cited in the letter, OpenAI recently acknowledged that it may be important at some point to obtain independent review before training future systems. The company also suggested that a limit may need to be agreed upon regarding “the rate of growth of compute used for creating new models.”

The Future of Life Institute argues that the time for such measures is now.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter continues. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

Musk, an outside advisor to the Future of Life Institute, co-founded OpenAI before resigning from its board over disagreements about the company’s direction.

“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” Musk tweeted last month. “Not what I intended at all.”

The letter is unlikely to have any major effect on the development of AI, at least for the time being, as major tech companies clamor to compete with OpenAI and ChatGPT.
