Lawmakers introduce legislation to prevent AI from launching a nuke

‘Nuclear weapons are horrifically damaging and devastating… we can’t allow robots to hold the power to command them’

Mikael Thalen

Lawmakers introduced bipartisan and bicameral legislation on Wednesday aimed at preventing artificial intelligence (AI) from launching a nuclear weapon. But is such an effort the result of unrealistic paranoia, a necessary safeguard, or an exercise in futility?

Known as the Block Nuclear Launch by Autonomous Artificial Intelligence Act, the bill, sponsored by Sen. Ed Markey (D-Mass.), Rep. Ted Lieu (D-Calif.), Rep. Don Beyer (D-Va.), and Rep. Ken Buck (R-Colo.), is intended to keep AI-based decision making out of the nuclear command and control process.

The bill would reinforce existing guidelines outlined in the Department of Defense’s 2022 Nuclear Posture Review, which states that a human must be kept “in the loop” for “all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.”

In remarks on Twitter, Markey highlighted the importance of regulating AI while calling on his fellow lawmakers to ensure that the legislation is adopted.

“Nuclear weapons are horrifically damaging and devastating. In an increasingly digital age, we can’t allow robots to hold the power to command them,” he tweeted. “We must adopt the Autonomous Artificial Intelligence Act and ensure that AI will never make decisions to use deadly force.”

In similar remarks, Lieu praised the legislation as “forward-thinking” and part of the foresight needed to protect “future generations from potentially devastating consequences.”

“AI can never be a substitute for human judgment when it comes to launching nuclear weapons,” Lieu said.

Experts in nuclear weapons policy see both sides of the debate. While the devastation nuclear weapons can cause warrants precautionary measures, they say, too much attention has been given to hypothetical scenarios of a clash between humans and a self-aware AI.

Speaking with the Daily Dot, Ian J. Stewart, Executive Director at the James Martin Center for Nonproliferation Studies, described the risk of an AI-launched nuclear weapon as “generally overplayed.” Still, Stewart added, taking a considered approach could convince adversaries to adopt similar policies.

AI experts responded in much the same way, noting the extensive focus on hypothetical future scenarios rather than the more immediate problems AI presents.

Author and generative AI expert Nina Schick called the legislation an important step in providing “oversight over existential threats” but also urged lawmakers not to take their “eye off the ball.”

“So less focus on the robots losing control—more focus on how these—especially generative AI systems—are being developed and rolled out in a way where they are impacting literally hundreds of millions of people … within weeks,” Schick said.

ChatGPT, when asked by the Daily Dot, said: “From a rational and logical perspective, the Block Nuclear Launch by Autonomous Artificial Intelligence Act is a necessary step towards ensuring the responsible use of AI in the military domain. Nuclear weapons are incredibly destructive, and their use could lead to devastating consequences. Allowing AI systems to launch such weapons could be potentially dangerous, as it could lead to unintended or malicious actions.”

With the U.S., Russia, and China all either increasing or upgrading their nuclear arsenals, the legislation, if enacted, could prove to be an important milestone given the current AI arms race.

Then again, if an omnipresent, out-of-control AI does take over, the odds that it will respect the will of Congress seem pretty low.
