
Downplaying AI’s existential risks is a fatal error, some say

Policy experts say Congress should act before AI systems become even more advanced

Utah Republican Sen. Mitt Romney says he and others would like to see restrictions on dangerous outcomes from AI. (Bill Clark/CQ Roll Call file photo)

A handful of lawmakers say they plan to press the issue of the threat to humans posed by generative artificial intelligence after a recent bipartisan Senate report largely sidestepped the matter. 

“There’s been no action taken yet, no regulatory action taken yet, at least here in the United States, that would restrict the types of actions that could lead to existential, or health, or other serious consequences,” Sen. Mitt Romney, R-Utah, said in an interview. “And that’s something we’d like to see happen.”

Romney joined Sens. Jack Reed, D-R.I., Jerry Moran, R-Kan., and Angus King, I-Maine, in April to propose a framework that would establish federal oversight of so-called frontier AI models to guard against biological, chemical, cyber and nuclear threats.

Frontier AI models include ChatGPT by OpenAI, Claude 3 by Anthropic PBC and Gemini Ultra by Google LLC, which are capable of generating human-like responses when prompted, based on training with vast quantities of data.

The lawmakers said in a document explaining their proposal that it calls for a federal agency or coordinating body that would enforce new safeguards, “which would apply to only the very largest and most advanced models.”

“Such safeguards would be reevaluated on a recurring basis to anticipate evolving threat landscapes and technology,” they said.

AI systems’ potential threats were highlighted by a group of scientists, tech industry executives and academics in a May 2023 open letter advising that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The signatories included top executives from OpenAI, Microsoft Corp., Google, Anthropic and others. 

Rep. Ted Lieu, D-Calif., who holds a computer science degree and was one of the signatories of that letter, said he remains concerned about the existential risks.

He said that he and Rep. Sara Jacobs, D-Calif., sought to address one aspect in the fiscal 2025 defense policy bill advanced by the House Armed Services Committee last month. The provision would require a human to be in the loop on any decision involving the launch of a nuclear weapon, to prevent autonomous AI systems from causing World War III.

Lieu, co-chair of the bipartisan House Task Force on Artificial Intelligence, said in an interview that he and others have tried to address further risks. But he and his colleagues are still trying to grasp the depths of these perils, such as AI spitting out instructions for building a more effective chemical or biological weapon.

“That is an issue we’re looking at now,” Lieu said. “How you want to prevent that is a whole different sort of issue that can get very complicated, so we’re still gathering data and trying to explore.” 

There are several proposals to control and supervise advanced AI systems, though none have been fast-tracked in Congress.

In August 2023, Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., proposed a licensing regime for advanced AI models that would be managed by a federal agency. Companies developing such AI models would be required to register with the agency, which would have authority to audit the models and issue licenses. 

Policymaking pace

Experts studying technology and policy say that Congress and federal agencies should act before tech companies turn out AI systems with even more advanced capabilities. 

“Policymakers should begin to put in place today a regulatory framework to prepare for this future,” when highly capable systems are widely available around the world, Paul Scharre, executive vice president at the Center for a New American Security, wrote in a recent report. “Building an anticipatory regulatory framework is essential because of the disconnect in speeds between AI progress and the policymaking process, the difficulty in predicting the capabilities of new AI systems for specific tasks, and the speed with which AI models proliferate today, absent regulation.

“Waiting to regulate frontier AI systems until concrete harms materialize will almost certainly result in regulation being too late,” said Scharre, a former Pentagon official who helped prepare the Defense Department’s policies on the use of autonomous weapons systems. 

Senate Majority Leader Charles E. Schumer, D-N.Y., who led a monthslong effort of briefings with dozens of tech industry executives, civil society groups and experts, last month issued a bipartisan policy road map on AI legislation.

The road map and associated material mentioned existential risks just once — it noted some participants in one briefing were “quite concerned about the possibilities for AI systems to cause severe harm,” while others were more optimistic. 

The report directed various congressional committees to address legislation on AI through their normal legislative processes. 

One reason the risks may be downplayed is that some in the tech industry consider fears of an existential threat from AI overblown.

IBM, for example, has urged lawmakers to stay away from licensing and federal oversight for advanced AI systems.

Chris Padilla, IBM’s vice president for government and regulatory affairs, last week recounted for reporters the stance of Chief Privacy and Trust Officer Christina Montgomery, who told participants at a Schumer briefing that she didn’t think AI is an existential risk to humanity and that the U.S. doesn’t need a government licensing regime.

IBM has advocated an open-source approach, which would allow experts and developers around the world to see how AI models are designed and built and what data is ingested by them, Padilla said. 

A large community of AI developers peering into the algorithms that power AI systems can potentially identify dangers and threats better than a single company scrutinizing its own product, Padilla said. That approach differs widely, however, from the one taken by OpenAI and Microsoft, which uses OpenAI's models; both companies advocate proprietary AI systems that are not subject to public scrutiny.

Padilla and Daniela Combe, vice president for emerging technologies at IBM, compared the company's open-source approach to the widespread use of the Linux operating system, which runs on IBM's mainframe computers. Microsoft declined to comment on the idea.

Instead of licensing and regulatory oversight of AI models, the government should hold developers and users of AI systems legally liable for harms they cause, Padilla said. “The main way that our CEO suggested this happen is through legal liability, basically, through the courts,” he said. 

Padilla spoke to reporters before as many as 100 IBM executives traveled last week to Washington to meet with lawmakers on AI legislation. IBM and its subsidiaries spent $5.6 million lobbying Congress last year on a variety of issues that included AI, according to data from OpenSecrets.org. 

The issue isn’t likely to be resolved soon, as Padilla and others say legislation this year is doubtful.

At least one key lawmaker agreed. Asked whether his AI proposal is likely to turn into legislation and pass this year, Romney said it may not. 

“It’s unlikely this year because we move as slow as molasses,” he said. “Particularly in an election year.”
