Experts say AI poses ‘extinction’ level of risk, as Congress seeks legislative response

‘Mitigating the risk of extinction from AI should be a global priority’

National Telecommunications and Information Administration head Alan Davidson has said "accountability mechanisms" can help ensure that AI is trustworthy. (CQ Roll Call file photo)

Lawmakers and regulators gearing up to address risks from artificial intelligence technology got another boost this week from experts warning of potential “extinction” and calling on governments to step up regulations.

Senate Majority Leader Charles E. Schumer, D-N.Y., has said he and his staff have met with more than 100 CEOs, scientists and other experts to figure out how to draw up legislation. 

The National Telecommunications and Information Administration, or NTIA, is gathering comments from industry groups and tech experts on how to design audits that can examine AI systems and ensure they’re safe for public use. And former Federal Trade Commission officials are urging the agency to use its authority over antitrust and consumer protection to regulate the sector.

More than 350 researchers, executives and engineers working on AI systems added to the urgency Tuesday in a statement released by the Center for AI Safety, a nonprofit group. 

Among those who signed are Geoffrey Hinton, a top Google AI scientist until he recently resigned to warn about risks of the technology; Sam Altman, CEO of OpenAI, the company that has developed ChatGPT; and Dario Amodei, the CEO of Anthropic, a company that focuses on AI safety. 

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the group said. 

The experts listed eight broad categories of risk posed by AI systems that digest vast quantities of information. Those systems can create text, images and video that are difficult to distinguish from human-created content. 

The Center for AI Safety says AI systems can help criminals and malicious actors create chemical weapons and spread misinformation, perpetuate inequalities by helping small groups of people gain a lot of power, and deceive human overseers and seek power for themselves. 

Schumer appears to have heard the message. 

“We can’t move so fast that we do flawed legislation, but there’s no time for waste or delay or sitting back,” he said on the Senate floor on May 18. “We’ve got to move fast.”

Schumer’s office didn’t respond to a question about when legislation would be unveiled. 

Altman himself appeared before the Senate Judiciary Committee only two days before Schumer’s floor remarks, telling lawmakers that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” 

But Altman, who called on U.S. lawmakers to regulate the technology, balked a week later at the European Union’s effort to do so. According to the Financial Times, he told reporters in London that he had “many concerns” about the EU’s proposed AI Act that is still being debated. 

The EU proposal would sort AI systems into three buckets. Systems that pose unacceptable risk by violating “fundamental rights,” such as the government-run social scoring applications found in China and elsewhere or predictive policing tools, would be banned. High-risk applications, such as those that scan resumes and rank job applicants, would have to meet legal and transparency requirements. A third category of systems would be left unregulated.

The Financial Times reported that Altman said OpenAI’s ChatGPT could end up being classified as high-risk.

“The details really matter,” Altman said. “We will try to comply, but if we can’t comply we will cease operating,” he added, referring to potentially removing ChatGPT from the European Union, according to the newspaper.

The EU rules have yet to be finalized but are expected to go into effect in 2025. 

Regulators everywhere are trying to figure out how to write rules that wouldn’t be too stifling while also ensuring that people’s privacy and safety aren’t violated, said Ken Kumayama, a partner who focuses on technology issues at the law firm of Skadden, Arps in California. The technology is rapidly changing, and people’s understanding of risks and benefits is also evolving, he said. 

“It’s more art than science,” Kumayama said in an interview. “I think it really is anyone’s guess what’s ultimately going to happen.”

The U.S. is “behind in our thinking and our drafting” of rules on artificial intelligence and other aspects, Kumayama said. “We are playing catch-up.” 

Lack of clear vision

While lawmakers, industry executives and experts in the U.S. agree that “we need some regulation, some guardrails, some guidelines, some rules of the road … lawmakers in the U.S. don’t seem to have any clear vision regarding what that should look like,” Kumayama said. 

One proposal from the 117th Congress, known as the Algorithmic Accountability Act and sponsored by Rep. Yvette D. Clarke, D-N.Y., was backed by 39 other Democrats but failed to advance in the House. 

That measure, which would have empowered the FTC to assess the impact of AI systems, was a good start, but it applied only to large companies and left out smaller companies that often are the ones that drive innovation in artificial intelligence, Kumayama said. 

President Joe Biden in April called on tech companies to ensure that their AI systems are “safe before making them public.” His comments led the NTIA to ask industry groups and tech experts to weigh in. 

“Much as financial audits create trust in the accuracy of financial statements, accountability mechanisms for AI can help assure that an AI system is trustworthy,” Alan Davidson, the assistant secretary of communications and information at the NTIA, said at the time. 

The agency has since received more than 500 comments but plans to make them public only after the comment period ends June 12, Zahir Rasheed, an agency spokesman, said in an email. 

Amba Kak and Sarah Myers West, two former FTC officials, said in a report published last month that the FTC has existing authority relating to antitrust and consumer harm to regulate the fast-growing artificial intelligence tech sector even without new legislation.

The report from the nonprofit AI Now Institute, titled “Confronting Tech Power,” said the FTC could “enforce existing law on the books to create public accountability in the rollout of generative AI systems and prevent harm to consumers and competition.” 

ChatGPT is one example of generative AI, a term that refers to systems that can generate text, images or video in response to prompts.

Since the FTC is focused on stopping deceptive or unfair acts or practices, some experts argue that regardless of the underlying technology, if an outcome is illegal or harmful to consumers, “you need to stop,” Kumayama said. “I don’t disagree with them. The law is technology-agnostic.”
