Lawmakers are scrambling to catch up on emerging artificial intelligence technology, with House members this week proposing a national commission and the Senate majority leader prepping a regulatory framework.
Senate Majority Leader Charles E. Schumer, D-N.Y., is set to unveil a framework for regulating the development of artificial intelligence on Wednesday. Dubbed “Safe Innovation in the AI Age,” his plan will outline ways to “protect, expand, and harness AI’s potential,” his office said.
“I will talk about some steps we must take to stay ahead of AI’s rapid development,” Schumer said on the Senate floor Tuesday. “Many of AI’s impacts are truly exciting … It will reshape how we fight disease, tackle hunger, manage our lives, enrich our minds and ensure peace,” he said. “But we cannot ignore AI’s many dangers.”
Meanwhile, a bipartisan group of lawmakers led by Rep. Ted Lieu, D-Calif., on Tuesday introduced legislation to create a national commission to study the technology.
The proposal by Lieu and Reps. Ken Buck, R-Colo., and Anna G. Eshoo, D-Calif., would create a 20-member commission to examine how the U.S. can maintain AI leadership while setting up guardrails to prevent harms, determine which federal agencies oversee aspects of AI and study efforts to regulate the technology elsewhere in the world.
The commission would recommend regulatory and enforcement actions Congress ought to take.
While AI technology offers several benefits to society, “it can also cause significant harm if left unchecked and unregulated,” said Lieu, a computer scientist and one of the signatories to a recent warning about the dangers it poses.
Congress cannot afford to “stay on the sidelines,” Lieu said in a statement. “However, we must also be humble and acknowledge that there is much we as members of Congress don’t know about AI.” He added that the commission would help lawmakers draw up well-informed legislative proposals.
The flurry of activity aimed at catching up to fast-growing technological developments comes as developers and experts have warned that, left unregulated, AI systems could pose an extinction-level risk to humanity on par with nuclear weapons and pandemics.
In May a group of more than 350 researchers, executives and engineers working on AI systems said that “mitigating the risk of extinction from AI should be a global priority.”
The signatories included Geoffrey Hinton, one of Google’s top AI scientists who recently resigned from the company to openly warn about risks of the technology; Sam Altman, CEO of OpenAI, which developed ChatGPT; Dario Amodei, CEO of Anthropic, a company that focuses on AI safety; and Lieu.
The warning follows the rapid development of so-called large language models, which are fed huge volumes of text and images and are then capable of generating new text and images that mimic how humans would create them.
After not acting in a timely manner to address the dangers of social media platforms, lawmakers are catching up to AI developments, said Karen Kornbluh, managing director for digital innovation and democracy at the German Marshall Fund of the United States in Washington. Lawmakers were “very aware of the opportunities and everyone heralded that social media was going to bring democracy to the Middle East,” Kornbluh said. “And it was after disinformation became rampant, and foreign interference in U.S. elections, that suddenly [lawmakers] were playing catch up.”
Members of Congress getting educated on AI “on the front end makes a lot of sense,” Kornbluh said, referring to three closed-door briefings that Schumer arranged for all senators. The first, on “Where is AI Today,” was held June 13. Two more, on how the U.S. can maintain leadership over the next decade and national security uses and implications of AI, are yet to be scheduled.
The congressional action dovetails with steps by the Biden administration to get up to speed.
The National Telecommunications and Information Administration, which is part of the Department of Commerce, last week said it had received more than 1,400 responses to a request for comments issued in April.
The NTIA request was part of President Joe Biden’s efforts to help responsibly develop AI systems while establishing safe protocols. The NTIA said it sought “feedback on what policies can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems, raising confidence that they are trustworthy.”
The agency said such audits could create confidence in AI systems similar to the confidence investors gain from a business’s audited financial statements.
The agency plans to publish a report outlining steps the federal government could take to garner public trust in AI systems. Biden, along with Vice President Kamala Harris, is directly involved in efforts to draw up an AI regulatory framework.
The White House chief of staff and other top officials meet two to three times a week to discuss AI regulations, with the goal of “rapidly [developing] decisive actions we can take over the coming weeks,” the White House said in a statement.
Actions by lawmakers and the White House also follow developments in the European Union, where the 27-nation bloc passed draft legislation last week that would prohibit the use of AI technologies in areas that can pose an “unacceptable level of risk to people’s safety,” including those that can manipulate people’s behaviors. A final version of the law is not expected to pass until later in the year and would go into effect two years after that.
While the United States is playing catch-up, one could argue that the “EU is moving too fast … before we really have a clear sense of all the challenges,” Kornbluh said. Restrictions could stifle innovation and competition — especially as other nations, including China, aim to become global leaders in AI, she said.
“We don’t want to go so fast that we don’t know what the issues are, but we don’t want to go so slow that we have these huge costs,” Kornbluh said. “That’s a big test for the government.”