Companies have pledged safe AI development, White House says
Companies include Google, Amazon, Meta, Microsoft, and OpenAI
The Biden administration said it has received voluntary commitments from the world's largest developers of artificial intelligence systems to pursue development in a safe and secure manner.
Google, Amazon, Inflection, Meta, Microsoft, Anthropic, and OpenAI have committed to developing such technologies in a "safe, secure, and transparent" manner, a White House official told reporters Thursday on the condition of anonymity, while discussing a meeting taking place Friday between the top executives of the companies and administration officials.
The Biden administration is preparing an executive order “that will ensure the federal government is doing everything in its power to advance safe, secure, and trustworthy AI and manage its risks to individuals and society,” the official said.
The White House also is coordinating with efforts in Congress aimed at establishing a “legal and regulatory regime,” the official said.
Senate Majority Leader Charles E. Schumer has arranged a series of briefings for senators on artificial intelligence systems and has said he intends to draw up legislation to regulate AI systems in the next few months. On Friday, Schumer said he welcomed the voluntary commitments but added that harnessing the potential of AI while tackling the challenges it poses "requires legislation to build and expand on the actions" by the White House.
Sen. Mark Warner, D-Va., chair of the Senate Intelligence Committee, also said "some degree of regulation" was needed to ensure that AI companies "prioritize security, combat bias, and responsibly roll out new technologies." In April, Warner wrote to the CEOs of major tech companies asking them to prioritize security in the design and development of their technologies.
The White House official said the companies have committed to making sure their products and technologies are safe before they are publicly launched, building systems with security as a priority, ensuring that consumers and users know which content is generated by AI, and publicly reporting on inappropriate uses of the technology.
Some companies, including OpenAI, already have said they use so-called red teams, which emulate nefarious actors, to test for weaknesses in the GPT-4 model. The voluntary commitments are intended to ensure that all companies adopt similar practices and go further, "pushing the envelope" on safety, security, and transparency measures, the official said.
One measure that would advance the goal of safety and trust is an effort by companies to develop a watermarking system, the official said. The companies have committed to developing a system that would label audio and video content created by artificial intelligence tools, distinguishing it from content generated by humans.
Brad Smith, vice chairman and president of Microsoft, wrote in a blog post on Friday that the company would go beyond those commitments by collaborating with the National Science Foundation to explore the creation of a national AI research resource that would facilitate independent research by academics on AI safety. The company also would back the creation of a national registry of high-risk AI systems, Smith wrote.
The companies also have committed to sharing best practices and information on managing risks with each other and with academic researchers, government agencies, and civil society groups, according to a White House fact sheet.
The fact sheet also said the companies have committed to allowing third-party researchers to find and report vulnerabilities in their AI systems.