Peters pitches AI legislation as model for private sector

Proposal would direct agencies to assess risks of artificial intelligence systems they procure

“Setting the stage for how the federal government is going to buy and use AI will have a large effect on the AI industry, as well as innovation,” says Sen. Gary Peters. (Bill Clark/CQ Roll Call file photo)

A bipartisan proposal in the Senate would create standards for government procurement of artificial intelligence in part by classifying systems according to risk.

The bill sponsored by Sen. Gary Peters, D-Mich., and co-sponsored by Sen. Thom Tillis, R-N.C., is among the first to emerge after several summits on Capitol Hill on how to handle AI. It would require each federal agency to create the position of a chief AI officer and grade systems on a scale from unacceptable to low risk. 

Unacceptable risks, including those that could result in social scoring systems or mapping facial features and emotions, would be prohibited. High-risk systems would require developers to conduct pre-deployment testing and ongoing monitoring. 

Having U.S. agencies categorize AI systems and set standards for their purchase and use could have a far-reaching impact, Peters, chairman of the Senate Homeland Security and Governmental Affairs Committee, said in an interview.

“The federal government is the leading purchaser of goods and services in the entire world,” Peters said. “We are also perhaps the largest purchaser of AI in the United States, so setting the stage for how the federal government is going to buy and use AI will have a large effect on the AI industry, as well as innovation.”

The standards developed by the federal agencies will become “a model for other private sector companies to follow” because AI companies that do business with the U.S. government will be positioned to compete in the private sector, he argued. 

Peters said he plans to mark up the bill later this summer and is in discussions with House counterparts on companion legislation. 

The bill is the first proposal to emerge since a bipartisan group of lawmakers, led by Senate Majority Leader Charles E. Schumer, released a road map in May that directed congressional committees to address AI through their normal legislative processes. 

Schumer has backed the Peters-Tillis proposal, calling it vitally important in remarks on the Senate floor. 

“This legislation will establish some of the first guidelines for the responsible procurement of AI by the federal government,” Schumer said on June 13. “The guidelines in this bill will be essential for the federal government to deploy AI so it protects people’s civil rights, prevents bias and ensures people’s privacy.”

Such protections “are critical not just for the application of AI in the federal government, they are important for the application of AI in every industry,” Schumer said. 

The measure has drawn support from some privacy advocates and groups promoting consumer rights. 

It would codify “transparency, risk evaluation and other safeguards” that would help federal agencies make smart decisions on buying and using AI systems, said Alexandra Reeve Givens, president and CEO of the Center for Democracy & Technology. 

“There’s an open question in the market right now about what responsible AI systems look like,” Givens said in an interview. “And so if the government leads by example and creates mechanisms where those types of standards and criteria have to be addressed, that can send an important signal to the private sector as well and shape the development and creation of those types of standards.”

A report released by the center in April noted that in 2022, U.S. agencies awarded more than $2 billion in contracts to companies that provide services using AI tools. Total U.S. government spending on AI has increased by about 250 percent since 2017, the report said. 

Preventing bias

Some civil society groups criticized the Schumer-led road map for not forcefully addressing dangers such as bias and discrimination posed by AI systems. Peters said that’s a key risk federal agencies should aim to avoid.

He noted that experts warned lawmakers that AI-based decision-making models that are trained using vast quantities of data could contain hidden biases that even the developers are unaware of. 

“The federal government wants to make sure that there is no bias or as little bias as possible in their systems,” especially when it comes to decisions related to a person’s benefits, he said.

The bill aims to protect individuals’ privacy, as well, with prohibitions on facial mapping and using the information to assign emotions to individuals, Peters said.

Several AI companies offer technologies that claim to detect emotions by analyzing facial expressions. Those tools are used in a range of situations, from hiring for jobs to evaluating whether a driver is too tired.

Last year, a group of tech companies including Amazon.com Inc., Anthropic, Google LLC, Inflection, Meta Platforms Inc., Microsoft Corp. and OpenAI Inc. pledged to the White House to develop technologies in a “safe, secure, and transparent” manner. 

President Joe Biden later codified those commitments in an executive order and required AI companies to share safety data with the U.S. government. But voluntary measures alone are not enough, and a buyer as big as the U.S. government has a right to demand standards from vendors, Peters said. 

“You’d expect private industry to demand certain things out of the vendors that are selling their products,” Peters said. “The federal government is no different. We’re saying … this is what we expect those vendors to build into their products. And that’s why this legislation is so important, because as vendors and companies are building those safeguards and those processes in place to sell to the federal government, I would expect private industry is going to want that standard as well.”

The Peters-Tillis bill deviates from some other proposals in at least one respect.

A framework from Sens. Mitt Romney, R-Utah; Jack Reed, D-R.I.; Jerry Moran, R-Kan.; and Angus King, I-Maine, envisions a single federal agency that would oversee advanced AI models and create a licensing regime. That’s similar to a proposal by Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo.

The Peters-Tillis proposal, however, would leave standard-setting and risk assessment to individual agencies rather than to a new entity.

While individual agencies would have to develop their own technological expertise, they could also look to the White House Office of Management and Budget and the General Services Administration for procurement templates that could then be tailored to each agency’s needs, the Center for Democracy & Technology’s Givens said.

The description of a facial mapping provision was corrected in this report.
