Gallagher advocates targeted approach to AI regulation

Congress ‘rarely does comprehensive well,’ chair of cyber panel says

Chairman Mike Gallagher conducts a meeting of the Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party on May 24. (Tom Williams/CQ Roll Call)

As the Senate and its top Democrat eye comprehensive legislation for regulating artificial intelligence, House Republican Rep. Mike Gallagher is advocating a “clinical, targeted and iterative approach” to the technology. 

Wisconsin’s Gallagher, who helms the House Armed Services’ cyber subcommittee, said that could include the build-out of the Pentagon’s previously released AI ethical principles across other parts of the U.S. government. Eventually, that effort could grow to include international allies and partners. 

But overall, he stressed the importance of moving quickly during the current Congress to knock out straightforward AI regulatory priorities on issues that extend beyond the Defense Department, ranging from autonomous vehicle reciprocity to controls on outbound capital investments. 

“I think the instinct in the House — and I don’t speak for my colleagues but I think it’s a bipartisan instinct — is to not do comprehensive because Congress rarely does comprehensive well,” he told reporters Tuesday. 

Others, though, aren’t wholly ruling out a more extensive effort. While Rep. Ro Khanna, the ranking member of the cyber panel, said in an interview last week that he could see lawmakers “start with some of the low-hanging fruit and tackle those issues,” he also said he’s “open to a comprehensive approach.”

Regardless, the California Democrat, whose district includes Silicon Valley, said he wants to see legislation presented “in an informed way,” with a commission of experts driving recommendations both for the military and for wider AI applications. 

“On a broad scale, we need some form of human judgment in decision-making. We need some sense of transparency when it comes to understanding what AI is being used for and the data sets that are being used. We need to have a safety assessment,” he said. “But I think the details of this really need to be worked out by people with deep knowledge of the issues.” 

The comments come as lawmakers have intensified their focus on AI, with the Senate poised to hold its third and final briefing on the topic next week. Meanwhile, Majority Leader Charles E. Schumer, D-N.Y., is gearing up to release a “comprehensive” plan in the coming months that he has said will “protect, expand, and harness AI’s potential.”

Still, big questions persist about appropriate AI safeguards and Congress’ ability to legislate the fast-moving technology without stifling innovation. 

“The tension underlying all of this is we don’t want to overregulate our advantage in the AI race out of existence,” Gallagher said. 

‘Free world framework for AI’

Guardrails on the military applications of AI, both domestically and worldwide, were a prominent area of discussion during a House Armed Services Cyber, Information Technologies, and Innovation Subcommittee hearing Tuesday. 

“How we deal with AI as it pertains to national security … is one of the most important topics for us to get right in the near term,” said witness Alexandr Wang, the chief executive officer of software company Scale AI. 

The Pentagon in recent years has moved to issue and adopt AI ethical principles and a responsible AI strategy as it works to leverage evolving technology developments and become more data-centric in its approach to warfighting. 

During the hearing, Gallagher questioned whether those ethical principles could serve as a foundation for a “free world framework for AI.”

The document could potentially be extended to other parts of government, something that may require an endorsement via legislative language, he told reporters after the hearing. It could then be used internationally to ensure that the United States is on the same page with the British, the Australians, the other member countries of the Five Eyes alliance and the North Atlantic Treaty Organization. 

On the subject of international partnerships in AI, Klon Kitchen, a nonresident senior fellow at the think tank American Enterprise Institute, told lawmakers that nations moving forward will work to “build trusted technology ecosystems amongst trusted partners and allies,” adding that underpinning those ties is a “common understanding of the opportunities and challenges.”

“When we think about military interoperability in these types of alliances, we also need to understand that military interoperability is going to be predicated on regulatory interoperability,” he said. “And that is where we have a real gap between us and some of our key friends.” 

Asked about the potential for an AI governance structure to curb officials’ efforts to scale the technology, Haniyeh Mahmoudian, a global AI ethicist with software company DataRobot, said such practices don’t present obstacles to deployment. 

“A robust and comprehensive governance process actually enables us to have standards and policies in place that can easily apply to any AI use case that we have,” she said. “So with that foundation of AI governance, we would be able to replicate the process for any AI use case that we have.”
