Promise or peril? AI may offer both to congressional staff
Technology can cut tasks down to seconds if the 'hallucinations' can be avoided
Daniel Schuman had a problem. Schuman needed to prepare for a presentation, but like any other “lazy person” with a lot of other stuff to do, the policy director at Demand Progress didn’t want to put in the time and effort to write the darn thing. So he asked a bot to do it instead.
“My starting point for this presentation was to ask AI ‘how you can use AI,’” Schuman said Friday before a packed room of over 100 congressional staffers gathered to hear how artificial intelligence tools could help — or hurt — their work on the Hill.
Schuman fed his prompts into ChatGPT, the experimental text-generating model from OpenAI, and used its responses to demonstrate the technology’s promise and peril. The bot’s first bullet point among the pros — “ability to analyze vast amounts of data and inform in a timely manner” — was dead on, Schuman said. But its next — “can assist in drafting legislation” — was way off. “I don’t think that’s true at all,” he said.
The event was co-hosted by the POPVOX Foundation, Lincoln Network and Demand Progress in collaboration with the Chief Administrative Officer’s House Digital Service office. HDS has a suite of AI software licenses available for staffers to experiment with through an ongoing working group. Offices can sign up via the HDS website.
Like all technology going back to the first stone ax, AI can cut down the work hours spent on drudgery, the panel said. HDS’ Ken Ward suggested that text-generating programs like ChatGPT would be great for coming up with first drafts of “Dear colleague” letters and press releases, which staffers could then edit for accuracy.
Schuman pointed to another potential timesaver for aides: Coming up with bill title “backronyms.” Where Schuman once spent three hours coming up with a bill title, ChatGPT took two seconds to produce 10 options for a debt ceiling bill, including the “Halting Outdated Restrictions In Zealous investment in Our Nation (HORIZON) Act.”
There are obvious limitations to this generation of AI technology, which has a well-documented loose relationship with facts. Roll Call attempted to ask ChatGPT to turn a transcript of Friday’s event into a “news story in the style of Roll Call.” It instead came up with a summary of a completely fabricated event hosted “Thursday” by the “Congressional Artificial Intelligence Caucus” (which is real but wasn’t involved), co-chaired by “Congressman James Garcia” (not real) and keynoted by “Stanford University’s Dr. Samantha Chen” (also make-believe).
These AI “hallucinations” are usually less common when the programs are given a specific data source to generate their responses — the attempt at a news story failed probably as much because of user error as buggy software. Schuman said his own experiments using ChatGPT to summarize lengthy Congressional Research Service reports produced mostly good — but still imperfect — results.
Still, Congress will need to keep up with the arms race as lobbyists and constituents use AI for, or against, it, Friday’s panel warned. While technology often advances exponentially, policymaking is a much slower grind, said POPVOX Foundation CEO Marci Harris. “Are you able to use modern tools and processes for your work in ways that are similar to the ways that other industries are going to be using these new technologies?” Harris asked.
The internet has already made it much easier for interest groups to “astroturf,” or mimic authentic grassroots movements, by inundating congressional offices with thousands of mostly identical emails. AI, however, will allow them to flood offices with seemingly unique, individually written correspondence, lending the impression that thousands of average Americans really do have strong opinions about something like proposed changes to the Financial Accounting Standards Board’s expected credit loss reporting principles.
It’s an issue that Lars Erik Schönander, a policy technologist at the Lincoln Network, expects staff for the Appropriations committees — which accept written testimony from the public — will need to confront soon.
The solution, Schönander said, is more AI — programs that can discern computer-generated text from the real thing.
Similarly, “deep fake” and voice cloning software will mean campaign staffers and the media will need to be ready to confront a flood of faked recordings of candidates uttering damning statements.
Garrett Auzenne, senior legislative counsel for Rep. Sheila Jackson Lee, D-Texas, was most excited about the potential to ask AI for computer code or spreadsheet macros that could take the reams of constituent correspondence that congressional offices receive and turn it into actionable data, or quickly translate information from various government sources into something legible. “Before this, you would have to be some sort of data scientist or computer coder that knows and understands APIs,” he said, referring to the application programming interfaces that software programs use to communicate with each other.
Still, Auzenne emphasized the need for caution, citing data privacy concerns and AI’s propensity to hallucinate. “Like, if you’re a radiologist and you use AI to look at [a growth] and it says ‘no cancer here!’ I’m going to want somebody else to review that,” he said.