Artificial intelligence could transform politics as profoundly as television or radio did, giving the early masters of the nascent technology a sizable — and perhaps decisive — advantage in the upcoming elections.
But even as campaign professionals embrace AI, they worry that the newfound ability to quickly and cheaply generate convincingly deceptive audio and video has troubling implications for a political system already beset by misinformation. How can voters hold politicians accountable for their failings if they believe those failings are fake? How will campaign professionals respond when their candidates are smeared with fabricated “recordings”?
Despite the widespread anxiety over deepfakes’ effects on democracy, political consultants say they are more excited about generative AI’s potential to tackle boring grunt work and expand their ability to deploy big-race tactics in down-ballot contests.
AI’s real impact on campaigning will be “behind the scenes,” said Tom Newhouse, vice president of digital marketing at Converge Media, a Republican advertising and consulting firm. “It’s going to be improving fundraising capabilities by better targeting, whether that is location targeting, income, hobbies or habits, providing campaigns with up-to-date voter data, more personalized advertising, [or] messages.”
“There are many small campaigns that I think can potentially leverage the tools to [not just] save time, but to create content that may not have been possible otherwise,” said Larry Huynh, a partner at Trilogy Interactive, a Democratic digital marketing firm.
Campaign professionals across the country are now racing to see how they can use these new machine learning tools to supercharge their work in advance of their first big test: the 2024 presidential elections.
“Anyone who wants to do their job better here — and in any industry, let alone politics — is trying to see how the tool can be beneficial to their work,” said Huynh, who is also the incoming president of the American Association of Political Consultants.
The election pros CQ Roll Call spoke to all expect AI to give some tech-savvy candidates a big leg up on their opponents.
“Campaigns that can innovate and lean into these tactics are going to have a strategic advantage,” said Newhouse.
From fireside chats to ChatGPT
A short survey of the history of technology and politics suggests that the earliest adopters and first masters of new technologies benefited greatly.
Franklin Delano Roosevelt wasn’t the first president to address the nation by radio; that would be Warren G. Harding. But unlike his predecessors, who either orated into the microphone as if delivering a whistle-stop speech or stiltedly read their remarks like a sixth grader in front of the classroom, FDR’s weekly fireside chats were delivered as if the listeners were sitting beside him, giving him a reassuring, avuncular presence in homes across the nation.
At the other end of the spectrum, radio was the front line of fascism’s propaganda war: Benito Mussolini seized the Italian radio stations as he consolidated power, while Joseph Goebbels commissioned and subsidized cheaper radio receivers to further the reach of Adolf Hitler’s rants.
Television soon replaced radio as the preferred medium of mass communication. President Dwight D. Eisenhower was the first to regularly broadcast his news conferences and to hire an ad agency to advise his campaign. The election of 1960 arguably was decided by the first televised presidential debate, where voters who watched on TV preferred a youthful and tan John F. Kennedy to a sweaty, sallow Richard M. Nixon, while radio listeners thought the Republican had performed better. Nixon learned his lesson, and eight years later he embraced TV wholeheartedly to sell a “new Nixon,” hiring an entire team of Madison Avenue executives and TV producers to run his campaign, including a young Roger Ailes. In the words of Joe McGinniss, Nixon “depended on a television studio the way a polio victim relied on an iron lung.”
With new technologies came newfound boons to politicians who knew how to use them — or, more accurately, find someone who did. Ronald Reagan benefited from the direct mailing work of Richard Viguerie, George W. Bush from the microtargeting innovations of Karl Rove. Barack Obama’s campaigns would expand on that data-driven work to organize grassroots enthusiasm into an army of volunteers, while Donald Trump’s team benefited from Cambridge Analytica’s mining of voters’ social media data.
“These are the political campaigns that told industry and private practice and commercial industries how to best leverage these tools,” said Newhouse, describing campaigns as “laboratories of innovation.”
Newhouse thinks those competitive pressures could mean that electioneering, itself a multibillion-dollar industry, might lead the corporate sector on adopting AI. “As you look forward [to the 2024 elections], political campaigns are going to be incentivized to use these technologies quicker … than you might see in the private markets,” he said.
AI is already being used in politics. After President Joe Biden announced his reelection campaign, the Republican National Committee released an AI-generated video that envisioned a dystopian future wrought by his four more years in office. In the Chicago mayoral primary earlier this year, a Twitter account posing as a local news outlet posted a deepfake video impersonating candidate Paul Vallas on the eve of the election. And campaigns have used machine-learning models to guide their ad buys on social media platforms like Facebook for years now.
Right now, though, it’s the potential to use large language models like OpenAI’s ChatGPT to update voter files, perform data analysis and program automated functions that excites political operatives the most. While well-funded Senate or gubernatorial races can afford to hire data scientists to crunch numbers, smaller campaigns rarely have that luxury, said Colin Strother, a Democratic political consultant based in Texas. AI will change that.
“I’m excited about some of the brute work that would be really great to do, but — unless you’re on a big-time campaign, with a ton of money and a ton of staff — you can’t afford to do,” Strother said.
Strother thinks AI will let campaigns upgrade their microtargeting operations into hypertargeting and speed up their rapid response operations. An AI-powered chatbot fed with an opponent’s every public utterance and voting record could churn out instantaneous rebuttals to whatever they say in a speech, debate or campaign ad.
He also expects campaigns to use AI to update voter databases in real time, perform trend analyses and send tailored communications to different tranches of voters. Those could be persuasive pitches to undecided independents on the issues that probably matter most to them or fundraising appeals to stronger supporters playing on their biggest fears if their candidate loses. “If you can automate all of that, then these previously time-intensive, labor-intensive and therefore cash-intensive campaign functions get a lot easier to do and a lot cheaper to accomplish,” Strother said.
Huynh also sees AI leveling the playing field somewhat between well-heeled and poorer campaigns. His agency has already played around with AI to create a video that portrayed one client as a superhero. It’s the kind of project that would have required renting out a studio with green screens, blocking off hours of the client’s time and hiring some computer animators and editors to put it all together — but, with AI, was created “for fun” in no time at all.
Already the proliferation of high-end video editing software and improved cameras has meant that any candidate for town dogcatcher with an artsy nephew can put together a slick campaign ad, even if they can’t afford to run it anywhere but their own YouTube page. AI will only supercharge that trend, said Huynh.
“You’ll see some examples of content and video graphics images that generative AI will make possible for leaner campaigns to do that they wouldn’t have been able to do otherwise,” he said.
As a result, for most voters, the rise of AI might not mean being bombarded by hard-to-spot political lies so much as just being bombarded by political content in general. The history of technology’s impact on work suggests campaigns will be able to do more with less, and thus will make more of the election stuff that inundates and annoys voters: more online ads, polls, texts, emails, fundraising appeals, emotional manipulation and another hive dumped on top of the growing swarm of robocalls incessantly buzzing our cellphones.
A recent report in The New York Times showed how 527 groups used computer-generated audio to target conservatives, raising $89 million purportedly to support cops, firefighters and veterans politically. The money instead went almost entirely into the pockets of the fundraising groups themselves.
That potential for bad actors to abuse AI worries campaign professionals.
“Y’know, I am apprehensive about it being used for evil,” Strother said.
“From a professional standpoint, I’m excited,” said Newhouse. “From a personal standpoint, it’s tough not to get concerned that the trends of the last decade are going to continue as far as laying the burden at the feet of the voters to figure out what’s true and what’s false.”
Earlier this month, AAPC released a policy statement forbidding its members from using generative AI to create deepfakes that mislead the public. But Huynh knows not every operative will abide by the trade group’s ethical standards. “I expect some campaigns to implement things that we believe are inappropriate, not good for campaigns and not good for democracy,” said Huynh. “There will be thousands and thousands of campaigns that happen in 2024. It would be silly to dismiss that possibility.”
Strother’s primary concern is AI’s ability to flood the zone with misinformation, creating “chaos,” which disproportionately turns off the more moderate voters who don’t view politics as an extension of their social identity. “If we’re creating homework for them — and they now have to figure out which one of us is telling the truth [and] which one of us is lying — they’re going to bail because they’re not looking for extra work to do,” he said.
Regulation to the rescue?
The use of AI remains largely unregulated. Rep. Yvette D. Clarke, D-N.Y., introduced a bill that would extend the Federal Election Campaign Act’s disclosure rules for TV and radio advertising — the “I approve this message” disclaimers — to online content and add another notice requirement for ads that use AI-generated material. “When you think about our adversaries that have been tampering with our [elections], this is another tool that they can access, knowing that we’re vulnerable in an open society to misinformation and disinformation,” Clarke told CQ Roll Call.
Democratic Sens. Amy Klobuchar of Minnesota, Cory Booker of New Jersey and Michael Bennet of Colorado recently introduced similar legislation in the Senate, while Majority Leader Charles E. Schumer has met with Sens. Martin Heinrich, D-N.M., Mike Rounds, R-S.D., and Todd Young, R-Ind., to brainstorm bipartisan legislation.
Given that Congress has yet to pass legislation regulating how social media companies utilize user data, some advocates worry lawmakers will similarly fail to act quickly on AI. But AI’s ability to generate election-swaying deceptions is far more obvious than social media’s influence on votes. A deepfake’s ability to fool voters is readily apparent to lawmakers, as Sen. Richard Blumenthal, D-Conn., made clear by opening his remarks at a Senate Judiciary subcommittee hearing last week by playing AI-generated audio of his own voice reading an AI-generated script. At that hearing, Republicans sounded receptive to OpenAI CEO Sam Altman’s calls for a new federal agency to regulate AI and his support for disclosure rules.
But even if Congress does act quickly, there are questions of how much it will help.
Given the prevalence of dark-money political groups, the First Amendment’s broad protections of political speech and the fact that defamation lawsuits take years to resolve, the schemers behind a libelous deepfake could potentially avoid any liability by hiding behind the organization’s corporate veil. “It really opens the door for some really nasty stuff,” said Strother.
People are prone to logical hiccups like politically motivated reasoning, whereby the desire to maintain a socially held belief overpowers the ability to grapple with a truth that challenges those convictions. And then there is confirmation bias: accepting information that confirms our prior beliefs while dismissing data that confounds them. That means that once a false conviction is set, subsequent fact-checking may not be enough to dislodge the fantasy.
Despite hundreds of court cases, audits and media reports proving otherwise, polls consistently show that a majority of Republican primary voters believe the 2020 elections were stolen. So a computer-fabricated recording of Biden purporting to admit to such a plot wouldn’t need to be good to be effective. It doesn’t take much to convince the faithful that devils walk among us.
Those worries over AI may be overblown or premature. So far, political deepfakes have been more amusing than malicious. But the technology is advancing rapidly, leading to uncertainty over its potential for good or ill, and uncertainty tends to compound anxiety. That may be why the political consultants experimenting the most with AI are the least bothered about its downsides — although they’re still concerned.
Even as he continues to expand his work with AI, Newhouse hesitates to make predictions on its impact without a caveat.
“If we talked about this technology six, seven months ago, you’d probably get a blank stare,” said Newhouse. “So to predict another six, seven months in the future, let alone 16 months in the future to the November 2024 elections, it’s difficult to say what that looks like.”