AI chatbots should pay for news, bipartisan Senate group says
Some experts see shift in reporters' responsibilities as technology use grows in newsrooms
Senators who have raised bipartisan outcry over the demise of newsrooms at the hands of Big Tech companies are rallying to protect journalism from the potentially fatal blow of artificial intelligence.
They’re hoping to ensure that news organizations receive full compensation when algorithms are trained using news articles — and are urging colleagues to act before the twilight of human-generated news sets in.
“There are legitimate fears that AI will directly replace journalists,” Sen. Richard Blumenthal, D-Conn., chairman of the Judiciary Committee’s Privacy, Technology and the Law Subcommittee, said at a hearing last month. Blumenthal urged senators not to repeat a prior mistake by being too slow to regulate technology’s impact on the industry.
“Our purpose here must be to determine how we can ensure that reporters and readers reap the benefits of AI and avoid the pitfalls,” he said.
The danger of AI arrives amid findings by Northwestern University that one-third of local newspapers in the U.S. have folded since 2005. And it’s been a brutal start to the year for the industry, with layoffs of more than 500 journalists in January, including at major publications like the Los Angeles Times, Time magazine and Business Insider.
AI is already a reality in newsrooms across the country. Journalists use it to generate transcripts during interviews, analyze large troves of leaked documents and perform other repetitive tasks more efficiently.
But the advent of generative AI, the technology behind tools such as OpenAI’s ChatGPT that creates new content, is seen as a direct threat to journalists’ jobs, despite misgivings over the quality and accuracy of what it produces. The New York Times is suing OpenAI and Microsoft over this very issue, asking a federal judge in Manhattan to block the companies from using its stories to train chatbots.
“Adding insult to injury, those models are then used to compete with newspapers and broadcast, cannibalizing readership and revenue from the journalistic institutions that generate the content in the first place,” Blumenthal said at the hearing.
Blumenthal and Sen. Josh Hawley, R-Mo., issued a legislative outline in September that would create licensing requirements to guarantee that “newspapers and broadcasters are given credit financially and publicly for reporting and other content” used by AI companies, Blumenthal said.
Hawley stressed that Congress has a role to play in making sure newsrooms can protect their content from being scraped off their websites and then regurgitated, for free, by AI chatbots.
“That includes hoovering up content they didn’t pay for and using it to make their own platforms more powerful. Frankly, the big AI companies should be paying for whatever personal data they’re using to train their models, rather than swiping it now and asking questions later. That’s basic justice,” Hawley said in a statement to CQ Roll Call.
As part of the outline, Hawley proposed a measure co-sponsored by Blumenthal that would waive immunity for generative AI content under Section 230 of the Communications Decency Act of 1996, which shields internet companies from liability for the content users post on their sites. As a stand-alone measure, it could potentially move faster in the Senate than their full slate of proposals.
The bill, though, has already hit a speed bump. Hawley took to the floor to try to pass the legislation by unanimous consent in December but was blocked by Sen. Ted Cruz, R-Texas.
Other legislation includes a bill, led by Sen. Amy Klobuchar, D-Minn., and backed by eight Republicans, that would temporarily exempt newsrooms from antitrust law and allow them to negotiate collectively with Big Tech companies for compensation when the platforms distribute news content. It’s modeled after laws in Australia and Canada that Klobuchar said have redirected millions of dollars back into the journalism industry.
‘Grow the pie’
Many experts who acknowledge the risks say the technology can free journalists from mundane and rudimentary tasks, such as writing headlines or summarizing complex scientific studies, that suck time away from in-depth reporting.
Nicholas Diakopoulos, an associate professor of communication studies and computer science at Northwestern University, has examined the effects of AI on journalism for 15 years. He expects AI to act as a complement to journalists, not a substitute, as the process of producing news evolves.
“It’s going to grow the pie, so to speak, of the amount of work that gets done. But it’s also going to change the nature of that work,” he said. “It’s going to mean maybe being a reporter is a little bit less writing, and maybe it’s a little bit more formulating the information you’ve collected so that it can be rewritten by generative AI.”
Contrary to some expectations, more human effort may be needed as AI tasks are scaled up, he added.
David Karpf, a professor of media and public affairs at George Washington University, said the pace of AI integration into newsrooms may dictate how successful the transition will be. Newsrooms under financial strain that use the technology to slash jobs will do so at the expense of quality, Karpf said, because human eyes are still needed to check for so-called hallucinations, the term for AI models’ “inaccurate or fabricated responses to queries.”
“The smart move is to move slow. … The only reason that I can see to move fast is companies wanting to downsize and cost-cut,” he said.
But both Karpf and Diakopoulos noted that journalism faces an existential crisis that goes beyond the threat of AI.
“All of the media layoffs we’ve seen are the result of massive transitions in the business model of media. There is so much competition for attention. … The fact that anyone is looking at news is kind of a miracle,” Diakopoulos said.
Danielle Coffey, president and CEO of News Media Alliance, said at the Senate Judiciary hearing that Klobuchar’s bill aims to rectify this “market imbalance.”
The bill, however, is not without critics, including the American Civil Liberties Union, which argues it would “significantly interfere” with press freedom and free speech online.
‘Devastating consequences’
Aram Sinnreich, professor and chair of communication studies at American University, said Democrats and Republicans can accelerate the role of government in regulating AI by adding the issue to their national political party platforms.
“We’ve done that with the environment. We’ve done that with foreign policy, with education. There’s no reason that you couldn’t do the same thing … when it comes to AI and media,” Sinnreich said in an interview.
The risk of deepfakes on the campaign trail has already motivated lawmakers. Klobuchar is leading a bipartisan group on another bill that would ban the “distribution of materially deceptive AI-generated audio or visual media” of individuals seeking federal office.
Deepfakes are another issue that could plague newsrooms.
“The rising prevalence of deepfakes makes it increasingly burdensome for both our newsrooms and users to identify and distinguish legitimate copyrighted broadcast content from the unvetted and potentially inaccurate content being generated by AI,” Curtis LeGeyt, president and CEO of the National Association of Broadcasters, testified at the Judiciary subcommittee hearing. “We have seen the steady decline in our public discourse as a result of the fact that the public can’t separate fact from fiction.”
Condé Nast CEO Roger Lynch also testified, warning that fake photos, videos and statements can damage companies’ brands.
“Americans can’t possibly spend all of their time determining what’s true and what’s false. Instead, they may stop trusting any source of information, with devastating consequences,” Lynch said. He urged lawmakers to clarify that the use of news content to train AI models is not fair use under copyright law and that AI companies must compensate newsrooms for both generative AI training and output.
So what does ChatGPT think Congress should do to regulate generative artificial intelligence in the journalism industry? When prompted, it churned out an article seemingly aware of its own shortcomings, focusing on the need to address “the potential for misinformation, biased reporting, and the erosion of journalistic standards.”
Gopal Ratnam contributed to this report.