
Social media algorithms threaten democracy, experts tell senators

Facebook, Google, Twitter go up against researchers who say algorithms pose existential threats to individual thought

Monika Bickert, vice president for content policy for Facebook, makes an opening statement Tuesday via videoconference during a hearing about social media algorithms at the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. (Getty Images)

A Senate hearing on Tuesday pitted three powerful social media companies against researchers who testified that the engagement-driven algorithms the platforms use to generate revenue pose existential threats to individual thought and to democracy itself.

The hearing before the Judiciary Subcommittee on Privacy, Technology, and the Law featured a bipartisan approach to the issue from the new chairman, Democratic Sen. Chris Coons of Delaware, and the ranking member, GOP Sen. Ben Sasse of Nebraska. Algorithms can be useful, the senators agreed, but they also amplify harmful content and may need to be regulated.

Government relations and content policy executives from Facebook, YouTube, and Twitter described for the senators how their algorithms help them identify and remove content in violation of their terms of use, including hateful or harassing speech and disinformation. And they said their algorithms have begun “downranking,” or suppressing, “borderline” content.

Monika Bickert, Facebook’s vice president for content policy, said it would be “self-defeating” for social media companies to direct users toward extreme content. 

But Tristan Harris, a former Google design ethicist who now runs the Center for Humane Technology, told the committee that no matter what steps the companies took, their core business would still depend on steering users into individual “rabbit holes of reality.”

“It’s almost like having the heads of Exxon, BP, and Shell here and asking about what you’re doing to responsibly stop climate change,” Harris said. “Their business model is to create a society that’s addicted, outraged, polarized, performative and disinformed.”

“While they can try to skim the major harm off the top and do what they can — and we want to celebrate that, we really do — it’s just that they are fundamentally trapped in something they cannot change,” Harris continued.


Joan Donovan, the research director at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy, said the platforms should be required to offer users a “public interest” version of their news feeds or timelines and provide robust tools to moderate content.

“We didn’t build airports overnight but tech companies are flying the planes with nowhere to land,” Donovan said. “The cost of doing nothing is nothing short of democracy’s end.”

Coons and Sasse commended the platforms for efforts to suppress harmful content and increase transparency but questioned whether they would do enough if left to their own devices. Coons noted that Facebook recently took special measures to limit misinformation and violent content ahead of the verdict in the trial of former Minneapolis police officer Derek Chauvin, who was convicted in the May 2020 murder of George Floyd, a Black man.

“My question for you is why wouldn’t Facebook always limit the rapid spread of content likely to violate your standards?” Coons asked Bickert.

Bickert responded that such measures, in addition to removing harmful content, might also limit the spread of “false-positive” content that would not violate the company’s policies.

“So there is a cost to taking action on that content,” Bickert said. “But in situations where we know there is extreme or finite risk, such as an election in a country experiencing unrest or in Minneapolis with the Chauvin trial, we’ll put in a temporary measure where we’ll de-emphasize content that the technology, the algorithms, say is likely to violate [company policy].”

Coons said the hearing was a learning opportunity for both him and Sasse. He said he had no specific regulatory agenda but believed the issue demanded urgent attention, and that he would consider supporting voluntary, regulatory or legislative remedies.

Sasse said the piecemeal approaches by each company were “irreconcilable” with the broad challenges described by Harris.

“He’s making a big argument and we’re hearing responses that I think are only around the margins,” Sasse said.
