
Facebook can’t be trusted to stop spread of extremism, experts tell Senate

Social media companies cannot police themselves because their business models will always prioritize the spread of polarizing content, specialists say

Senate Homeland Security Chairman Gary Peters says pledges by social media companies to crack down on harmful content “have gone largely unfulfilled for several years now.” (Tom Williams/CQ Roll Call file photo)

The failure of social media companies to contain the spread of violent extremist content on their platforms will continue to have deadly real-world consequences unless the government intervenes, a panel of internet watchdogs and academics told senators on Thursday.

Facebook, in particular, is responsible for spreading extremist content and fomenting domestic terrorism, including activity related to the “Stop the Steal” movement that led to the deadly Jan. 6 riot at the Capitol, the experts testified during a hearing of the Senate Homeland Security and Governmental Affairs Committee.

Karen Kornbluh, who directs the Digital Innovation and Democracy Initiative at the German Marshall Fund of the United States, described how social media algorithms can quickly lure a user from innocuous political content to instructional videos for forming a militia.

Kornbluh called this dynamic “a national security vulnerability,” one that gives a small number of content producers the ability to exploit social media algorithms to gain enormous reach.

“Social media goes well beyond providing users tools to connect organically with others,” she testified. “It pulls users into rabbit holes and empowers small numbers of extremist recruiters to engineer algorithmic radicalization.”
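The dynamic Kornbluh and other witnesses describe can be sketched in a few lines of code. The toy model below is illustrative only: the posts and engagement scores are invented, and the deliberately crude assumption that more-polarizing content attracts more engagement is this sketch's own, not a description of Facebook's actual ranking system.

```python
# Toy model of engagement-ranked recommendation (illustrative only; not any
# platform's real system). Assumption: more-extreme posts tend to draw more
# predicted engagement, the dynamic the witnesses described.
import random

random.seed(42)

# Synthetic feed: each post has an "extremity" score in [0, 1]; predicted
# engagement is modeled here as extremity plus noise.
posts = [{"extremity": random.random()} for _ in range(1000)]
for p in posts:
    p["predicted_engagement"] = p["extremity"] + random.gauss(0, 0.1)

def recommend(posts, k=10):
    """Greedy engagement ranking: surface the k posts the model predicts
    will be interacted with most, regardless of what they contain."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)[:k]

feed = recommend(posts)
avg_all = sum(p["extremity"] for p in posts) / len(posts)
avg_feed = sum(p["extremity"] for p in feed) / len(feed)
print(f"average extremity, all posts:        {avg_all:.2f}")
print(f"average extremity, recommended feed: {avg_feed:.2f}")
```

Under these assumptions, even a small engagement edge for polarizing material is enough for it to dominate the top-ranked slots, which is how a handful of producers can achieve outsized reach.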

David Sifry, a former social media executive and current vice president of the Anti-Defamation League’s Center for Technology and Society, said platforms can no longer be depended on to police themselves because their business models, and the algorithms behind them, will always prioritize the spread of lucrative yet polarizing content.

“How many lives will be lost before companies put people before profit?” Sifry asked.

Understanding the algorithms powering the platforms is key to stopping the spread of extremist content, said Nathaniel Persily, a Stanford University law professor who is co-director of the Stanford Cyber Policy Center. If companies refuse to share their algorithms with independent researchers, the Federal Trade Commission should compel them, he said.

“They have lost their right to secrecy,” Persily said. “We are at a critical moment when we need to know exactly what is happening on these platforms.”

Unless researchers are granted unencumbered access to the algorithms, Persily said, an accurate public understanding of how the platforms operate is impossible.

“It doesn’t matter if Facebook says it takes down 4 billion accounts each year,” he said. “That’s interesting, but we don’t know what the denominator is.”
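Persily’s objection is, at bottom, about a missing denominator: a removal count only becomes a rate when divided by a total. The rough sketch below makes the point with invented totals, since that is precisely the figure the platforms do not publish.

```python
# Why the denominator matters (totals are hypothetical, for illustration):
# the same takedown count implies very different removal rates depending
# on how many accounts exist in total.
removed = 4_000_000_000  # accounts taken down per year, the figure cited above

for total_accounts in (8_000_000_000, 400_000_000_000):
    rate = removed / total_accounts
    print(f"total accounts {total_accounts:,} -> removal rate {rate:.1%}")
```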

Persily said regulators should no longer accept claims that the algorithms are proprietary technology that could pose business risks to the companies if they are disclosed.

Documents recently shared by Facebook whistleblower Frances Haugen “brought front and center that the internal researchers at these companies know an enormous amount about us and we know very little about them,” Persily said. “This is an unsustainable equilibrium.”

In a statement in response to the hearing, a spokesman for Facebook said the company is on track to spend more than $5 billion on safety and security this year and would support industrywide regulations aimed at reducing the spread of harmful content.

“While we have rules against harmful content and publish regular transparency reports, we agree we need regulation for the whole industry so that businesses like ours aren’t making these decisions on our own,” said the spokesman, Drew Pusateri.

It’s a tenuous moment for the social media giant. Facebook is dealing with the fallout from Haugen’s disclosures while fending off bipartisan outrage over the spread of violent content, as well as over revelations that the company failed to disclose internal research showing that Instagram, its subsidiary, has negative effects on the mental and physical health of teenage users.

Sen. Gary Peters, D-Mich., the chairman of the Homeland Security Committee, said pledges by the companies to crack down on harmful content “have gone largely unfulfilled for several years now.”

“Americans deserve answers on how the platforms themselves are designed to funnel specific content to certain users, and how that might distort users’ views and shape their behavior, online and offline,” Peters said.
