Curbing disinformation: How much should social media companies do?
Both parties want tech firms to do more, but are at odds over what actions are required

Facebook’s efforts to limit online disinformation while simultaneously allowing politicians to lie in paid advertisements ahead of the 2020 election are forcing a debate over the responsibility of technology companies to crack down on domestic and foreign disinformation, and the consequences if they don’t.
Over the weekend, the company removed a political group’s advertisement that falsely claimed South Carolina Republican Sen. Lindsey Graham supported the Green New Deal. The move stands in contrast to an earlier decision to allow President Donald Trump’s reelection campaign to run an advertisement that claimed, without evidence, that former Vice President Joe Biden threatened to withhold $1 billion from Ukraine unless its government fired a prosecutor investigating Biden’s son Hunter.
The Green New Deal ad, ostensibly inspired by an exchange last week between New York Democratic Rep. Alexandria Ocasio-Cortez and Mark Zuckerberg, Facebook’s chief executive, represents the new reality Facebook now faces as it seeks to allow free expression while clamping down on disinformation.
“In a democracy, I believe people should be able to see for themselves what politicians who they may or may not vote for are saying so they can judge their character for themselves,” Zuckerberg said.
Democrats have blasted Facebook for allowing politicians to lie in ads. Massachusetts Sen. Elizabeth Warren, who is vying with Biden for the Democratic presidential nomination, accused the company of running a “disinformation-for-profit machine.”
Facebook’s decision to allow the anti-Biden ad, and its general approach to content moderation, has critics on both sides of the political aisle wondering whether technology companies should be forced to take more responsibility for false information posted on their sites.
At the center of the debate is a 1996 law, Section 230 of the Communications Decency Act, that gives technology companies, including social media platforms, broad immunity from liability for third-party content posted on their sites. That includes paid advertisements, such as the Trump campaign’s Biden video, courts have found.
In the more than two decades since its creation, the statute has been credited with allowing Silicon Valley to thrive and enabling freedom of expression online. But it also gives platforms significant cover for hosting “fake news,” a phenomenon that has renewed interest in possible changes to the law.
“One of the main goals of Section 230 in 1996 was to give platforms the flexibility to implement effective content moderation systems,” said Jeff Kosseff, a law professor at the U.S. Naval Academy and author of a book about the law, “The Twenty-Six Words That Created the Internet.”
“What happened in 2016,” Kosseff said, “had a big effect on the political debate because it highlighted the less-than-perfect content moderation systems that platforms have.”
Could Section 230 disappear?
But the path forward is muddled. For instance, proposals by Republicans to change Section 230 are largely driven by unproven allegations that content moderation at companies like Facebook and Twitter is biased against conservatives. Another issue is that, even if Section 230 were abolished, the First Amendment would still protect most third-party content, including hate speech and fake news.
Sen. Ron Wyden, the Oregon Democrat who helped write Section 230, said getting rid of the law in an effort to curb disinformation would not have the desired effect. In an interview with Recode’s Kara Swisher last year, Wyden said the law was envisioned as a shield from liability for early entrepreneurs but “also a sword to deal with irresponsible conduct.”
“Part of the reason the companies are in so much trouble,” Wyden said, “is that they haven’t been using the sword.”
As focus on the companies’ content moderation practices has ballooned, so has interest in Section 230, and what steps might be taken to change it.
“The political consensus around Section 230 that’s existed for the last 10 years or so is really starting to break down,” said Tim Hwang, a technologist and researcher who has written about disinformation for Stanford University’s Project on Democracy and the Internet. “Folks on both sides of the aisle are interested in revisiting the deal that was struck.”
Kosseff said that for the first two decades of its existence, Section 230 was “treated as almost a birthright by the platforms.”
“They got really arrogant about it, and they did not meet their responsibility as good corporate citizens, which was kind of assumed by the passage of Section 230, that they would take an approach that protects their users and society,” Kosseff said.
Then came the 2016 election, Russian interference in it, and with it increased scrutiny of Big Tech. In April, Speaker Nancy Pelosi described Section 230 as a “gift” to platforms and said she didn’t think they “are treating it with the respect that they should.”
“It is not out of the question that that could be removed,” the California Democrat said of the section.
Missouri GOP Sen. Josh Hawley, often Pelosi’s ideological opposite, introduced legislation in June to revoke Section 230 protections for large technology companies and instead require them to earn those protections by submitting to a government audit that would determine their political neutrality.
Too much or too little
But neither approach would do much to directly target the spread of disinformation on the platforms.
“There are two very different views about platforms and Section 230 that are really hard to reconcile,” Kosseff said.
On one hand, he said, Republicans are accusing platforms of doing too much moderating, and doing it in an anti-conservative way. On the other, Democrats say the platforms aren’t moderating enough and that too much disinformation is being allowed to spread.
“What one person considers to be legitimate political discourse, another could consider to be hate speech,” Kosseff said. “And you have the platforms in the middle saying, ‘What do we keep and what do we get rid of?’ They’re not going to make everyone happy no matter what they do.”
Facebook, for its part, has outlined its plan to limit disinformation. In a speech at The Atlantic Festival in Washington on Sept. 24, Nick Clegg, Facebook’s vice president of global affairs and communications, acknowledged that the company “made mistakes in 2016, and that Russia tried to use Facebook to interfere with the election by spreading division and misinformation.”
But the company has “learned the lessons of 2016,” Clegg said, and “spent the three years since building its defenses to stop that happening again.”
He said Facebook is blocking the creation of millions of fake accounts, many of them automated “bots,” each day. The company has also contracted with independent fact-checkers and hired 30,000 content moderators. It is also exploring ways to use artificial intelligence to detect and cull dangerous content.
In some ways, this type of behavior is what the authors of Section 230 had in mind, the “sword” to which Wyden referred.
That, Kosseff said, was precisely the flexibility to build effective content moderation systems that Section 230 was meant to give platforms, although it’s hard to know whether Facebook’s tactics are proving effective.
But there are also political realities with which to contend, not to mention ongoing criticism of Facebook’s policy on political ads. More than half of Americans believe social media companies “spread lies and falsehoods,” compared with only 31 percent who say they’re a source for real news, according to polling published in April by NBC News and The Wall Street Journal.
Big technology companies are increasingly unpopular among the American public, Kosseff said, and are “starting to realize that if they don’t clean up their act, Section 230 is not going to be here.”