Why Zuckerberg’s New “Trust Indicators” Can’t Fix Fake News

A system providing additional context for publishers is too little, too late in a world where basic facts and mainstream publications are quickly written off as partisan.
Mark Zuckerberg. Photograph by Daniel Biskup/laif/Redux.

When Daily Beast correspondent Michael Tomasky took a recent dive into snopes.com, what he found there alarmed him. “As I toggled over the home page, I was flabbergasted by what a high percentage of Snopes articles now are devoted to debunking fake news,” he wrote. “And I don’t mean CNN. I mean, if I may use the phrase, real fake news. Garbage. And more specifically, right-wing garbage.” Prior to the era of fake news, Snopes was most commonly used to settle the occasional political dispute—now, however, the bulk of the site’s articles are dedicated to debunking salacious stories and conspiracy theories swirling around the Internet thanks to the likes of Stefan Molyneux and Alex Jones. Despite the site’s best efforts, sites like Infowars continue to spew stories with no basis in fact—stories Jones pushes as a counter-narrative to the “mainstream media,” despite new findings that they often originate there. And evidence shows his efforts are working; fueled by Republican discontent, a recent Gallup poll showed Americans’ trust in mass-media outlets has fallen to an all-time low.

The phenomenon has become a toxic feature of social-media platforms, whose mechanisms for promotion and sharing make them uniquely susceptible to disinformation. Facebook C.E.O. Mark Zuckerberg has done his best to defuse the backlash, declaring earlier this month that “of all the content on Facebook, more than 99 percent of what people see is authentic. Only a very small amount is fake news and hoaxes.” At the same time, tech giants are still scrambling for a way to counter fake news, their newest idea being a system of “trust indicators” that offer users additional context about the source of the information they’re reading. With the new verification system, CNN reports, publishers will be able to upload additional details about their outlets, their writers, and their own fact-checking policies; when you see an article from Vox on Facebook, for example, you may be able to tap into it for more details about Vox Media’s ethics policy and financial backers.

Facebook is reportedly the first social-media platform to pioneer the new verification system, which will be implemented alongside its existing fact-checking operation, in which fact checkers from third-party groups, including Snopes, are responsible for tagging potentially fake stories with a “disputed” label. But much like that endeavor, which is a year in the making and, according to the fact checkers themselves, not very effective, tech’s new trust indicators ultimately rely on users deciding whether or not to trust a publisher—the crux of the fake-news problem in the first place.

As the media landscape has fragmented in a world with limitless sources of contradictory information, it has become harder to achieve consensus on even the most basic facts, particularly with a presidential administration hostile to major news outlets. Donald Trump spent much of his campaign bashing legacy media outlets as “fake news,” kicking off his tenure in the White House by lambasting CNN for misreporting the size of his inauguration crowd. And when three CNN staffers resigned earlier this year after the outlet published and then retracted a story linking Anthony Scaramucci to the Senate Intelligence Committee’s Russia investigation, the White House latched onto the network’s decision in an effort to undermine its credibility, with press secretary Sarah Huckabee Sanders decrying its “constant barrage of fake news.”

The increasingly hyper-partisan divide is evident in the newest wave of Snopes articles, which have titles like, “Was Barack Obama President During Hurricane Katrina?” (obviously not); “Was the Texas Church Shooter an Antifa Member Who Vowed to Start a Civil War?” (easily debunked); and, of course, “Hillary Clinton Gave 20 Percent of United States’ Uranium to Russia in Exchange for Clinton Foundation Donations?” (a little more complicated, but no). But the very readers such articles are aimed at—those who subscribe to the theories they disprove—are arguably the least amenable to them. If a reader has already decided to trust a site like Jones’s over The New York Times, for example, then Snopes’s efforts will do about as much to sway them as Facebook’s new trust indicator. Those who care enough to read about which venture-capital firms fund BuzzFeed’s operation would have read and trusted BuzzFeed anyway; the people who would write BuzzFeed off will do so, trust indicator or not. Though Facebook spokespeople have acknowledged that there is no “silver bullet” for combating misinformation on the site, the true scope of the problem may be worse. At what point do investors (and regulators and journalists) concede that “fake news” is part of a larger cultural and epistemological war that Mark Zuckerberg cannot win?