Mark Zuckerberg is in denial about how Facebook is harming our politics

Facebook CEO Mark Zuckerberg might be the most powerful person in the news business.

Over the last generation, there’s been a big shift in the way people find news stories. We used to get our news from the morning newspaper and the nightly news. Now more and more of us pull out our smartphones and look at our Facebook feeds.

The result has been a disaster for the public’s understanding of current affairs. Reporters have come under increasing pressure to write “clickbait” articles that pander to readers’ worst impulses. Too-good-to-check stories gain more traction online than stories that are balanced and thoroughly reported. Partisans share totally fake stories from fringe websites. That has worsened the nation’s political polarization and lowered the quality of democratic discourse.

Could this flood of fake news have helped elect Donald Trump? A lot of people are wondering whether it was a factor. On Thursday evening, Facebook CEO Mark Zuckerberg was asked about Facebook’s responsibility for the election results at a California technology conference.

“There is a certain profound lack of empathy in asserting that the only reason someone could have voted the way they did is they saw some fake news,” Zuckerberg argued. Also, he insisted, fake news existed at both ends of the political spectrum. “Why would you think there would be fake news on one side and not the other?”

Of course, we don’t know if Facebook changed the outcome of the election. But even if Zuckerberg is right, fake news is a real problem on Facebook. And the broader problem is that Facebook’s leadership is in denial about the kind of organization it has become.

“We are a tech company, not a media company,” Zuckerberg has said repeatedly over the last few years. In the mind of the Facebook CEO, Facebook is just a “platform,” a neutral conduit for helping users share information with one another.

But that’s wrong. Facebook makes billions of editorial decisions every day. And often they are bad editorial decisions — steering people to sensational, one-sided, or just plain inaccurate stories. The fact that these decisions are being made by algorithms rather than human editors doesn’t make Facebook any less responsible for the harmful effect on its users and the broader society.

Facebook makes editorial judgments all the time


It’s easy to lump all social media together, but there’s a crucial difference between Facebook and Twitter, the two social media sites that people most often use to find news.

Until recently, Twitter really was a neutral platform. Each Twitter user chose a list of people to follow, and when you logged in you saw those people’s recent tweets in strict chronological order. So Twitter could plausibly argue that it wasn’t responsible for the stories users saw. Twitter has moved away from this approach somewhat in the last year, but for the most part the tweets users see are still determined by choices made by users, not by Twitter.

Facebook — which is now vastly more popular than Twitter — is different. The order of posts in the Facebook news feed is chosen by a proprietary Facebook algorithm. This algorithm takes into account a variety of factors, like how close you are to the poster, how many times a post has been shared or liked by other Facebook users, the type of post (wedding and baby announcements seem to bubble up to the top), and so forth. And then it chooses the posts it thinks you’re most likely to enjoy — whether they were posted three minutes or three days ago — and puts them at the top of your news feed.

Most of us will only ever see a fraction of the things our friends post on Facebook, so the ability to decide which posts to show first amounts to the power to control which posts users read at all.
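
To make that concrete, here is a rough sketch, in code, of what engagement-driven ranking looks like. The factors, field names, and weights below are invented for illustration; Facebook’s real formula is proprietary and far more elaborate.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_closeness: float  # how often the viewer interacts with the poster, 0.0-1.0
    likes: int               # likes from other Facebook users
    shares: int              # shares by other Facebook users
    is_life_event: bool      # wedding and baby announcements, and the like
    age_hours: float         # how long ago the post was published

def rank_feed(posts):
    """Order posts by a predicted-engagement score, highest first."""
    def score(p):
        s = 3.0 * p.author_closeness         # closeness to the poster
        s += 1.0 * p.likes + 2.0 * p.shares  # engagement from other users
        if p.is_life_event:
            s += 5.0                         # life events bubble up to the top
        s -= 0.05 * p.age_hours              # older posts fade but never expire
        return s
    return sorted(posts, key=score, reverse=True)
```

Notice that nothing in a scoring function like this asks whether a post is accurate.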

It’s easy to think of an algorithm as an alternative to making messy and imprecise human judgments. But as Slate’s Will Oremus pointed out earlier this year, that’s a mistake:

The intelligence behind Facebook’s software is fundamentally human. Humans decide what data goes into it, what it can do with that data, and what they want to come out the other end. When the algorithm errs, humans are to blame. When it evolves, it’s because a bunch of humans read a bunch of spreadsheets, held a bunch of meetings, ran a bunch of tests, and decided to make it better.

Facebook’s editorial decisions are often bad

Email stolen from the account of Clinton campaign chair John Podesta led to a number of embarrassing stories about the Clinton campaign, some accurate, others not.

Facebook hasn’t told the public very much about how its algorithm works. But we know that one of the company’s top priorities for the news feed is “engagement.” The company tries to choose posts that people are likely to read, like, and share with their friends, which, it hopes, will induce people to return to the site over and over again.

This would be a reasonable way to do things if Facebook were just a way of finding your friends’ cutest baby pictures. But it’s more troubling as a way of choosing the news stories people read. Essentially, Facebook is using the same criteria as a supermarket tabloid: giving people the most attention-grabbing headlines without worrying about whether articles are fair, accurate, or important.

A couple of weeks ago, WikiLeaks released a 2008 email between Democratic operatives that discussed how to oversample Democratic-leaning voters like Hispanics and young people in Democrats’ internal polls. Some conservative bloggers interpreted this as evidence that public polls of the 2016 race were being rigged in Hillary Clinton’s favor, and the story rocketed around Facebook.

To experienced campaign operatives, this was obviously a nonstory. “Oversampling” is a standard technique for getting more information about particular groups. It’s not a nefarious way to skew a poll’s overall results. The email was from 2008, not 2016. And it was discussing private polls conducted by Democrats for their own use, so it would have been pointless for them to skew their own polling.

But Facebook’s algorithm doesn’t take into account whether a particular story is accurate or not. If it generates a lot of “engagement,” it gets moved to the top of the pile. And often, a sensational-but-wrong story will generate more engagement than a story that accurately explains that nothing nefarious actually happened.

A recent BuzzFeed article illustrates how catastrophically bad Facebook’s editorial judgment can be. According to BuzzFeed, a group of young people in Macedonia has created more than 140 pro-Trump news sites.

“Most of the posts on these sites are aggregated, or completely plagiarized, from fringe and right-wing sites in the US,” BuzzFeed reports. “The Macedonians see a story elsewhere, write a sensationalized headline, and quickly post it to their site.”

“Several teens and young men who run these sites told BuzzFeed News that they learned the best way to generate traffic is to get their politics stories to spread on Facebook,” according to BuzzFeed. “The best way to generate shares on Facebook is to publish sensationalist and often false content that caters to Trump supporters.”

“Yes, the info in the blogs is bad, false, and misleading,” one of these youngsters told BuzzFeed. “But the rationale is that ‘if it gets the people to click on it and engage, then use it.’”

Facebook creates perverse incentives for journalists


Those Macedonian teenagers are obviously an extreme case, and any platform as vast as Facebook is going to attract its share of opportunists trying to game the system. What’s really troubling is the incentive this creates for people at more reputable news organizations, because journalists trying to maximize traffic to their articles quickly learn the same lesson those Macedonian teenagers did: Sensationalism attracts clicks. Fairness and accuracy don’t.

Of course, established news organizations try to maintain minimum standards of fairness and accuracy to avoid damaging their reputations. But the huge demand for clickbait that Facebook generates is a constant temptation for online reporters. At the margin, Facebook and its huge audience push the journalistic profession to become more sensationalistic and less careful about the facts.

And even if most journalists at most news organizations resist these temptations, there are always going to be plenty of fringe blogs and Macedonian teenagers that are willing to pander to their right-wing (or left-wing, depending on the site) audiences without worrying too much about niceties like accuracy.

This dynamic helps to explain why the 2016 election has taken on such an apocalyptic tone. Partisans on each side have been fed a steady diet of stories about the outrages perpetrated by the other side’s presidential candidate. Some of these stories are accurate. Others are exaggerated or wholly made up. But less sophisticated readers have no good way to tell the difference, and in the aggregate they’ve provided a distorted view of the election, convincing millions of voters on each side that the other candidate represents an existential threat to the Republic.

To be clear, I think inaccurate, sensationalistic stories have been a bigger problem on the right than on the left during the 2016 general election. That view is backed up by a recent BuzzFeed investigation of a new breed of hyper-partisan Facebook pages that have become extremely popular during the 2016 election.

The investigation found that 12 percent of the stories on right-wing pages were “mostly false,” compared with 5 percent of stories on left-wing pages. Another 25 percent of right-wing stories and 14 percent of left-wing stories mixed true and false content, leaving just 47 percent of conservative articles and 56 percent of liberal articles rated “mostly true.”

But even if you’re not persuaded that the problem is worse on the right than the left, it doesn’t actually matter very much. Having fake news equally distributed at both ends of the political spectrum — inflaming Democrats against Republican politicians at the same time as it inflames Republicans against Democratic politicians — would still be a problem for our democracy. Our system is based on compromise, and compromise becomes more difficult if partisans on each side have their heads stuffed with false conspiracy theories about the crimes committed by the other side.

Facebook should take its editorial responsibilities seriously

Facebook COO Sheryl Sandberg.

There are a lot of specific things Facebook could do to improve the average quality of the stories its readers see. But Facebook’s first step is to admit that it is, in fact, a media company, that the design of its news feed inherently involves making editorial decisions, and that it has a responsibility to make those decisions well.

Facebook took a small step in this direction in August when it announced that it would begin penalizing stories with “clickbait” headlines. But the announcement that it had built a system that “identifies phrases that are commonly used in clickbait headlines” suggests that Facebook is thinking about this issue very narrowly. Articles with clickbait headlines are annoying, but they ultimately do far less harm than articles with straightforward headlines but inaccurate information in the story itself.
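
To see how narrow that approach is, consider a toy version of phrase-based clickbait detection. The phrase list and the penalty below are invented for illustration and are not Facebook’s actual system.

```python
# Hypothetical phrase list; a real system would be learned from data.
CLICKBAIT_PHRASES = [
    "you won't believe",
    "what happened next",
    "this one weird trick",
    "will blow your mind",
]

def clickbait_score(headline):
    """Count how many known clickbait phrases appear in a headline."""
    h = headline.lower()
    return sum(phrase in h for phrase in CLICKBAIT_PHRASES)

def demote(base_score, headline):
    """Halve a post's ranking score for each clickbait phrase its headline matches."""
    return base_score * (0.5 ** clickbait_score(headline))
```

A calm, straightforward headline sitting on top of a completely fabricated story sails through a filter like this untouched.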

A big issue here is the way Facebook has staffed its editorial efforts. In a traditional news organization, experienced editorial staff occupy senior roles. Facebook, in contrast, has relegated the few editorial decisions it does make to junior staffers. For example, until earlier this year Facebook had a team of 15 to 18 independent contractors in charge of writing headlines for Facebook’s “trending news” box.

When Facebook faced accusations that these staffers were suppressing conservative stories, Facebook panicked and laid all of them off, running the trending stories box as an automated feature instead. But that hasn’t worked so well either, as fake news keeps popping up in the trending news box.

The problem here wasn’t that Facebook was employing human editors to evaluate stories and write headlines. The problem was that Facebook’s leadership didn’t treat this as an important part of Facebook’s operations.

If Facebook had an experienced, senior editorial team in place, there’s a lot it could do to steer users toward high-quality, deeply reported news stories and away from superficial, sensationalistic, or outright inaccurate ones.

As just one example, Facebook could choose a random sample of articles from popular online publications and send them to independent experts for a review of their depth and accuracy. If a publication’s articles generally receive high marks, other articles from the same publication could get a bonus in the Facebook algorithm, while low marks would push the publication’s posts toward the back of the line.
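
Here is a rough sketch of how those sampled reviews could feed back into ranking. The scoring scale, the 0.5x-to-1.5x range, and the function itself are hypothetical; the point is only to show the shape of the idea.

```python
from statistics import mean

def publication_multiplier(review_scores):
    """review_scores: independent expert ratings (0.0 to 1.0) of a random
    sample of a publication's articles. Returns a multiplier applied to
    every post from that publication in the news feed ranking."""
    if not review_scores:
        return 1.0                    # no reviews yet: neither bonus nor penalty
    return 0.5 + mean(review_scores)  # 0.5x for the worst outlets, 1.5x for the best

# Example: an outlet whose sampled articles average 0.9 gets a 1.4x boost,
# while one averaging 0.2 is pushed toward the back of the line at 0.7x.
adjusted_score = 100.0 * publication_multiplier([0.9, 0.85, 0.95])
```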

Facebook took a small step in this direction when it chose to open its Instant Articles program to “high quality brands” first, but a much more ambitious program will be needed to consistently provide users with high-quality, accurate news.

This would not only improve the user experience in the short run but also change the incentives of journalists and their editors. They would need to worry that rushing out a poorly researched, one-sided article could lead to a low quality score, which would harm the site’s Facebook traffic in the future.

Facebook could also rewrite clickbait headlines, replacing them with more neutral headlines that accurately describe the contents of an article. And when Facebook does display an article its editorial staff believes is inaccurate or misleading, it could automatically include a link to another article providing an opposing view.

There are two obvious objections to this kind of shift. One is that it could require hiring a small army of human editorial staff to administer. That’s a real concern, but it could be worth it if the average quality of the articles users read starts going up instead of down.

The other concern is about bias and censorship. Conservatives in particular are likely to worry about Facebook subtly tilting story selection in a liberal direction. There are some safeguards Facebook can take here, like ensuring that conservatives are well represented among the people making editorial decisions.

One way to help address these concerns is transparency. Facebook could give users a lot more information about why the news feed algorithm chose the particular stories it did. And if it rates articles and publications for accuracy, it could make those scores public to demonstrate that they aren’t systematically tilted toward any particular ideology.

Another way is to provide users with an opt-out option. Most users would likely appreciate having Facebook steer them toward high-quality articles. But a minority won’t trust Facebook and won’t want to be steered toward articles Facebook considers high quality. An easy solution would be to let users opt out of the quality filter with a single click.

The larger point is that more than a billion people now rely on Facebook as a major source of information about the world, and right now Facebook is serving them poorly. It needs to embrace its status as a major media company and find ways to improve the average quality of the news stories it recommends to its users.

Facebook responds

Facebook sent me the following statement about its approach to misinformation on the site:

We value authentic communication, and hear consistently from those who use Facebook that they prefer not to see misinformation. In News Feed we use various signals based on community feedback to determine which posts are likely to contain inaccurate information, and reduce their distribution. In Trending we look at a variety of signals to help make sure the topics being shown are reflective of real-world events, and take additional steps to prevent false or misleading content from appearing.

Despite these efforts we understand there's so much more we need to do, and that is why it's important that we keep improving our ability to detect misinformation. We're committed to continuing to work on this issue and improve the experiences on our platform.


