Why This Addict Never Wants to See Another Poll Again

Why you should care

Because nothing in this life is certain, friends. 

Obsession took hold. Every few hours, the map updated, and I scoured it for clues. I rattled off the stats to friends; I informed my boss when the margins got too close. At his best, Donald J. Trump stood a 50.1 percent chance of becoming president of the U.S., according to my trusted advisers at FiveThirtyEight. Those odds held for just one day, after the Democratic National Convention.

Election night blasted me in the face, as it did many other Americans who had taken comfort and a sense of control from the work of FiveThirtyEight’s editor-in-chief, stats prodigy Nate Silver. His site had outlined a range of possibilities, but none so consistently as Hillary Clinton’s triumph. When Trump won the election, in the early hours of November 9, I checked the site again; its final forecast still gave Clinton a 71.4 percent chance of winning. I felt tricked. For months the numbers had provided me wonky reassurance, the best kind, and now I felt betrayed. When a colleague said she never wanted to see another tweet about polls again, I, the onetime polling fanatic, nodded in agreement.

Of course, the blame does not lie solely with FiveThirtyEight, the most famous of the poll aggregators, or with Silver, who did not respond to a request for comment. With the faithfulness of a puppy, I’d also checked, compulsively, on CNN, ABC, Fox and the Washington Post. It turns out that all of them suffer from the same basic disease, one summed up in a decades-old coding mantra: garbage in, garbage out. These aggregators are only as good as the underlying data, and they all rest on assumptions about turnout and sentiment that are fundamentally difficult to estimate. “The best a model can do is take the polls that are incorrect and do the best they can,” says Ben Zauzmer, a top Oscars forecaster. “The mistake people make is thinking we’re a lot more certain … than we are,” adds Milo Beckman, who has written for FiveThirtyEight and who created a site that generated randomized possible election results.
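To see why averaging can’t rescue bad inputs, here’s a toy sketch in Python. It is my own illustration with made-up numbers, emphatically not FiveThirtyEight’s model: if every poll shares a common error, say from mis-guessing turnout, piling on more polls shrinks the random noise but leaves the shared bias untouched.

```python
# Toy demonstration of "garbage in, garbage out" for poll averaging.
# All numbers are illustrative assumptions, not real polling data.
import random

random.seed(0)

TRUE_MARGIN = -2.0   # assumed true result, in points (negative = Clinton trails)
COMMON_BIAS = 4.0    # assumed shared error: every poll leans Clinton +4
POLL_NOISE = 3.0     # assumed per-poll random sampling error (std dev)

for num_polls in (1, 5, 25, 100):
    polls = [TRUE_MARGIN + COMMON_BIAS + random.gauss(0, POLL_NOISE)
             for _ in range(num_polls)]
    avg = sum(polls) / len(polls)
    print(f"{num_polls:>3} polls -> average margin {avg:+.1f} "
          f"(truth {TRUE_MARGIN:+.1f})")

# With 100 polls the random noise is nearly gone, yet the average still
# sits about 4 points from the truth: garbage in, garbage out.
```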

Polls promise to deliver scientific conclusions, with a margin of error. But we’re likely underestimating those margins of error, says Joshua Clinton, a Vanderbilt professor of political science and a member of the American Association for Public Opinion Research task force evaluating poll performance. For example, instead of being three points off, plus or minus, the polls may really be close to 10 points off “depending on the amount of uncertainty we have about nonresponse bias and the difficulty we have in identifying likely voters,” Clinton says.
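To put rough numbers on that, here’s a back-of-the-envelope sketch. The 4.5-point bias spread below is my own illustrative assumption, not a figure from Clinton, but it shows how a reported plus-or-minus 3 can quietly become nearly plus-or-minus 10.

```python
# The published margin of error covers only random sampling error.
# Adding an assumed allowance for nonresponse/turnout bias roughly
# triples the plausible error.
import math

n = 1000                                        # typical sample size
sampling_sd = math.sqrt(0.5 * 0.5 / n) * 100    # worst case p = 0.5, in points
reported_moe = 1.96 * sampling_sd               # the +/- 3 you see reported

bias_sd = 4.5            # ASSUMED std dev of nonresponse/turnout bias, in points
total_sd = math.sqrt(sampling_sd**2 + bias_sd**2)
realistic_moe = 1.96 * total_sd

print(f"reported margin of error: +/- {reported_moe:.1f} points")
print(f"with assumed bias term:   +/- {realistic_moe:.1f} points")
```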

Here are a few of the myriad ways polls can go wrong, as the experts explained to me. First, there’s poll herding: pollsters adjusting their findings to fall in line with other polls. Second, pollsters have few ways of judging how representative their samples are; in a country where turnout hovers around 50 to 55 percent, asking a random person whether he or she will vote on Election Day is unlikely to produce an accurate answer, which means the data can be skewed. Judging the quality of your sample is itself expensive, yet margins of error are often calculated from sample size alone. And many of the best surveys have a response rate of a meager 10 percent, while most response rates are in the single digits.
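Here’s a small simulation of that last complaint. The response rates are invented, chosen only to mimic a roughly 10 percent overall response rate, but they show how a poll can miss by several points while its sample-size-only margin of error stays tiny.

```python
# If one candidate's supporters are slightly more willing to answer the
# phone, the poll is biased no matter how many people you call, and the
# sample-size-only margin of error never notices.
import math
import random

random.seed(0)

TRUE_SUPPORT = 0.48    # assumed true share supporting candidate A
RESPONSE_A = 0.11      # A's supporters answer 11% of calls (assumption)
RESPONSE_B = 0.09      # everyone else answers 9% of calls (assumption)

calls = 20_000
answered_a = answered_b = 0
for _ in range(calls):
    if random.random() < TRUE_SUPPORT:      # reached an A supporter
        if random.random() < RESPONSE_A:
            answered_a += 1
    else:                                   # reached everyone else
        if random.random() < RESPONSE_B:
            answered_b += 1

n = answered_a + answered_b
p = answered_a / n
moe = 1.96 * math.sqrt(p * (1 - p) / n) * 100   # sample-size-only MoE

print(f"true support: {TRUE_SUPPORT:.1%}, poll says: {p:.1%}")
print(f"reported margin of error: +/- {moe:.1f} points on n={n}")
# The poll lands several points high, well outside its own stated margin,
# because differential nonresponse never shows up in the formula.
```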

This isn’t the first time the polls have been off, I know. And in a lot of ways, I should have known better than to place my faith in big data and sophisticated algorithms. Just four years ago, Gallup suggested that incumbent President Obama trailed Gov. Mitt Romney in the polls; it was wrong. Three years and, I suspect, much hand-wringing later, the polling company told Politico it wasn’t planning any polls for this cycle.

So how do we make polls better? It will be costly. For starters, an accurate poll with small margins of error requires more than simply calling folks on the phone. You need analysts who are “statistically savvy and knowledgeable about politics to analyze the results,” Clinton says. All this, he adds, would cost $100,000 to $200,000 per poll. And given the struggles media organizations face these days, he says, “it’s hard to imagine that sort of polling ever materializing.”

Of course, that’s just a prediction. He really doesn’t know.
