Why the Election Polls Missed the Mark


This Dec. 11, 2012 photo shows a 1947 survey for the Gallup Poll at the University of Iowa library in Iowa City, Iowa. (AP Photo/Ryan J. Foley)

In the days following an election in which his organization's polls proved to be inaccurate, Gallup Editor in Chief Frank Newport published a blog post warning of "a collective mess."

The result of the election--President Obama's 4-point victory--was not the only indication that Gallup's polls were biased in favor of Republican Mitt Romney. Websites that average and aggregate polls showed, on balance, that Obama was in a stronger position than Gallup's polls did, which allowed some observers to paint the longtime pollster as an outlier, both before and after the votes were tallied.


Newport, in his blog post three days after the election, saw these aggregators as a threat--not only to the Gallup Organization but to the entire for-profit (and nonprofit) public-opinion industry. "It's not easy nor cheap to conduct traditional random sample polls," Newport wrote. "It's much easier, cheaper, and mostly less risky to focus on aggregating and analyzing others' polls. Organizations that traditionally go to the expense and effort to conduct individual polls could, in theory, decide to put their efforts into aggregation and statistical analyses of other people's polls in the next election cycle and cut out their own polling. If many organizations make this seemingly rational decision, we could quickly be in a situation in which there are fewer and fewer polls left to aggregate and put into statistical models."

Newport's hypothetical--that because aggregators who averaged polls, or used them to model the election, predicted the results more accurately than his traditional phone polling, news organizations might abandon polling for aggregation altogether--sounds a little paranoid on its face. But it underscores the effects that increasing costs and decreasing budgets are having on media organizations that cover politics and typically pay for this kind of survey work.

It also reopens a long-standing debate over poll aggregation. Some pollsters and media organizations think the practice of averaging polls that survey different universes or are conducted using different methodologies is bunk. They warn that considering cheaper, less rigorous polling on the same plane as live-caller polls that randomly contact landline and cell-phone respondents allows the averages to be improperly influenced by less accurate surveys. And, ultimately, while the poll averages and poll-based forecasts accurately picked the winner, they underestimated the margin of Obama's victory by a significant margin.


But others, including the poll aggregators themselves, maintain that averaging polls, or using poll results as part of a predictive model, produces a more accurate forecast than considering any one individual poll. Before an election, it's difficult to predict which polls will be more accurate and which polls will miss the mark. Averaging results together also provides important context to media and consumers of political information when every new poll is released, proponents argue.

Ultimately, this is a debate that also goes beyond the statistical questions about averaging polls. It touches on the nature of horse-race journalism and the way in which we cover campaigns.

The First Number Crunchers

Real Clear Politics began the practice of averaging polls before the 2002 midterm elections. RCP was joined by Pollster--which is now part of The Huffington Post--four years later. "Pollster started in 2006, and we were really building on what Real Clear Politics did," founding Coeditor Mark Blumenthal said. The statistician Nate Silver began a similar practice in 2008, and his site, FiveThirtyEight, was acquired by The New York Times shortly thereafter. More recently, the left-leaning website Talking Points Memo started its PollTracker website before the 2012 election.

Each of these organizations differs in its approach. Real Clear Politics does a more straightforward averaging of the most recent polls. TPM's PollTracker is an aggregation involving regression analysis that uses the most recent polls to project a trajectory for the race. FiveThirtyEight and HuffPost Pollster use polls, adjusting them for house effects--the degree to which a survey house's polls lean consistently in one direction or another. FiveThirtyEight also uses non-survey data to project the election results.
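For a concrete sense of the simplest of these approaches, here is a minimal sketch of a Real Clear Politics-style average: keep only the most recent poll from each survey house and average the margins. The poll numbers below are invented for illustration, not the 2012 data.

```python
# Sketch of a simple recent-poll average (RCP style), with invented numbers.
# For each survey house, keep only its newest poll, then average the margins.

# (pollster, days_before_election, obama_pct, romney_pct) -- hypothetical
polls = [
    ("Pollster A", 1, 49, 48),
    ("Pollster B", 2, 50, 47),
    ("Pollster A", 5, 48, 49),  # older poll from the same house, dropped
    ("Pollster C", 3, 48, 48),
]

latest = {}
for name, days_out, dem, rep in sorted(polls, key=lambda p: p[1]):
    latest.setdefault(name, (dem, rep))  # first hit per house = its newest poll

margins = [dem - rep for dem, rep in latest.values()]
average_margin = sum(margins) / len(margins)
print(f"Average margin: Obama +{average_margin:.1f}")  # Obama +1.3
```

Trend-line aggregators like PollTracker replace the flat average with a regression fit over time, so newer polls pull the estimate more than stale ones.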


All four of these outlets underestimated Obama's margin of victory. Both Real Clear Politics and PollTracker had Obama ahead by only 0.7 percentage points in their final measurements. HuffPost Pollster had Obama leading by 1.5 points, while FiveThirtyEight was closest, showing Obama 2.5 points ahead of Romney in the last estimate. The aggregators that came closest to Obama's overall winning margin were the ones that attempted to account for pollsters' house effects.

"The polls, on balance, understated President Obama's support," said John McIntyre, cofounder of Real Clear Politics. "Our product is only as good as the quality and the quantity of the polls that we use."

These sorts of house effects were why HuffPost Pollster moved to a model that attempted to control for them, but their average still underestimated Obama's margin of victory by a sizable margin. "One of the main reasons why we moved to using a more complex model that controlled for house effects was precisely to prevent that phenomenon from happening," Blumenthal said. "Our goal is to minimize that to next to zero."
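The kind of correction Blumenthal describes can be sketched roughly as follows: estimate each house's consistent lean as its deviation from an equally weighted baseline, then subtract that lean from its polls before averaging. This is a simplification of what the aggregators actually do (their models also fit time trends), and all numbers are invented, but it shows why a prolific pollster with a lean can drag a raw average while a house-adjusted one resists the pull.

```python
# Rough sketch of a house-effect adjustment, with invented numbers.
# "House A" polls often and leans toward one candidate; a raw average
# is dragged toward it, while subtracting each house's estimated lean
# recovers a consensus that weights the houses more evenly.
from statistics import mean

# (pollster, obama_margin) -- hypothetical polls, not real 2012 data
polls = [
    ("House A", 0), ("House A", 0), ("House A", 0), ("House A", 1),
    ("House B", 4),
    ("House C", 3),
]

houses = sorted({name for name, _ in polls})
house_means = {h: mean(m for n, m in polls if n == h) for h in houses}
baseline = mean(house_means.values())  # each house weighted equally
effects = {h: house_means[h] - baseline for h in houses}  # each house's lean

raw = mean(m for _, m in polls)
adjusted = mean(m - effects[n] for n, m in polls)
print(f"Raw average: {raw:.2f}, house-adjusted: {adjusted:.2f}")
```

Here the raw average is about Obama +1.33, pulled down by House A's four low-margin polls, while the adjusted average comes out near +2.42, close to what the three houses would show if each counted equally.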
