MYSTERY POLLSTER

The NH Poll Glitch: Are We Getting Anywhere?

With the unexpectedly long and competitive primary season winding down, it is a good time to look back on the now-infamous polling miscue at its start, in New Hampshire. What have we learned?

Although many, including yours truly, have offered speculation, we still lack a definitive answer. In January, the American Association for Public Opinion Research (AAPOR) named a "special committee" of pollsters and academics to examine the data from New Hampshire and subsequent primaries to "see if they help explain what occurred in New Hampshire."


The committee had scheduled a briefing at AAPOR's annual conference last week in hopes of presenting initial findings, but the chair of the committee, University of Michigan professor Michael Traugott, had little to report. As Politico first reported last week, the work of the committee "has been stalled by the hesitancy of pollsters to submit their methods and practices for peer review."

Reading this story, a disappointed friend e-mailed: "Do you think your efforts at transparency are getting anywhere?"

The short answer is yes.


One reason is that AAPOR's request -- put to 26 organizations that publicly released surveys in New Hampshire, South Carolina and California -- was extraordinary in scope and detail.

The request letter, shared with me by outgoing AAPOR president Nancy Mathiowetz, spanned six pages and delineated more than 22 categories of information or data that the committee hopes to examine for each poll.

It began by asking for the eight items listed in AAPOR's Code of Professional Ethics as appropriate for "minimal" disclosure, including information -- such as the survey response rate and sampling method -- that many pollsters routinely fail to disclose.

Committee members, however, went far beyond the minimum industry disclosure guidelines. They also requested the actual respondent-level data for those interviewed and for those screened out as unlikely to vote. They asked for call records (showing when and how often pollsters dialed each sampled number) and data on the demographic characteristics of interviewers that could be matched to respondents. They asked for complete documentation of calling and sampling procedures, how pollsters identified "likely voters," the weights applied to the data and the methods used, if any, to allocate undecided voters. Finally, they asked pollsters to make all this data and information available to scholars through the Roper Center Public Opinion Archives.
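For readers unfamiliar with those last two items, a minimal sketch may help. The following Python fragment -- with made-up numbers and a proportional-allocation rule chosen purely for illustration, not taken from any pollster's actual practice -- shows what applying weights and allocating undecided voters can look like:

    # Hypothetical illustration only: a minimal sketch of what "weights"
    # and "allocating undecided voters" mean, not any firm's real method.

    respondents = [
        # Each record: a stated preference and a post-stratification weight
        # (invented numbers; real weights come from demographic adjustment).
        {"choice": "Clinton",   "weight": 1.2},
        {"choice": "Obama",     "weight": 0.9},
        {"choice": "Obama",     "weight": 1.1},
        {"choice": "Clinton",   "weight": 0.8},
        {"choice": "Undecided", "weight": 1.0},
    ]

    def weighted_shares(records):
        """Sum weights by choice and convert to percentage shares."""
        totals = {}
        for r in records:
            totals[r["choice"]] = totals.get(r["choice"], 0.0) + r["weight"]
        grand = sum(totals.values())
        return {c: 100.0 * w / grand for c, w in totals.items()}

    def allocate_undecided(shares):
        """One common convention: split the undecided share among the
        decided candidates in proportion to their current standing."""
        undecided = shares.pop("Undecided", 0.0)
        decided_total = sum(shares.values())
        return {c: s + undecided * s / decided_total for c, s in shares.items()}

    print(allocate_undecided(weighted_shares(respondents)))

Run as written, this prints the weighted shares after allocation. Real procedures vary from firm to firm -- which is exactly why the committee asked each pollster to document its own.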


Given the ongoing crush of primary polling since January, we should not be surprised by the slow pace of pollster response so far. In fact, Mathiowetz is optimistic that we will see "an outpouring of information" once the primaries wrap up in early June. Although just five organizations have submitted complete data, only one (that she would not name) has explicitly declined to cooperate.

"Everyone [else]," she said, "has written me and said, 'We absolutely want to comply, but we're busy running off to Pennsylvania, Ohio, Indiana" and the other primary states.

Things were different following the "Dewey Defeats Truman" polling debacle of 1948, when a similar investigation concluded its work in just five weeks. The pollsters, according to the report by the independent Social Science Research Council (SSRC), "promptly agreed to cooperate, opened their files and made their staffs available to the committee for interrogation and discussion."

Of course, the pollsters were finished conducting their political surveys at that point, and the 1948 snafu looked like "the end of the world" to the nascent polling industry. Clients were calling to cancel surveys, and some saw their companies "going down the tubes."

The reaction this year has been a bit different. Traffic to the poll-tracking charts of my site, Pollster.com, soared in the weeks and months following the New Hampshire debacle. Readers who e-mailed me in early January were as likely to be concerned about why we had so few polls in Nevada as about what had gone wrong in New Hampshire.

Pre-election polling is certainly a more "competitive environment" than it was in 1948, as Gallup's Frank Newport said last week in explaining why his own company had not yet complied with AAPOR's request. He noted a "yin and yang" conflict between protecting proprietary interests and facilitating the pursuit of survey science as the reason pollsters have historically been reluctant to share such data. Gallup has not yet found "a smoking gun" based on its own investigation, Newport said, and "we want to understand ourselves what happened in New Hampshire" before sharing the data with others.

Let's hope all involved err on the side of more disclosure, since, as Mathiowetz's request put it, polling is a "profession that benefits from a collective understanding of the sources of errors that impact our estimates."

And if not, the final report of the special committee will have one more important benefit for people trying to sort out which survey organizations to trust: It will report in detail which organizations responded to requests and which did not.

Interests disclosed: I served on AAPOR's Executive Council for the last two years and ended my term of service last week. I recused myself from the work of the special committee.
