This is a column many of my pollster colleagues will not like. It involves a subject, response rates, that many wish would go away, and a protagonist, Arianna Huffington, whom many pollsters consider a mortal foe.
In the wake of the polling miscues in New Hampshire last year, Huffington renewed her personal crusade against polling, and then a few weeks later added a new twist. Henceforth, polling results would be reported on the Huffington Post alongside candidate horoscopes, betting lines and primary state weather forecasts in a feature they dubbed "Huffpollstrology."
The point, in case it's not already obvious, was to put polling into what Huffington describes as "the right context." They would continue to report polls, she promised, but "as lightweight diversions."
Pollsters already viewed Huffington warily. As one pollster put it to me after I endorsed an earlier Huffington Post polling project, she "publicly advocated hanging up on pollsters to skew the results. How is that helping our industry?"
But in August, the Huffpollstrology feature started doing something more interesting, asking about a dozen national pollsters to report the "response and refusal rates" for their national surveys.
Some of the good citizens of the polling world disclosed response rates ranging from 10 to 28 percent, with most falling in the mid-teens. Other pollsters found ways to avoid answering Huffington's question -- either creatively or with a more straightforward approach. "We don't give out that information," said a representative of Rasmussen Reports.
Others seemed to get their lines of communication crossed. A representative of the research firm IPSOS directed the Huffington Post to their polling partners at the time, the Associated Press: "You'd have to ask them about releasing the response and refusal rates." An AP representative subsequently pointed back at IPSOS: "We don't have them here from IPSOS. It takes a little while for them to get those to us." If the Huffington Post ever received or published the AP/IPSOS response rates, I have not found them.
But perhaps the most memorable non-response came from an unnamed, undoubtedly beleaguered representative of the Tarrance Group (on behalf of the GWU/Battleground poll, also conducted by Lake Research Partners). "It would take me about a half an hour of phone calls to dig all that stuff up, and I don't have the time to do that," he said in late September, promising to e-mail the numbers soon.
But alas, it was not to be. "Write what you need to write," the Tarrance staffer responded a few days later, "but it's not going to be. We're four weeks out from a campaign and quite frankly this is not anywhere near my priority list. Okay?"
Not surprisingly, Arianna Huffington is pleased with the information her site gathered. "Plummeting response rates have been the dirty little secret of the polling industry," she told me via e-mail, adding that a drop below 20 percent response makes "the core polling principle of 'equal probability of selection' something of a joke."
I am not so sure.
First, it is not clear that declining response rates have produced significantly skewed results. Stanford professor Jon Krosnick points out that while his study of response rates found a significant decline between 1996 and 2003, "the accuracy of such surveys during that period declined only slightly." Even the Huffington Post's Seth Colter Walls found that "most of the pollsters had the election mostly right" in the fall of 2008.
Second, when it comes to response rates, higher is not necessarily better. Krosnick notes that recent research has shown that "some methods of increasing response rates (e.g., offering financial incentives) can sometimes increase a survey's response rate while at the same time lowering its accuracy (by attracting groups of people who were already over-represented in the sample)."
Finally, as professor Robert Groves put it to me in an interview last year before his appointment as U.S. Census director, the response rate alone "isn't that informative." What we really need to know is whether the respondents to the survey differ significantly from those who refuse or whom the pollster cannot reach. When they do, results on some questions can be in error.
What can we do? "The only way to get a purchase on the answer," as Groves put it, is to collect "auxiliary" data on the sample, "information you know about both respondents and non-respondents." While Groves did not provide specific examples, such data on an election survey could come from registered voter lists. Do registered Democrats respond more readily than Republicans?
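To make Groves's point concrete, here is a minimal sketch of that auxiliary-data check, using a variable a voter list would supply for everyone sampled: party registration. All of the numbers below are hypothetical, chosen purely for illustration, and the `nonresponse_check` helper is my own invention, not anything the pollsters or the Huffington Post actually ran.

```python
# A sketch of the "auxiliary data" check Groves describes: compare a trait
# known for EVERYONE sampled (here, party registration from a voter list)
# between respondents and nonrespondents. Hypothetical numbers throughout.

def nonresponse_check(respondents, nonrespondents, trait):
    """Share holding `trait` in each group, plus a rough bias estimate."""
    r_share = sum(1 for v in respondents if v == trait) / len(respondents)
    m_share = sum(1 for v in nonrespondents if v == trait) / len(nonrespondents)
    n = len(respondents) + len(nonrespondents)
    # Standard nonresponse-bias approximation:
    #   bias ~= (nonrespondent fraction) * (respondent share - nonrespondent share)
    bias = (len(nonrespondents) / n) * (r_share - m_share)
    return r_share, m_share, bias

# Hypothetical sample of 1,000 registered voters, 10 percent response rate
# (within the range the pollsters disclosed). Suppose registered Democrats
# answer the phone a bit more readily than Republicans do:
respondents = ["D"] * 55 + ["R"] * 45        # 100 completed interviews
nonrespondents = ["D"] * 450 + ["R"] * 450   # 900 refusals / no-contacts

r, m, bias = nonresponse_check(respondents, nonrespondents, "D")
print(f"Democratic share: {r:.2f} among respondents, {m:.2f} among nonrespondents")
print(f"Estimated bias in the Democratic share: {bias:+.3f}")
```

In this made-up example the completed interviews run 55 percent Democratic while the full sample is evenly split, so the raw results overstate the Democratic share by roughly four and a half points. The response rate alone would never reveal that; the comparison against the voter file does.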
And that brings me back to the Huffpollstrology project. I believe that polls done well are usually an accurate tool for measuring public opinion, but they are not infallible. Huffington is right when she says we should not treat them "like they were just brought down from the mountaintop by Moses."
We can best use polling data by understanding its limitations, and the best way for us to become better-informed consumers of polling data is to ask pollsters to tell us more about their data and methods. That Huffington Post did just that last year is a good thing.
If nothing else, the embarrassing dog-ate-the-data evasions suggest an industry with something to hide. Revealing more about our response rates and the procedures we use to collect the data would help scholars provide advice about how to make surveys more accurate and efficient. At the very least, consumers would have more tools available to assess new survey data.
Who knows? Arianna Huffington could end up making polls better.