The end of each election usually produces at least one story about a polling triumph, fiasco or controversy. This year's example involves the successful forecasts produced by automated telephone surveys, the kind that ask voters to answer questions by pressing touch-tone keys on their telephones, particularly in the New Jersey governor's race.
Pundits didn't just argue over results this year: They made claims about methodology. On election night, Mickey Kaus declared the automated polls a big "winner" and explained their importance: "Rasmussen's [automated] polls tend to show the highest level of opposition to health care reform. If they accurately predict who will turn out to vote, they may signify big potential trouble for Democrats in lower-turnout mid-term elections."
Karl Rove wasted no time underlining the point. "Automated polling firms like Rasmussen... more accurately represent the electorate in off-year elections" because they draw the motivated voters who turn out in off years, he argues in a memo posted to his Web site. "Democrats who face re-election next year should start worrying -- automated pollsters' results showing a majority of Americans opposed to health care reform may be the most prescient look at what lies in store for next year's midterms."
In some ways, this year's twist should come as no surprise. Presidents and partisans have used poll results "to support their positions in an attempt to influence others" since the 1960s, as Columbia University political scientist Robert Shapiro put it when I e-mailed him. The use of internal polling by political leaders like Lyndon Johnson to influence others, he added, helped news organizations "see the virtue of doing their own independent polling."
But since this latest argument concerns a relatively narrow issue of survey methodology, it is worth a closer look.
Kaus and Rove are really raising two different questions: Were automated polls (sometimes known as IVR, for "interactive voice response") more accurate in forecasting the turnout in the 2009 elections? And does any such advantage, if real, translate into a more accurate measurement of public opinion on health care reform?
The first issue is easier. The polls in New Jersey showed a consistent and statistically significant gap throughout 2009: Republican Chris Christie ran stronger against Democrat Jon Corzine on automated than on live-interviewer public polls, and the margins predicted by the final IVR polls were more accurate, coming closer to the actual outcome than those produced by more traditional methods.
In Virginia, however, the difference was neither large nor consistent. And while a single automated survey from Public Policy Polling was closer to the mark in the referendum on gay marriage in Maine, its survey in New York's 23rd Congressional District was way off.
The results this month reaffirmed what has been evident for much of the last decade: When it comes to predicting the outcome of an election, automated surveys are as accurate as those that use live interviewers. But the evidence that they were better on this score this year is limited to New Jersey. My sense is that the New Jersey advantage had more to do with more closely simulating the secret ballot (and thus more accurately measuring voter preferences) than with any inherently better representation of the electorate. And as Christie pollster Adam Geller argues, other methodological issues may also explain the gap.
And before we turn to health care, let's remember that public polls serve more purposes than simply predicting the outcome. A two-minute automated survey can only tell us so much about issues beyond the horse race, and the New Jersey results include inconsistencies within subgroups that should give us pause. No one methodology is best for all purposes.
As it happens, that is exactly the message I got from automated pollsters themselves when I asked about polling on health care reform.
Scott Rasmussen, for example, is understandably pleased that Rove praised the accuracy of his polling, but differs with Rove's conclusion that different methodologies produce different results. "All polls," he argues, "show a plurality or majority opposition to the health care plan working its way through Congress."
True, Rasmussen's question typically shows a few points more opposition to "the health care reform plan" than similar questions asked by other pollsters. The difference, however, may result from factors unrelated to the automated mode, such as question wording or the fact that Rasmussen is one of the few pollsters to report results for "likely voters."
Public Policy Polling's Tom Jensen agrees. While automated polls were more accurate this year and deserve to be taken seriously, he says, "that does not mean IVR is superior to live interviewers on every kind of question that ever gets polled."
SurveyUSA CEO Jay Leve points out that although self-administered surveys tend to produce higher reports of sensitive behavior (such as drinking or sexual activity), no one type of survey is "inherently superior.... I don't think you can argue, on an issue as complicated as health care, that mode trumps."
(You can read full interviews with Rasmussen, Jensen and Leve at Pollster.com.)
I have written previously in this space that the complex attitudes at work on the health care reform issue do not reduce easily to a single question or methodology. As Robert Shapiro points out, the great value of independent polling is our ability "to compare different polls with different question wording to get a sense of the range of public opinion. In this way, too, over time we can get a good sense of changes in opinion."
So the lesson: Don't trust one question, one pollster or one type of poll. Trust many.