MYSTERY POLLSTER

So How Do You Poll Off-Year Elections?

Low-Turnout Races Require A Different Approach Than Those In Regular Election Cycles

Updated at 9:19 a.m. on June 8.

OK, so if you're so smart, how should a pollster measure vote preference in a low-turnout, off-year election many months before voters engage in the campaign?

 

I got that question, in essence, from a reader in response to last week's column on the way pollsters measure incumbent vulnerability in primaries. The same reader also took strong exception to my use of the word "misleading" in describing the May 2006 numbers reported by Quinnipiac University in an early poll of Connecticut voters that sampled all self-identified Democrats even though a much smaller segment of the population ultimately voted in the Senate primary that August.

I will grant that I should have chosen a less loaded word than "misleading," since some may have heard an insinuation about the pollster's motives or the accuracy of the data they reported. For the record, I do not believe that Quinnipiac's pollsters intended to mislead anyone. The larger point I was trying to make is that we mislead ourselves -- and by "we" I mean pollsters, journalists, campaigns and political junkies -- whenever we treat samples of a third to half of adults in a state as a meaningful measure of the preferences of a much smaller population of "likely primary voters."

The record should also show that once the Quinnipiac Poll shifted to reporting vote preferences among a narrower group of "likely" Democratic primary voters, it showed Ned Lamont overtaking Sen. Joe Lieberman over the summer. Its final poll showed Lamont leading by six percentage points; he won by four.

 

But the reader went on to ask more pointed questions:

How would you have captured the "new voters" turned on by the Iraq issue? How do you know they would have turned out eight months out for somebody they never heard of? And how would you have identified likely voters in a first ever August primary when a lot of people are on vacation? Bottom line -- it is not as easy as you make it sound.

No, sampling a potential electorate is not easy, but it was my job to do just that, literally hundreds of times, in my previous life as a campaign pollster. Drawing on my own experience and on conversations this week with some prominent campaign pollsters, here are some suggestions:

Use Registered Voter Lists. I would start by urging media pollsters to follow the practice of the vast majority of campaign pollsters when confronted with low-turnout, off-year primaries: Make greater use of samples drawn from the official lists of registered voters.

 

Campaign pollsters rely on lists for two big reasons. First, to paraphrase Democratic pollster Mark Mellman, lists let us select likely voters based not on what people say but on what they do. "People are only very mediocre predictors of their own future behavior," Mellman says. Glen Bolger, partner in the Republican firm Public Opinion Strategies, agrees. "The best predictor of whether someone will vote in a primary," he says, "is if they have done so in the past."
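As a minimal sketch of that behavioral screen, assume a voter file carrying party registration and past-primary flags; the column names and records below are invented for illustration, not drawn from any real state file:

```python
import pandas as pd

# Invented voter-file extract; real files vary widely by state.
voter_file = pd.DataFrame({
    "voter_id":  [101, 102, 103, 104, 105],
    "party":     ["DEM", "DEM", "DEM", "REP", "DEM"],
    "voted_p04": [True, False, True, True, False],   # voted in 2004 primary
    "voted_p02": [True, False, False, True, False],  # voted in 2002 primary
})

# The behavioral screen Mellman and Bolger describe: registered
# Democrats who actually voted in at least one recent primary.
dems = voter_file[voter_file["party"] == "DEM"]
likely_primary = dems[dems["voted_p04"] | dems["voted_p02"]]

print(likely_primary["voter_id"].tolist())  # -> [101, 103]
```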

Voter lists have another potential benefit: They're a rich source of additional data about sampled respondents that pollsters can use to identify and correct the statistical bias that may occur when sampled voters cannot be reached or do not respond.
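To make that concrete, here is a toy sketch of the simplest such correction: weighting completed interviews so their age mix matches the age mix of the full list sample, which the voter file itself supplies. All figures are invented:

```python
# Age shares on the full sample frame (known from the voter list)
# versus among completed interviews; all numbers are invented.
frame_share      = {"18-44": 0.35, "45-64": 0.40, "65+": 0.25}
respondent_share = {"18-44": 0.20, "45-64": 0.40, "65+": 0.40}

# Simple post-stratification: weight each respondent by the ratio of
# frame share to respondent share for his or her age group.
weights = {g: frame_share[g] / respondent_share[g] for g in frame_share}
print(weights)  # {'18-44': 1.75, '45-64': 1.0, '65+': 0.625}
```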

My suggestion will be seen as heresy by some survey researchers, since samples of registered voters inevitably miss voters who do not provide telephone numbers and those whose numbers cannot be matched from telephone directories. Registration-based list sampling is also problematic in states with poor lists or in elections that produce a surge of new registrants (like presidential primaries).

We should note that the 2006 Connecticut primary was a good example of both limitations. According to Amy Simon, the Democrat who used list samples to poll for Lamont, of the approximately 30,000 new voters who registered for the primary, only 12,000 were included in an update from the secretary of State provided a few weeks before the primary (and, of course, all were missed in her earlier benchmarks).

She also points out that vote-history records are spotty or unavailable for many Connecticut towns. According to Quinnipiac polling director Doug Schwartz, his team wanted to experiment with list samples during the 2006 campaign but opted against it because of concerns about list quality. Nonetheless, both the Lamont and Lieberman camps used list samples for their primary campaign polls.

Screen Out Obvious Non-Voters. Even without actual vote history, pollsters can still screen out "those who freely admit they have zero chance of voting," according to Republican pollster B.J. Martino, vice president of the Tarrance Group. True, many voters "who say they will vote won't," as Mellman puts it, "but very few of those who say they won't, will."

An analogous approach is to ask about past voting habits. While not perfect, this question, often asked by CBS News, is a good example: "Do you usually vote in Democratic primary elections, or in Republican elections, or don't you usually vote in the primaries?" That is a crude measure, obviously, but it would yield a more realistic probable electorate than simply reporting the preferences of all registered partisans.
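Here is a rough sketch of how those two screens might combine in an interview script; the response codes are my own assumptions, not any pollster's actual instrument:

```python
def passes_screen(chance_of_voting: str, usual_primary: str) -> bool:
    """Screen logic sketched from the column: drop only respondents who
    freely admit they will not vote, then keep usual primary voters."""
    if chance_of_voting == "zero":  # Martino's hard screen
        return False
    # CBS-style habit question: keep those who say they usually vote
    # in either party's primaries.
    return usual_primary in ("democratic", "republican")

print(passes_screen("probably", "democratic"))  # True: stays in sample
print(passes_screen("zero", "democratic"))      # False: screened out
print(passes_screen("probably", "neither"))     # False: not a usual voter
```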

Report A Range Of Numbers. The specific practices of campaign pollsters vary widely, but all those I spoke with share a similar philosophy in polls conducted well in advance of low-turnout primaries: Cast a wide net of voters who might conceivably vote, compare it to a very small subgroup of voters who are likely to vote no matter what, and then report both sets of numbers to your client. Releasing two or more sets of data might make the published report a bit more complicated, but it could also provide a more accurate reading.
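A minimal sketch of that two-number report, with invented interviews and a hypothetical core_lv flag standing in for whichever strict likely-voter definition a pollster prefers:

```python
import pandas as pd

# Invented interviews: a vote-preference column plus a flag marking a
# strict "will vote no matter what" core.
interviews = pd.DataFrame({
    "choice":  ["A", "B", "A", "B", "A", "A", "B", "A"],
    "core_lv": [True, True, False, False, True, False, True, False],
})

wide   = interviews["choice"].value_counts(normalize=True)
narrow = interviews.loc[interviews["core_lv"], "choice"].value_counts(normalize=True)

print("All potential voters:\n", wide)   # the wide net
print("Core likely voters:\n", narrow)   # the strict subgroup
```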

If All Else Fails, Don't Ask. I have nitpicked the efforts of The Washington Post's pollsters to limit the surveys they publish on tomorrow's Democratic primary in Virginia, but I have to respect their decision not to conduct a poll of their own. If they considered their budget or the available methodological tools inadequate to the task, good for them for opting to report nothing rather than numbers they could not stand behind.

Whether we like it or not, early "horse-race" vote-preference questions proliferate and affect the decisions of potential donors, supporters and, sometimes, the candidates themselves. Better to hold off on asking a "horse-race" question -- even if it means fewer mentions of your poll on cable news or the Internet -- than to ask one that might, inadvertently, mislead.
