"Have you done a column on Gallup/Rasmussen trackers vs. traditional polls, including Gallup, where the data is different?" I got that question from a valued reader recently and have been asked essentially the same question countless times over the last few weeks.
Why the sudden interest? It probably stems from the apparent disconnect between the two recent "traditional" polls from Newsweek and Los Angeles Times/Bloomberg that showed Barack Obama leading the presidential race by double-digit margins (see last week's column) and the two tracking polls, which have shown a closer contest.
The chart below shows how the Gallup Daily and Rasmussen Reports results compare to all other national polls when we plot a regression trend line through the available results. Up until a few weeks ago, both the Gallup Daily and Rasmussen surveys showed a consistently closer race than other national surveys.
The differences among the polls are interesting, but the "rolling average" aspect of their design is not a likely explanation.
Let's start at the beginning. Just what is a "tracker"? When conducting more traditional political surveys, pollsters call over a few successive days and report one set of results for the full sample. When conducting rolling-average tracking surveys, pollsters interview voters every night on an ongoing basis and report new results each day based on the last few nights of interviews combined.
Both Gallup and Rasmussen are now reporting daily results based on their last three nights of interviewing. So the results released on Thursday are based on interviews conducted Monday through Wednesday. On Friday, they throw out interviews from Monday and add interviews from Thursday. And so on.
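That rolling window can be sketched in a few lines of Python; the nightly margins below are invented for illustration, not actual poll results:

```python
from collections import deque

def rolling_average(nightly_margins, window=3):
    """Report a daily average over the most recent `window` nights."""
    recent = deque(maxlen=window)   # the oldest night falls off automatically
    reported = []
    for margin in nightly_margins:
        recent.append(margin)
        if len(recent) == window:   # the first report needs a full window
            reported.append(sum(recent) / window)
    return reported

# Hypothetical Monday-through-Friday margins (candidate A minus candidate B):
print(rolling_average([6, 4, 5, 8, 3]))
# Thursday's release averages Mon-Wed; Friday's drops Mon and adds Thu.
```

The `deque` with a fixed `maxlen` captures the key design point: each new night of interviews automatically pushes the oldest night out of the reported sample.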
Does that daily rotation compromise the methodology? Not necessarily. It depends on how the pollster handles numbers where no one answers the phone or where the desired respondent is not home.
In most traditional surveys, the pollsters will repeatedly "call back" in an attempt to contact as many sampled voters as possible. The more "call backs," the greater the response rate and -- in theory, at least -- the smaller the potential for error in the survey. But the "rolling average" survey need not be at a disadvantage when it comes to calling back over successive nights. Here's how: Each night, the pollsters start calling a fresh sample of numbers. However, they need not throw out that sample the next day; they can continue dialing the "no answers" for several nights. As such, each night's completed interviews can include a mix of numbers dialed just once and numbers dialed repeatedly until someone answers.
That is exactly what the Gallup Daily tracking survey does. The pollsters call each sampled number at least five times over two to three successive evenings, essentially the same procedure used for the surveys Gallup conducts in partnership with USA Today.
The Rasmussen survey, on the other hand, rarely calls a selected number more than once over successive nights.
How often do other national polls call back unavailable voters? Some pollsters are more forthcoming than others about their procedures, but the number of days each poll spends in the field offers a good clue. The field periods of the 17 "traditional" national polls released in June ranged from two to eight nights (with a median of four). So while traditional polls are probably calling back more often, the Gallup Daily procedure is not radically different.
However, there are many other methodological differences having nothing to do with the "tracking" aspects of the survey that deserve far more attention. Gallup uses live interviewers, as do most of the other national surveys. Gallup also has Spanish-speaking interviewers available if necessary.
Rasmussen is an automated (interactive voice response, or IVR) survey. Respondents hear a recorded voice and answer by pressing the keys of their touch-tone phones.
Since January, Gallup has included a supplemental sample of voters from "cell-phone-only" households interviewed on their cell phones, something most of the other national surveys are not yet doing. Federal regulations prohibit IVR pollsters like Rasmussen from interviewing voters by cell phone.
Rasmussen screens to select voters that it identifies as "likely" to vote. The Gallup Daily tracking, for now at least, reports on self-identified registered voters (the USA Today/Gallup surveys also report the results based on their sometimes controversial likely voter model).
Gallup weights its base sample of adults by demographics (such as gender, age and race) to match the statistics provided by the U.S. Census, but the results are not adjusted for party identification. Most of the other "traditional" national surveys take the same approach. Rasmussen weights by demographics and also adjusts the mix of self-identified Democrats and Republicans to match the results it has obtained in its last three months of interviewing.
Finally, the vote preference questions asked by the various national surveys use slightly different language and appear at different points in the interview, and those interviews vary in length (the Gallup editors speculated about some of these differences in an analysis released last week).
So which of these factors is responsible for the difference? I do not have an easy answer, but it may help to look back at that chart to keep this whole issue in perspective.
Over the last few weeks, the Rasmussen results have been remarkably close to the mashed-up results of the "traditional" national surveys. As for Gallup, while it has shown a closer race, the difference has been at most 2 to 3 percentage points on the margin between the two candidates. That difference is real, but for now, not huge.