Earlier this month, MSNBC's First Read offered a warning about President Obama's job approval rating: "There are more BAD polls now than ever before; it confuses the issue and lets some folks cherry-pick what they want. The VERY erratic robo-polling firms have added to the confusion like never before."
They're half right. We certainly see more job approval polls now than ever before, with two firms, Gallup and Rasmussen Reports, producing daily rolling tracking polls all year round. Those results can vary greatly. Our Pollster.com chart shows that approval typically bounces around within a 10-percentage-point range.
The usual random noise inherent in sample surveys explains some of the variation; the rest comes from systematic differences in question wording or the populations surveyed (such as adults vs. "likely voters"). But either way, First Read is right that the variety makes it easy to cherry-pick numbers to fit any conceivable narrative.
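To see how far random noise alone can move a number, here is a back-of-the-envelope sketch of the 95 percent margin of error for a proportion near 50 percent. The sample size of 1,500 is a hypothetical figure chosen for illustration, and the formula assumes simple random sampling:

```python
import math

# Hypothetical sample of 1,500 adults; p near 0.5 maximizes the margin of error.
n = 1500
p = 0.5

# Standard 95% margin of error for a sample proportion (simple random sampling).
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(round(100 * moe, 1))  # roughly 2.5 percentage points
```

Since each poll carries its own margin of error, two polls measuring the same true approval rating could easily land five points apart from sampling noise alone, before any systematic differences in wording or population enter the picture.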
But are automated surveys, the so-called "robo-polls" that use a recorded voice asking respondents to answer by pressing keys on their touch-tone phones, inherently less reliable than those using more traditional methods?
As I noted here a few weeks ago, independent analyses from the National Council on Public Polls, the American Association for Public Opinion Research, the Pew Research Center, the Wall Street Journal and FiveThirtyEight.com have all shown that the horse-race numbers produced by automated telephone surveys did at least as well as those from conventional live-interviewer surveys in predicting election outcomes.
Also, the national job approval data does not support the assertion that automated polls are more "erratic." My Pollster.com partner Charles Franklin checked and found that despite identically sized three-day samples, the Rasmussen daily tracking poll is less variable than Gallup (showing standard deviations of 1.8 and 2.4, respectively), probably because Rasmussen weights its results by party identification.
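Franklin's comparison boils down to computing the day-to-day standard deviation of each tracking series. A minimal sketch of that calculation, using invented illustrative numbers rather than actual Gallup or Rasmussen data:

```python
import statistics

# Hypothetical daily approval percentages for two tracking polls.
# These values are invented for illustration only.
poll_a = [47, 50, 45, 49, 44, 51, 46, 50, 45, 48]  # bouncier series
poll_b = [47, 48, 46, 48, 45, 48, 46, 47, 46, 47]  # steadier series

# The sample standard deviation measures how much each series
# fluctuates around its own average from day to day.
print(round(statistics.stdev(poll_a), 1))  # larger spread
print(round(statistics.stdev(poll_b), 1))  # smaller spread
```

A lower standard deviation, as Franklin found for Rasmussen, means less day-to-day bounce; weighting by party identification tends to damp exactly the kind of compositional swings that inflate it.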
The objections raised by news organizations center on children. "Anyone who can answer the phone and hit the buttons can be counted in the survey -- regardless of age," says the Times. "There's no way to verify whether an 8-year-old is on the line pushing the buttons," the Post explains. Automated polls, the AP Stylebook chimes in, "cannot exclude children from adult samples."
They can't? Automated surveys can and do ask respondents to provide their age, just like live-interviewer polls. Yes, theoretically, a 15-year-old girl could pretend to be a 65-year-old man. But why would she do so? A curious teenager might want to stay on the phone to answer automated poll questions, but how would she know that she needed to lie about her age to be counted?
If the automated pollsters offered up-front incentives like, say, a free Sony PSP or the latest "High School Musical" DVD (they don't), if Miley Cyrus were the recorded voice on the other end of the line (she isn't), maybe. But that's not the way it works. (Automated pollster SurveyUSA prefers to use the voices of well-known local news anchors). Do we really think that American children are waiting by the phone, knowing that they need to impersonate an adult, in order to answer questions about politics and public policy? Please.
A potentially more important shortcoming, also identified by the AP, is that automated surveys "do not randomly select respondents within a household." True. Older women tend to be the first in their families to answer a ringing phone, so ideally telephone surveys should ask for a specific random person. Most automated polls simply interview whoever answers first and use quotas or weighting to correct any age or gender bias.
This is not a fatal flaw, however, because many respected pollsters also fail to select respondents "randomly" within households, using instead a method in which they always ask to speak to the "youngest male" in the household first, knowing that this procedure nudges their samples in the right direction. Pollsters justify the compromise as a trade-off between one kind of potential error (bias from losing respondents put off by the invasive questions used to identify household members) and another (making the sample less random).
Automated pollsters argue that theirs is a different kind of trade-off: Without an interviewer, respondents are more likely to give honest answers about whether and for whom they plan to vote. So to improve the accuracy of their measurement, they accept a little less randomness in their selections.
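The weighting correction described above, interviewing whoever answers first and then adjusting the sample to match population targets, can be sketched in a few lines. The counts and population shares below are invented for illustration, not drawn from any actual poll:

```python
# Hypothetical completed interviews: first-answerers skew female.
sample = {"female": 600, "male": 400}

# Assumed adult-population gender shares (illustrative targets).
target = {"female": 0.52, "male": 0.48}

total = sum(sample.values())

# Each group's weight is its target share divided by its share of the sample,
# so that the weighted sample matches the population composition.
weights = {g: target[g] / (sample[g] / total) for g in sample}

for g in sample:
    weighted_share = weights[g] * sample[g] / total
    print(g, round(weights[g], 2), round(weighted_share, 2))
```

After weighting, each group's effective share equals its population target: over-represented first-answerers count for a little less, under-represented ones for a little more. Real pollsters weight on several variables at once, but the principle is the same.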
That said, we should recognize that something is lost when we use the automated methods to poll on subjects more complex than horse-race results. Automated polls are typically very short and do not lend themselves to open-ended questions, and no interviewer is available to probe for more complete responses or help clarify meaning on confusing questions.
Howard Schuman, a highly respected academic who has spent a career studying the role interviewers play in survey research, recently lamented the rush to more "direct" paths to respondents, "on the way to wireless tapping of blood flow in selected areas of a respondent's brain." Yet he also has praise for SurveyUSA, having listened to their interviews and worked with their data. In an e-mail to me, he expressed amazement that surveys can continue to "exist with response rates of around 10% or so."
"It's a remarkable experience," he says. It sure is.