Two polls released last week on the special election for Senate in Massachusetts may be a harbinger of our polling future.
One, sponsored by the liberal Web site Blue Mass Group and conducted by polling firm Research2000 using live interviewers, showed Democrat Martha Coakley leading Republican Scott Brown by 8 points (49 percent to 41 percent). Another poll, sponsored by the conservative site Pajamas Media and conducted by the Republican firm CrossTarget using a recorded, automated methodology, showed a diametrically opposite result: Brown leading Coakley by 15 points (54 percent to 39 percent).
While the huge divergence alone is troubling, what really captured my attention were the entities that sponsored the surveys: "new media" outfits with clear partisan or ideological leanings. Neither is the type of organization that typically conducts public polling.
Two other stories that got a lot of attention last week explain why these two polls may be just the beginning of a whole new wave. Conservative commentator David Frum's site reported that automated pollster Scott Rasmussen will soon launch a new venture, Pulse Opinion Research, that will allow anyone willing to pay $600 to "go to the [Pulse] website, type in their credit card number, and run any poll that they wanted, with any language that they want."
Yes, you read that right. For a remarkably low price, anyone with a credit card can write their own questions, and Rasmussen's company will record them and use them to conduct an automated telephone survey.
That news coincided with the launch of Precision Polling, a self-service Web site that allows anyone to enter their own questions, call a toll-free number and record the questions in their own voice, submit a list of telephone numbers and start dialing at a cost of 10 cents per call.
The Web site suggests that political polling is an ideal application: "Find out how your candidate or issue is doing," they urge, "by quickly running a simple poll. You can set it up to run on a regular basis to track performance over time."
Precision Polling's public debut prompted mixed opinions on the members-only listserv of the American Association for Public Opinion Research -- but mostly skepticism. "Surely this was a story in The Onion," one researcher quipped.
My AAPOR colleagues might want to dial down their cynicism in one respect: Ventures like Pulse and Precision Polling are deadly serious. As described to me by CEO Gaurav Oberoi, Precision Polling has made its software surprisingly powerful. Among the things it allows for (warning, pollster jargon to follow): question branching, rotation of answer categories, full disposition reports, calculation of response rates, and a rudimentary ability to weight data. In the works are more sophisticated weighting options and the ability to purchase a telephone sample.
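For readers outside the profession, the last two items on that list can be illustrated with a short sketch. The formula below is AAPOR's simplest response-rate measure (RR1: completed interviews divided by all eligible and potentially eligible cases), and the weighting step is basic post-stratification. The numbers and demographic categories are hypothetical, chosen only to show the arithmetic, not anything from Precision Polling's actual software.

```python
# Illustrative sketch of two pieces of pollster jargon:
# a minimal response-rate calculation (AAPOR RR1) and
# simple post-stratification weighting. All figures are hypothetical.

def response_rate_rr1(completes, partials, refusals, non_contacts, unknown_eligibility):
    """AAPOR RR1: completed interviews divided by all interviews,
    non-interviews, and cases of unknown eligibility."""
    denominator = completes + partials + refusals + non_contacts + unknown_eligibility
    return completes / denominator

def poststratification_weights(sample_share, population_share):
    """Weight each demographic group by (population share / sample share)
    so the weighted sample matches known population totals."""
    return {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical call dispositions from one night of automated dialing.
rr1 = response_rate_rr1(completes=500, partials=50, refusals=1200,
                        non_contacts=2000, unknown_eligibility=1250)
print(f"RR1: {rr1:.1%}")  # 500 / 5,000 dialed cases = 10.0%

# Hypothetical age breakdown: the raw sample skews older than the electorate.
weights = poststratification_weights(
    sample_share={"under_45": 0.30, "45_plus": 0.70},
    population_share={"under_45": 0.45, "45_plus": 0.55},
)
print(weights)  # younger respondents weighted up, older weighted down
```

Low response rates are a chronic issue for automated polls, which is one reason disposition reports and weighting tools matter even in a $600 self-service product.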
In other words, it scales a system suited to "polling" 20 friends up to the sort of power a professional pollster might want. In fact, according to Oberoi, a handful of survey research companies have already used the service to conduct political polls for campaign clients.
What's troubling is less the technical capacity of these ventures than the potential consequences of their success. Over the last 10 years, we have seen the proliferation of automated polling reshape the way we follow and cover political campaigns. A decade ago, automated polls were a curiosity dismissed by journalists and political professionals. In 2008, 37 percent of the polls we logged on the presidential race at Pollster.com were conducted using an automated telephone methodology, though virtually all of those were done by just three firms: Rasmussen, SurveyUSA and relative newcomer Public Policy Polling.
What will happen if we start to see a flood of polls from the kinds of Web sites that sponsored the two wildly divergent polls in Massachusetts last week? What will happen if hundreds of local political organizations and PACs choose to conduct and release their own robo-polls?
The answers are probably grist for a series of columns, but the undeniable trend toward more and more polls sponsored by a wider range of interests and based on increasingly diverse methods cries out for:
• Better measures of quality. That's no easy task in a profession still debating what defines "quality," but the popularity of various attempts to score the "accuracy" of political polling shows that consumers of the data, whether journalists or ordinary political junkies, are eager for simple scoring of survey quality.
• Better disclosure. Assessments of quality will not be possible unless pollsters and their sponsors start disclosing far more about their methods and respondents. A simple example: The two divergent Massachusetts polls both sampled "likely voters," yet neither release tells us how such voters were identified or defined.
• Safeguards against abuse. The explosion of persuasion robo-calling, often under the false guise of a survey, has prompted calls to ban all automated polling. AAPOR member Mike Donatello worries that these new polling ventures, if successful, "could become a catalyst for even greater regulation of phone-based interviewing." Precision Polling says it does "not allow soliciting or political advertising," but how would it prevent more subtle abuses, such as so-called "push polling"? Oberoi told me the company "support[s] AAPOR's stance" on push polling but did not have a specific plan to police it.
O brave new world that has such challenges in it.