Turn on the television, open up a newspaper, or visit any website for information about the 2012 presidential campaign, and you’ll likely be inundated with polling data, even months before the election. You’ll see national polls, as well as polls of the various battleground states that are likely to decide whether Barack Obama or Mitt Romney will be taking the oath of office on the West Front of the Capitol on Jan. 21. Some websites are even dedicated to the compilation and aggregation of poll data, which include not only surveys conducted by legendary organizations such as Gallup but also newcomers such as Rasmussen Reports and Public Policy Polling.
The proliferation of polls belies an important reality, however: The days of accurate telephone polling are numbered. With more and more Americans dropping their landline service, reliable phone surveys are becoming prohibitively expensive for news organizations and nonprofit groups with tight budgets. Many news outlets are choosing to forgo the rigorous survey research they have commissioned for decades.
Despite the increased difficulty—and cost—of phone surveys, pollsters insist that their results are still accurate. But the rapid pace of change in the way Americans communicate with each other is forcing public-opinion researchers to face the fact that this soon will not be the case.
The polling industry is clearly at a crossroads. In 2016, or 2020, telephone surveys may no longer be the prevailing way to measure what the public thinks or how it intends to vote. But all of the alternatives, for now, have significant drawbacks that put the industry in a sort of limbo—clinging desperately to an increasingly outdated practice while it tries to find better methodologies.
Conversations with prominent survey researchers—academic, media, and private campaign pollsters—reveal both optimism and uncertainty about the future of the polling industry and the medium-term prospects for accurately measuring public opinion. The future may not be now for public-opinion polling, but it is fast approaching.
PICK UP THE PHONE
In May, the Pew Research Center released a study showing a dramatic drop in the percentage of households participating in its surveys. Pew reported that its contact rate—the percentage of households in which an adult was reached at all—had fallen to just 62 percent in 2012, down from 90 percent in 1997. Of those successfully contacted, the cooperation rate—the percentage of contacts with an adult that yielded an interview—was only 14 percent, down from 43 percent 15 years earlier. That means Pew’s overall response rate—the ratio of completed interviews to the number of phone numbers dialed—was just 9 percent, one fourth of the 36 percent level from 1997.
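Pew’s overall 9 percent figure is, to a first approximation, the product of the two component rates. A back-of-envelope check (Pew’s published calculation also folds in a retention factor, so the product is only approximate):

```python
# Pew's 2012 component rates, from the study described above
contact_rate = 0.62      # share of households where an adult was reached
cooperation_rate = 0.14  # share of those contacts that yielded an interview

# The overall response rate is roughly the product of the two stages
response_rate = contact_rate * cooperation_rate
print(round(response_rate * 100))  # prints 9, matching Pew's reported figure
```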
The plummeting response rate has far-reaching implications for public-opinion polling, but it is important to understand first why fewer Americans are participating in telephone surveys.
Pew points to the inclusion of cell phones in its sampling as contributing to the sharp decline in response rates. Thirty-four percent of American households have only wireless telephones, according to the most recent data from the National Health Interview Survey, recorded during the second half of 2011. During the second half of 2008, at the time of the last presidential election, just 20 percent of American households were cell-phone-only.
The explosion in the number of American households without a landline phone has already changed telephone polling. Polls that rely only on random-digit dialing of landline telephones risk under-sampling key groups. For example, nearly six in 10 adults ages 25-29 live in cell-phone-only households, the National Health Interview Survey showed. By comparison, only 8.5 percent of Americans 65 or older live in homes with only wireless phones. Americans in cellular-only households are also more likely than those in landline households to rent their homes, be male, live at or near the poverty level, live outside the Northeast, and be Hispanic.
And yet, automated polls that rely on landline phones are proliferating. Over the past decade or so, the number of firms that conduct so-called interactive voice-response polls, in which respondents are questioned by a recorded interviewer and answer via keypad or voice recognition, has increased dramatically. Penny-pinching media outlets have also increased their use of these surveys, conducted by firms such as Rasmussen Reports, SurveyUSA, and Public Policy Polling. It is illegal to conduct an automated poll over a wireless phone, so these “robo-polls” miss the sizable number of cell-phone-only households. (Some of these firms, including Rasmussen and SurveyUSA, controversially supplement their samples with nonrandom, opt-in Internet panels in an effort to reach respondents without landlines.)
“Our elections are being decided by old women without cell phones, because that’s who robo-polls are reaching,” said Rutgers University professor Cliff Zukin, who spoke at a roundtable discussion at the May annual conference of the American Association for Public Opinion Research.
The decline in the number of Americans with landline phones has also had a significant effect on live-caller surveys. Until about five years ago, nearly all live-caller polls contacted only landline respondents. But with the growth in cell-phone-only households, that methodology clearly risks sampling error.
Live-caller polls, in most cases, use a computer to randomly select and then dial a telephone number, after which a human operator gets on the line and asks the survey questions. But federal law prohibits using an auto-dialer to call a wireless telephone—meaning that, to reach respondents on cell phones, the interviewer must dial the number manually. That process is more time-consuming and, as a result, more costly.
“Finding and surveying a representative sample has become much, much harder to do,” said Zukin, a past president of AAPOR. “And it’s become much more expensive to do.”
Nearly all major live-caller polling organizations, including Pew, Gallup, and pollsters for the country’s major news organizations, now include cell-phone-only households. But even that adjustment doesn’t guarantee a representative sample.
Former Democratic pollster Mark Blumenthal, cocreator of the site Pollster.com, examined a number of Gallup polls earlier this year. Blumenthal found that Gallup registers lower approval ratings and ballot-test performances for President Obama than other survey houses—differences that are small but statistically significant.
Blumenthal’s analysis starts with the acknowledgment that, to an increasing degree, pollsters must weight their results to reflect the overall population or electorate. Declining response rates have led to “nonresponse bias,” meaning that some demographic groups—such as blacks and Hispanics—are less likely to complete telephone surveys. A typical poll is likely to undercount these groups, and pollsters must adjust their results to achieve a representative sample.
Obama runs much stronger among African-Americans and Hispanics than among whites, and Blumenthal found that Gallup’s polls, in many cases, contain lower percentages of these respondents than other polling organizations. Most pollsters weight their results to data obtained by the Census Bureau; data from the bureau’s in-person American Community Survey and Current Population Survey are considered the most accurate in defining the country’s population. Blumenthal found that Gallup weights its polls not to the overall population but to the population of households that own any type of telephone. Blacks and Hispanics are less likely to own a phone, so Gallup’s polls underrepresent those groups slightly. Additionally, Gallup uses a different method for respondents to self-identify their race, a method that exacerbates the underrepresentation of African-Americans, Blumenthal found.
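The demographic weighting Blumenthal describes can be sketched as simple post-stratification: each respondent gets a weight equal to his or her group’s population share divided by its sample share, so underrepresented groups count for more. The targets below are illustrative numbers, not actual census figures:

```python
from collections import Counter

def poststratify(groups, targets):
    """Weight each respondent by (population share / sample share) of his
    or her demographic group -- simple one-variable post-stratification."""
    counts = Counter(groups)
    n = len(groups)
    return [targets[g] / (counts[g] / n) for g in groups]

# Illustrative only: a 100-person sample that undercounts black and
# Hispanic adults relative to hypothetical population targets
sample = ["white"] * 80 + ["black"] * 8 + ["hispanic"] * 12
targets = {"white": 0.66, "black": 0.12, "hispanic": 0.22}

weights = poststratify(sample, targets)
# Each black respondent is weighted up by 0.12 / 0.08 = 1.5; after
# weighting, the sample's group shares match the population targets.
```

As Blumenthal notes, the lower the response rate, the larger these corrective weights must be, and the more strain they place on the final estimates.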
All told, Blumenthal’s analysis of Gallup’s data led him to a pessimistic view of the future of phone polling. “Sophisticated random samples, live interviews, and rigorous calling procedures alone can no longer guarantee accurate results,” Blumenthal wrote. “Today’s rapidly declining response rates require more weighting than ever before to correct demographic skews, a phenomenon that places growing stress on previously reliable weighting procedures.”
Gallup Editor in Chief Frank Newport defended his methodology in response to Blumenthal’s article and in an interview with National Journal. “One of the biggest changes in recent years, even as our response rates go down, we have millions of people, tens of millions who are desperate to give their opinion,” Newport said.
Incoming AAPOR President Paul Lavrakas said in a telephone interview with NJ that phone polling is “nowhere near dead, nor do I expect it to be dead for several decades. I don’t see that anytime soon we’re going to stop using random-digit-dial surveys in America,” he added.
Lavrakas also said that one reason telephone polling about elections remains fairly accurate is that the same individuals who are more willing to be surveyed also vote more often.
ENTER THE INTERNET
With consumer behavior upending traditional polling methods, Zukin of Rutgers predicts that pollsters will stop conducting dual-frame phone surveys (contacting both landline and cell-phone users) “within the next five years. I think we’re going to survey people with whatever mode they wish.”
That means Internet- and smartphone-based surveys, Zukin says. Indeed, a significant number of the sessions at the pollsters’ conference focused on this kind of research, which uses a methodology known as non-probability, or nonrandom sampling. In many cases, these surveys are completed by respondents who “opt-in,” clicking on a link to complete a poll or joining a Web-based panel (or downloading a smartphone application) to complete surveys—usually with monetary incentives or rewards given for doing so.
Some critics in the polling community are highly skeptical of this type of research. Yes, Internet pollsters can create and weight their panels to reflect the public at large, using demographic information to make their samples more representative, they say, but that kind of weighting can serve in some cases to further distort unreliable data.
One of the first adopters of non-probability sampling for public-opinion surveys was Stanford University professor Doug Rivers, who founded YouGov and Knowledge Networks to explore polling with preselected Internet panels crafted to match the population at large.
The problems facing telephone surveys—low response rates and the increase in Americans who have cell phones only—are “not minor,” Rivers said, and “[they are] not going away.” Respondents are “selecting us; we’re not selecting them,” he said. Rivers conceded, “Do I think that an opt-in sample is better than a probability sample? The answer is, no, I don’t.” But he argued that the lower costs associated with Web-based surveys will drive the polling industry in that direction. “This means that studies that otherwise would not be done will get done,” he said. Zukin agreed. “It’s too cheap to hold off,” he said.
Scott Keeter, the director of survey research for the Pew Research Center, said that pollsters should engage in “enthusiastic experimentation” with these new methods. But Keeter also cautioned that “we want to judge the quality and applicability of these methods carefully before jumping into the pool. The paradigm of most of the survey work of AAPOR’s members is the probability sample. If we edge away from the probability model as we explore the new frontier, we must keep an eye on how well we are representing populations that aren’t as present on the Internet, Facebook, Twitter, or the commercial and credit databases that can be mined for insights.”
ON THE TRAIL
Most of the attendees at the pollsters’ conference in May were academics or market researchers. Some, like Gary Langer, who conducts polls for ABC News, are pollsters for major media outlets (news organizations represented included The New York Times, The Washington Post, the Associated Press, ABC News, and CBS News). Only a few were campaign pollsters, the type of people who advise and conduct surveys for political candidates or organizations with electoral interests.
One of them was Christine Matthews, a Republican pollster in Alexandria, Va., whose client list includes Indiana Gov. Mitch Daniels. “Political pollsters are not usually the industry leaders,” she told NJ, when asked why relatively few of her fellow consultants attended. Matthews noted that many of her competitors—on both sides of the political aisle—are still conducting landline-only polls. “There are campaigns who don’t want to pay the price to include cell phones,” she said.
Alex Lundry, director of research at TargetPoint Consulting, a GOP micro-targeting firm based in Alexandria, Va., attended his first AAPOR conference this year with an eye toward the future. “I’m going to be in this industry for the next 30 years,” said Lundry, who moved recently to Boston to work for Romney full time, charged with leading a data-science team within the campaign’s strategy department. “And there’s no way 10 years from now the phone survey will be the dominant mode of data collection. I’d hate to see the kind of scare headline come out that ‘Phone Polling Is Dead.’ It’s not dead yet. But we’re at a point now where we really need to start looking at these alternatives. We’re at an inflection point now.”
Asked what would replace phone polling, Lundry offered a few suggestions. “We’ve got to figure out how we can use the Internet. We’ve got to figure out how we can use smartphones.” But, he cautioned, “we need to get to a place where these new methods are accurate and reliable. We’ve got to get them right. We’re at this very strange gray area where neither [phone nor Internet polling] is very good. And we’re hanging on to what we know.”
Lundry has used Internet surveys for message-testing purposes, finding them useful to track which arguments are most or least effective for or against his candidate or cause. He said he can use an Internet survey to interview panel members about multiple messages, in addition to other information that can be discerned from this methodology. For instance, the speed with which users respond to certain images or messages could provide additional insight into the depth of their effectiveness. “I’ve gotten to a point where I am 95 percent comfortable with doing message testing online,” Lundry said. “The data can get richer and more interesting and more insightful when you move online. We can get to a point where we can actually have deeper insights into a race.”
A May Internet poll in the hotly contested California primary between Democratic Reps. Brad Sherman and Howard Berman (the two were drawn into the same seat as a result of the state’s decennial redistricting process) took advantage of the methodology to rate the effectiveness of each candidate’s television advertising. That poll—conducted for the University of Southern California by a bipartisan team of California campaign pollsters, Democratic Tulchin Research and Republican M4 Strategies—prompted respondents to rate how persuasive each ad was as the spot played for them on their computer screens. That allows analysts to evaluate the merit of the arguments within each TV ad in a more precise way, without the expense of empaneling an in-person focus group.
Sherman and Berman finished first and second, respectively, in the top-two primary and will meet in a November general election that will likely cost each lawmaker millions to advertise in the Los Angeles media market.
The future of social-media monitoring by news organizations and campaigns as a means of measuring public opinion is murkier. Republican pollster Jon McHenry said he is not sure how he could mine Twitter to assist his clients and dig out useful information. “Once you have 1 million tweets, what do you do with it?” he asked rhetorically.
That remains an open question for all survey researchers. Trying to infer meaning from the Twittersphere is a tricky proposition. What do Americans think about politics, or public policy, or popular culture? Applying so-called sentiment analysis to Twitter and other social media is one option. (Sentiment analysis tries to determine the attitude of the writer from the language used in his or her message.) But doing so in a way that accurately reflects how the broader population feels about these issues is very problematic. “We’re at the very, very early stages of understanding the strengths and limitations” of using non-survey approaches to measure public opinion, Lavrakas said.
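The simplest form of sentiment analysis scores each message by counting words from hand-built positive and negative lists. The word lists below are tiny and purely illustrative; production systems use far larger lexicons or trained statistical models, and, as the researchers caution, none of this says whether the people tweeting resemble the broader public:

```python
import re

# Tiny, hypothetical word lists -- real lexicons contain thousands of entries
POSITIVE = {"great", "strong", "win", "support", "love"}
NEGATIVE = {"weak", "wrong", "fail", "lose", "hate"}

def sentiment_score(message):
    """Count positive minus negative words; >0 reads as favorable."""
    words = re.findall(r"[a-z']+", message.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("Great debate tonight, a strong win"))  # prints 3
print(sentiment_score("A weak and wrong answer"))             # prints -2
```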
Lundry said he hoped that Facebook, whose parent company recently completed its initial public offering, would seek to monetize its membership and allow survey researchers access to its users. “The only hope I have of getting a really good sampling frame [online] is that Facebook opens up their membership to real polling,” he said.
Researchers can use Facebook now for polling, but most surveys conducted through the site contain just one question, making it impossible for pollsters to detect change over time among respondents. “River sampling”—recruiting individuals through pop-up ads and other targeted methods—can help reach specific groups but is unlikely to provide a truly random sample. As Lundry points out, though, with response rates of 9 percent, “are phones truly probability samples anymore?”
“The numbers are just falling off the cliff,” Lundry added. “Every single number that measures the quality of a phone survey is headed in the wrong direction. How much longer can we sustain that?”
BLAMING THE MEDIA
Pollsters say that it’s not just rapidly changing communication habits that are threatening the polling industry. The media is also a culprit, they say, when it uncritically reports on polls with dubious methodology and misinterprets those with rigorous methodology. “It’s harder to tell good [polls] from bad. There are fewer journalists who are trained for this,” said Zukin, the Rutgers professor. “I really think we’re losing that battle.”
All of this brings us back to 2012. The White House and control of both houses of Congress are at stake, and while the public demand for political news—specifically, news about political surveys—remains high, fewer news organizations are willing to pay for high-quality polling. These same news organizations are also cutting staff, including those with experience reporting and analyzing public-opinion data. Journalism “does not have an audience problem,” Pew’s Scott Keeter says. “It has a money problem. Even as the audience for mainstream news organizations has remained stable or even grown, revenues have plummeted.” Despite the challenges facing the polling industry, Keeter says he’s confident that pollsters “are up to meeting those challenges.”
The four months leading up to the 2012 elections will help illuminate the magnitude of those challenges. And the years that will follow will demonstrate how equipped public-opinion researchers are to overcome them.
Gallup’s Frank Newport quotes a line from a movie when asked about the state of political polling. “I’m very optimistic. There’s constant change,” he said. “As Woody Allen said in Annie Hall, a shark has to keep moving or it dies.”
But Newport doesn’t mention the rest of the line, which was delivered as Allen and Diane Keaton’s title character bemoaned the state of their relationship. His next words: “And I think what we got on our hands is a dead shark.”
This article appears in the July 21, 2012, edition of National Journal.