Politicians, pundits, and partisan journalists throw around a lot of statistics about where the public stands on the major issues of the day. They usually cherry-pick the best numbers for their side from the vast array of polls we have on any given topic.
The American public doesn’t have well-formed views on every topic, particularly on new or emerging events, or on the details and implications of proposals and legislation. That means the exact numbers in a poll are a function of a few factors: actual opinions on the topic, partisan or ideological cues, and the poll setup itself, meaning question wording and placement.
Pollsters sometimes directly test question wording, as YouGov did to gauge approval of the firing of Kristi Noem from her role as Homeland Security secretary. One version of the question asked about “the decision” to fire Noem; another referred to “Trump’s decision” to fire her. Overall, “Trump’s decision” netted higher approval than the version without President Trump’s name, driven chiefly by MAGA Republicans.
More often, we don’t have a direct test like YouGov’s, and we have to compare questions on a topic across different polls, which use different methodologies and samples. Sometimes the same poll will change its question wording compared to an earlier version. In that case, we don’t know whether any response shifts are due to the wording or to an actual opinion change over the time between polls.
One example of the latter is The Washington Post’s polling on Iran. The pollsters collected data just after the initial airstrikes by the U.S. and Israel, and found that 39 percent approved and 52 percent opposed “President Trump ordering airstrikes against Iran.” A week later, a new poll found that 42 percent approved and 40 percent opposed the “U.S. military campaign against Iran.” The indication that opinion had actually changed came from another question that was worded the same in both polls: When the Post asked whether the strikes should continue, support increased from 25 percent to 34 percent.
Polling on Iran generally showed considerable variation in the first week of the war, depending on which pollsters conducted it and what wording they used. Results have now mostly settled into a narrower range: Generally, a majority of Americans oppose the war and the way Trump is handling it, while around 40 percent approve. Questions about whether strikes should continue, what the outcome of the war should be, whether the Trump administration should have started it, and approval of the whole thing all show different results—because they are fundamentally different questions.
The exceptions we see—notably the Fox News Poll, which showed 50 percent support—are affected by other aspects of survey design. In Fox’s case, pollsters asked several questions about the dangers Iran posed to the world before the question about support for the war.
That question-order issue raises another variable: How much information should poll respondents get—and what information should they get—to gauge their opinions accurately? Most major media polls don’t include much context in their questions, largely because it opens the door to arguments about bias based on what information is included. It is very easy to use biased wording when explaining issues, and even easier for others to accuse pollsters of bias.
Including context becomes a huge issue with legislation like the SAVE Act. Republicans say that 80 percent of Americans support voter ID, implying or directly asserting that this piece of legislation enjoys that level of support. Roughly 80 percent of Americans do indeed support a photo-ID requirement for voting, and more states already require some form of voter ID than do not.
The SAVE Act involves more than just voter ID, though, and voters’ opinions will shift depending on what information the poll question includes. A Harvard-Harris poll provided respondents with some of the legislation’s provisions and found 71 percent support for the bill; the White House cites that result on its website. Notably, that support level was recorded after respondents had answered several questions about the bill’s most popular provisions.
Of course, direct persuasion in a poll question also works. The SAVE Act’s opponents have no trouble providing information and priming responses—including Trump’s statements about nationalizing elections—to sway opinion in the other direction.
Going back to the Iran example, McLaughlin & Associates found ways to push support for the war to nearly 60 percent by describing the threat Iran poses to the world in stark, hawkish terms across three questions. Is that information biased, or is it necessary context? Your answer probably depends on whether you like the poll’s results.
Contributing editor Natalie Jackson is a vice president at GQR Research.