AI is quickly changing huge swaths of society and reshaping countless industries. Already in politics, we have to worry about “deepfake” photos, voices, and videos; the Federal Communications Commission and other watchdogs doled out multimillion-dollar fines to individuals and groups who impersonated then-President Biden in phone calls and videos during the 2024 presidential campaign.
AI is coming for polling, too. Its potential uses range from being a simple tool to help pollsters do our jobs more efficiently, to replacing us completely.
The American Association for Public Opinion Research’s conference in May focused on AI. A couple dozen panels, often with standing-room-only crowds, showcased the many ways we can use AI. (To underscore the pace of change for pollsters over the last two decades, when I first attended this conference in 2008 the standing-room-only crowds were for talks about the new world of polling via cell phones.)
On the time-saving front, AI can help us sort through “open-ended” responses—the free-form answers people give us when we ask questions without giving them options to choose from. It’s great to get people’s own words, but sorting through several hundred or thousand unique answers requires many human hours. AI can do that for us—with human assistance—in far less time.
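To make that workflow concrete, here is a minimal sketch of LLM-assisted coding of open-ended responses. It assumes access to OpenAI’s chat completions API via the `openai` Python package; the model name, category labels, and sample answers are illustrative placeholders, not any pollster’s actual setup.

```python
# Minimal sketch: LLM-assisted coding of open-ended survey responses.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
# Categories and sample answers are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

CATEGORIES = ["economy", "health care", "immigration", "other"]

def code_response(text: str) -> str:
    """Ask the model to assign one category label to a free-form answer."""
    prompt = (
        "Classify this survey answer into exactly one of these categories: "
        f"{', '.join(CATEGORIES)}.\n"
        f"Answer: {text!r}\n"
        "Reply with the category name only."
    )
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap in whatever you use
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep labels stable across runs for reproducible coding
    )
    label = result.choices[0].message.content.strip().lower()
    # Anything off-list falls to "other" and gets flagged for human review.
    return label if label in CATEGORIES else "other"

open_ends = ["Groceries cost way too much", "I worry about the border"]
print([code_response(t) for t in open_ends])
```

The human assistance is the key part: a coder still spot-checks a sample of the model’s assignments, which preserves quality while cutting the hours involved from many to few.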
AI can also help us enhance web-based surveys. Telephone interviewers can ask follow-up questions to draw complete answers out of respondents who are vague, but there hasn’t been a comparable option for online surveys. Now, we can program AI prompts into surveys to follow up on incomplete or vague answers. We can even use AI to help write the questions.
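Here is one way such an in-survey probe might work, as a rough sketch only: the word-count threshold, prompt wording, and model name are my assumptions for illustration, not any survey vendor’s actual implementation.

```python
# Sketch of an AI follow-up probe for a web survey. The vagueness check,
# prompt, and model name are illustrative assumptions, not a vendor's design.
from openai import OpenAI

client = OpenAI()

def needs_probe(answer: str, min_words: int = 5) -> bool:
    """Crude vagueness check: very short answers trigger a follow-up."""
    return len(answer.split()) < min_words

def generate_probe(question: str, answer: str) -> str:
    """Ask the model to write one neutral follow-up question."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{
            "role": "user",
            "content": (
                f"A survey asked: {question!r}\n"
                f"The respondent wrote: {answer!r}\n"
                "Write one short, neutral follow-up question asking them "
                "to say more. Do not suggest any particular answer."
            ),
        }],
    )
    return result.choices[0].message.content.strip()

question = "What is the biggest issue facing the country?"
answer = "Bad economy"
if needs_probe(answer):
    print(generate_probe(question, answer))
```

The design mirrors what a good telephone interviewer does: probe neutrally, never lead. Keeping the probe non-suggestive is essential, since a leading follow-up would contaminate the very answers it is meant to complete.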
As in any industry, though, AI developments can also be problematic. The very foundation of public opinion research is talking to people, yet developers have created fully synthetic AI polls and focus groups that interview no actual humans. The “participants” are all AI-created bots designed to mimic real human behavior and responses. Why would this be attractive? Well, it’s a whole lot cheaper than conducting real polls (just don’t think about the environmental impact of spinning up 5,000 AI bots to take one).
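It is worth seeing how little is behind such a “respondent.” The sketch below is an outsider’s guess at the general pattern (a persona description fed to a model), not any company’s actual system; the persona fields and model name are invented for illustration.

```python
# Sketch of a synthetic "respondent": a persona prompt fed to a model.
# An outsider's guess at the general pattern, not any company's actual
# system; the persona fields and model name are invented for illustration.
from openai import OpenAI

client = OpenAI()

persona = {
    "age": 54,
    "state": "Arizona",
    "party": "independent",
    "education": "some college",
}

def bot_answer(question: str) -> str:
    """Have the model answer a poll question 'in character' as the persona."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{
            "role": "user",
            "content": (
                f"You are a survey respondent with this profile: {persona}. "
                f"Answer this poll question in one sentence: {question!r}"
            ),
        }],
    )
    return result.choices[0].message.content.strip()

print(bot_answer("Do you approve or disapprove of the president's job performance?"))
```

Whatever the bot says traces back to that persona description and the model’s training data, which is exactly the provenance problem described next.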
There are other ethical red flags as well. The most fundamental problem is saying “this is what people think” without talking to any real humans. Further, the algorithms and the large language models that produce AI content are only as good as the data fed to them. Rather than “this is what people think,” it’s more like: “This is what a bot army thinks … based on what a developer (likely not a political person) told them … which they’re not about to reveal to you. It’s the secret sauce!”
It is also entirely possible that the bots draw on existing public-opinion data to decide what to say, which would mean other companies’ polling is being used to build the very product the AI company sells. That’s a substantial business-ethics problem.
This isn’t a hypothetical situation for political polls. Last fall, Semafor reported on Aaru, a company whose polls use only AI bots as respondents, under the audacious headline that the bots “predict elections better than humans.” The evidence was a single congressional primary, and the company offered only vague details about its algorithm.
Semafor later reported Aaru’s predictions for the presidential election. Those predictions were no different from most real polls: Most swing states were essentially tied, but slight tilts in four of the seven led to a prediction that Vice President Kamala Harris would win. That was decidedly not “better than humans.”
Even so, expect these polls to become more common in 2026. One perk of AI bots is that they aren’t constrained by geography or by the number of people available, meaning a campaign can fake-poll a small congressional primary or legislative race simply by spinning up more bots. Pollsters have to find actual people, which is difficult in smaller geographies. In a midterm year full of exactly those small races, we might see more interest in this AI capability.
But what happens when voters’ choices aren’t logical? As one example, in Arizona last year a number of voters cast ballots for both President Trump and Sen. Ruben Gallego—two politicians with basically nothing in common. A few months ago, I had the opportunity to ask an AI poll developer how their algorithms would predict seemingly nonsensical votes like this.
They had no answer beyond a vague reference to their algorithm.
I would advise campaigns and media to stay away from AI polling.
Contributing editor Natalie Jackson is a vice president at GQR Research.