LEADING INDICATORS

Make sure actual humans answered that poll you’re using

AI agents cannot be allowed to replace the voice of the people.

Aaru's founders discussing the origins of their AI company on CNBC
CNBC


March 31, 2026, 4:27 p.m.

Public-opinion polling is one of the handful of ways we have any clue what people think of their government and public policy between elections. The goal of polling is to represent the voice of the people—all people.

Other ways people express opinions, including protests, social media posts, letters, and calls to representatives, capture input only from those engaged and serious enough about politics to go out of their way to speak out. Polls seek people out and ask for their opinions, whether they care about an issue or not. That’s actually one of the hardest parts of polling: getting people who don’t care to answer, and then figuring out what they do and don’t care about. It’s important to represent them, too.

What pollsters don’t want to represent is an army of artificial-intelligence agents, which are already beginning to infiltrate the field in various ways. If we aren’t talking to humans, what are we even doing?

That’s not to say AI agents can’t ever stand in for humans. AI is useful for a wide variety of tasks, and the technology for simulating a polling environment is likely useful for some types of work in which we have a reasonable expectation that algorithms can predict human behavior.

Politics is not the place for that. Politics is complex, and the public holds weak, nonexistent, and contradictory opinions on all kinds of issues, especially those they don’t care much about. AI agents know and care more about the assignment than most humans do, and they would struggle to reproduce the specific logic of voting both for President Trump and for an expansion of abortion rights, as so many voters in Arizona and Nevada did in 2024.

It’s also a huge violation of the public trust to let AI agents answer on behalf of humans in political polling. Something that purports to speak for the people in a democracy—which depends on the voice of the people for legitimacy—had damn well better talk to real people.

There are two ways in which AI is infiltrating polling. One is intentional: “polls” of AI agents in a simulated environment rather than of humans. There are not many of these yet, but one company, Aaru, put out political “polling” before the 2024 election to gin up attention for the start-up. Despite claims that AI simulations remove the bias of actual polls, Aaru's AI agents showed a toss-up race, just as real polls did. Under Aaru's predicted outcome, we would be more than a year into President Kamala Harris’s term by now.

At least those 2024 election estimates came with a warning that they were all AI-generated. “Polls” Aaru did for an organization called Heartland Forward came without such a warning. These studies include a methodology statement at the bottom of the page that is vague enough to leave unclear whether the “poll” is 100 percent AI-generated.

Many write-ups of Aaru's studies also fail to indicate that the “poll” didn’t sample real humans. Axios published one on maternal health, initially with no mention that the survey was of AI agents. Now the line reads: “New findings by Aaru, an AI simulation research firm, for Heartland Forward show that a majority of people trust their own doctors and nurses.” Good that it acknowledges the nature of the firm; bad that it still says “a majority of people.” People didn’t say that; AI did.

The second worry is that pollsters who are trying to get opinions from real humans may get some AI agents responding. Some users set these up to answer online surveys in bulk so that they can get cash incentives or other perks without any effort. Pollsters have been fighting bots in online surveys for years, but AI agents are sophisticated enough to bypass the checks in place to stymie them. High-quality firms know the risks and work to detect and filter out AI, as they have with many other issues before. But anyone who uses cheap online poll data does so at their own risk.

I’m not naive enough to think that fully AI-generated research will go away. Aaru is already a billion-dollar company. What I want is clear disclosure of AI use and clear explanations of how researchers keep it out of their studies when respondents are supposed to be human. Journalists need to inform their audiences of both, because there are still many questions about how accurately AI agents can represent humans.

But we should refuse to use political “polls” of AI agents. When the goal is to give people a voice in representative democracy, pollsters and journalists owe it to the public to ensure the voices are human.

Contributing editor Natalie Jackson is a vice president at GQR Research.
