Last week, I wrote about the paradox of public opinion on artificial intelligence: Americans report increasing use of AI tools, but their concerns about the technology remain high. In short, I argued that survey respondents, when asked about concerns over AI, react to what they see in the media about the potential impacts of the technology rather than to what they are actually experiencing.
And what they are using now may not seem worth the risks that the media loves to cover. David Byler of NRG observes that many people simply view AI as an enhanced search function—because that’s how they use it. The other primary way people see AI in action is via customer-service chatbots that appear when they would probably prefer to speak with a real person.
As a search engine, AI is imperfect—we constantly hear the warnings to check the accuracy of the information AI chatbots give you. Depending on how much time they spend online, many people may also see a lot of AI-generated content that seems junky and worthless, which spawned the phrase “AI slop.” These aren’t the best ways to showcase AI’s capabilities, yet they’re what most people see.
Anecdotally, I was on a train last week and overheard a young man explaining to his companion that you can’t trust an AI chatbot: it said the address for their destination was 375 Lexington, but the real address was 369 Lexington. In that case, the address wasn’t far off, and they likely would have found where they needed to go—but it doesn’t help the case for trusting AI. Notably, he was still using the chatbot anyway.
My use of AI is different. I sometimes use Claude to draft R code for statistical analysis. I make a simple request, and Claude gives me very detailed code—more detailed than I asked for. This saves hours of work, and the code has been far more accurate on the first try than what I would write myself. If there is an error, I can tell Claude, and it will fix it within seconds. No more hunting for the comma or parenthesis I missed. My trust in AI is likely much higher than that of the train passenger who got a slightly wrong address. But my use case is far more specialized than his.
Variations in how we use AI are critical to understanding opinions about the technology. “Artificial intelligence” is an incredibly vague term that encompasses a vast, ever-expanding category of technological developments and applications. Chatbots are generally based on large language models (LLMs), which are only one of many tools under the AI umbrella. Generative AI, used to create all that “slop,” is another.
Fears about AI are a completely different matter, related much more closely to headlines about potential job losses, assumptions that AI has caused layoffs, and a rough job market for young college graduates. There are safety concerns for children surrounding what AI directs them to say, do, or believe. Then, there are the questions about data centers. Where are they going up? Will they affect utility rates?
When we ask generic poll questions about AI, we have no idea which aspect of AI’s impacts people are responding to. Most public surveys don’t go into enough detail to give us the answers we really need to understand public sentiment.
It’s always difficult to boil a complex issue down to a digestible, standardized survey question, but asking for generic views on AI is so vague that we have no idea what drives that response. The data becomes useless as anything other than a vibes indicator.
NBC News put out a poll under the headline, “Majority of voters say risks of AI outweigh the benefits.” The story goes on to say “the only topics that were less popular than AI were the Democratic Party and Iran”—decidedly negative vibes. Survey respondents also expressed little faith that either party is better than the other at “dealing with artificial intelligence.”
But why? What aspect of AI are they thinking about when giving these answers? What do people expect the political parties to do to “deal with” AI?
We have no idea. Until we see public polls asking full batteries of questions about how Americans use and experience AI, what specifically worries them, and how usage maps onto those worries, we’re not going to have a full view of how the public is affected by, and adapting to, the new AI world.
Contributing editor Natalie Jackson is a vice president at GQR Research.