Americans are adopting artificial intelligence several times faster than they welcomed the internet into their lives a generation ago. Even AI companies worry that this historically fast pace leaves society almost no time to adapt.
Anthropic is holding back its newest model, Mythos, because it’s too good at finding security vulnerabilities. At the same time, Anthropic is also fighting the U.S. government's designation of the company as a “supply-chain risk” for refusing to allow its models to be used for domestic surveillance and fully autonomous weapons.
OpenAI put out a white paper last week titled “Ideas to Keep People First,” suggesting policies to help integrate AI into every aspect of our lives. The paper argues that “in normal times, the case for letting markets work on their own is strong,” but that governments and institutions need to step in because AI creates “opportunities and risks that existing institutions aren’t equipped to manage.”
In other words, these aren’t normal times.
Beyond the risks to the workforce, the information ecosystem, national security, and cybersecurity, state and local governments are debating how best to build and finance the data-center infrastructure needed to sustain AI growth. The global AI-development race, particularly between the U.S. and China, is fast becoming another flavor of great-power competition.
And that is all before we mention the extreme scenarios in which AI could go awry and take over systems, which brings to mind Skynet for those of us in a certain age bracket. Can Arnold Schwarzenegger save us?
Polling tells us that most Americans are kind of freaked out by it all, even as they increasingly use the technology. In a recent Quinnipiac poll, 55 percent say AI will do more harm than good in their day-to-day lives, while 34 percent say it will do more good than harm. Seventy percent think it means fewer jobs will be available in the future. Only 21 percent of Americans think they can trust AI most or all of the time, yet only 27 percent say they have never used AI tools.
It doesn’t really matter whether any given person actively uses an AI chatbot for work or any other purpose; just about everyone is affected, whether by job concerns, data-center siting, or the takeover of customer-service systems and medical imaging.
Pew Research estimates that 38 percent of Americans live within 5 miles of a data center, but finds no differences in opinions based on proximity to one. A Harvard/MIT poll shows that about 40 percent of Americans support the idea of a data center in their area, while 32 percent oppose it, setting up fierce debates that are starting to surface in political campaigns.
While a handful of large tech companies have conducted layoffs that could be attributed to AI, there is not yet a widespread, cross-industry effect. Gallup reports that 13 percent of workers say they use AI daily in their jobs and another 15 percent use it a few times per week. Around 1 in 5 workers are very or somewhat worried that AI will take over their job completely in the next five years.
The public is reacting to possible changes, not to effects we are seeing quite yet. But the possibilities make people very nervous. Maybe not that many people will ultimately lose their jobs, but what about hiring trends for entry-level white-collar jobs? Young college graduates may have fewer options than they used to, and, according to the Quinnipiac poll, more than 80 percent of Gen Zers think this will happen.
In the Gallup poll of workers, those who frequently use AI represent fewer than 3 in 10 workers, but up to 50 percent of workers now say they use it at least a few times per year, a rapid increase from about 20 percent in mid-2023. Fears of job loss may intensify as use continues to scale up.
If fewer than half the population uses AI at work, many Americans are also unlikely to be familiar with its geopolitical implications, which is to say, we probably aren’t yet getting a full read on where people stand. What we do know is that AI use is skyrocketing, even as few trust it and nerves about its impacts abound.
Contributing editor Natalie Jackson is a vice president at GQR Research.