A Pew Research Center report published in March 2026 pulls together five years of survey work on American views of AI. In Pew's 2024 expert-public comparison, AI experts were 39 points more likely than the general public to expect a positive long-term impact from artificial intelligence. Over the five years Pew tracked, public concern rose. Understanding why is the first step toward closing the divide.
The number worth knowing
Pew Research has tracked American attitudes toward AI since 2021. The March 2026 summary report, drawing on five years of survey data, contains one statistic that cuts through every AI headline you have read this year.
Fifty-six percent of AI experts expect artificial intelligence to have a positive impact over the next 20 years. Seventeen percent of the general public agrees.
[Chart: The separation between AI expert optimism (56%) and general public optimism (17%) on AI's 20-year impact, per Pew Research, March 2026. Source: Pew Research Center, 2026.]

This is not a story about public skepticism. Americans have become more aware of AI over the same period. The share of adults who say they have heard "a lot" about AI climbed from 26% in 2022 to 47% in June 2025. Roughly a third of Americans now interact with AI multiple times a day, up from 22% in February 2024.
The awareness is rising. The optimism is not following.
What four in five workers are actually doing
The same body of research tracks workplace adoption. As of September 2025, 21% of U.S. workers say at least some of their work is done with AI. Put another way, nearly four in five workers do not report AI doing a meaningful share of their work. Pew separately finds that 65% say they do not use AI much or at all in their jobs.
That 21% figure is often cited as evidence that adoption is accelerating, and it is: the share climbed from 16% in 2024 to 21% in 2025, a meaningful single-year jump.
But the read usually stops there. It does not ask what most workers are doing instead.
The answer from the data: they are watching. Fifty percent of U.S. adults say the increased use of AI in daily life makes them feel more concerned than excited. Ten percent are more excited than concerned. Thirty-eight percent feel roughly equal amounts of both.
The numbers describe an audience paying close attention and waiting to be convinced.
The divide does not come from ignorance
The 39-point expert-public split is easy to misread as a knowledge problem. The story goes: experts understand AI deeply, so they see its potential; the public does not understand it, so they worry.
The evidence does not support that story.
Americans who interact with AI more frequently are not automatically more optimistic. Younger adults use AI at higher rates, but their survey responses are not proportionally more positive. The gap between concern and enthusiasm runs across age groups, educational backgrounds, and usage frequencies.
What the data shows is a structural difference in how experts and the public encounter AI. Experts encounter it as a research subject. They control the conditions, define the benchmarks, and choose when to publish results. The public encounters AI as an employment pressure, a workplace decision, a headline about layoffs, or a tool they are told they must adopt before they understand why.
Those two experiences produce different readings of the same technology.
"Fifty percent more concerned than excited describes a workforce watching and waiting. They want someone to make the path legible."
What this means for leaders in Indiana
The U.S. itself ranks 24th globally in AI population diffusion, according to Microsoft AI Economy Institute data cited in the Stanford Human-Centered Artificial Intelligence 2026 AI Index. Indiana's IN AI initiative, launched April 28, 2026, signals that state leaders see adoption as an urgent competitiveness issue, especially for employers and small businesses. A 39-point optimism shortfall in a country already behind on diffusion compounds quickly.
The practical question for a CEO in Indianapolis or Fort Wayne or Terre Haute is not whether their team is skeptical about AI. Most teams are. The question is whether that skepticism produces useful caution or simply slows down decisions that need to be made.
Two kinds of caution
Useful caution looks like this: teams that ask "is this the right tool for this task?" before deploying AI, evaluate outputs critically, and build AI proficiency incrementally rather than all at once. That posture maps to Levels 3 and 4 in The 7 Levels of AI Proficiency framework, the stages where systematic thinking replaces ad hoc tool use.
Paralyzing caution looks like this: organizations where AI is the subject of every leadership meeting but the action item of none. Where everyone has heard a lot about it and no one knows what to do with it.
The Pew data suggests most organizations sit somewhere between these two descriptions. Fifty percent more concerned than excited describes a workforce watching and waiting. They want someone to make the path legible.
That is the opening for leaders who are willing to move first.
Related reading: AI proficiency, literacy, and fluency: what the distinction costs you.
Frequently Asked Questions
What does the Pew Research AI survey cover?
The March 2026 Pew Research Center report synthesizes five years of survey data on American attitudes toward AI. It covers sentiment trends from 2021 to 2025, workplace adoption rates, sector-specific views, demographic differences, and the split between AI expert optimism and general public optimism. The report draws on nationally representative samples of U.S. adults and teens.
Why do AI experts and the public view AI so differently?
The 39-point split reflects different modes of encounter with the technology. Experts engage with AI under controlled research conditions where they define outcomes and benchmarks. The public encounters AI as an employment pressure, a tool imposed by organizations, or a source of news about job displacement. The same technology produces different emotional readings depending on who controls the conditions of use.
What should a CEO do about employee AI skepticism?
Skepticism that produces useful caution is worth preserving. The most productive response is building structured AI proficiency over time, starting with individual contributor competencies and working up through team-level systems thinking. Organizations at Levels 3 and 4 in The 7 Levels of AI Proficiency framework consistently outperform those that treat adoption as a binary switch rather than a gradual capability build.
Sources
- Pew Research Center, "Key Findings About How Americans View Artificial Intelligence," March 12, 2026. pewresearch.org
- Stanford University Human-Centered Artificial Intelligence, "Artificial Intelligence Index Report 2026," 2026. hai.stanford.edu
Find your AI Proficiency level
The free 7 Levels assessment places you across seven stages of AI capability. Under ten minutes. Research-backed scoring.