
The Divide Between What AI Can Do and What It Actually Does

Anthropic studied its own usage data. AI can theoretically handle 94% of computer and math tasks. In practice, it covers 33%. That divide is where proficiency counts.

By Harrison Painter, May 10, 2026

Anthropic built a tool to measure exactly how much AI is changing the labor market, then turned it on its own product. AI can theoretically handle 94% of tasks in computer and math occupations. In actual Claude usage data, it covers 33%. That is the divide, and that divide is where your team's proficiency counts more than anything else you could buy.

The research, published in March 2026 by Anthropic economists Maxim Massenkoff and Peter McCrory, introduces a measure the researchers call "observed exposure": the share of real work tasks AI systems are actually performing, as opposed to what they could theoretically perform. They derived it by matching millions of real Claude conversations against 800 occupations.
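To make the distinction concrete, here is a toy calculation. The task list and capability flags below are invented for illustration; the actual study derived these shares from conversation data, not hand-labeled checklists.

```python
# Toy illustration of theoretical vs. observed exposure for one occupation.
# All tasks and flags below are hypothetical, not from the Anthropic study.

tasks = {
    "write unit tests":           {"ai_capable": True,  "ai_observed": True},
    "refactor legacy module":     {"ai_capable": True,  "ai_observed": True},
    "debug production issue":     {"ai_capable": True,  "ai_observed": False},
    "design system architecture": {"ai_capable": True,  "ai_observed": False},
    "negotiate vendor contract":  {"ai_capable": False, "ai_observed": False},
}

def exposure(tasks, flag):
    """Share of an occupation's tasks with the given flag set."""
    return sum(t[flag] for t in tasks.values()) / len(tasks)

theoretical = exposure(tasks, "ai_capable")   # what AI could do
observed    = exposure(tasks, "ai_observed")  # what usage data shows it doing

print(f"theoretical: {theoretical:.0%}, observed: {observed:.0%}")
# theoretical: 80%, observed: 40%
```

The gap between the two numbers is the unused capability the article is about: tasks AI could handle that no one is asking it to handle.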

The result is the most honest picture of AI's labor market impact we have seen from any major AI company, precisely because it uses their own data and they had every incentive to be careful.

The number that resets the conversation

Theoretical exposure asks what AI could do. Observed exposure asks what AI is actually doing. The first generates headlines. The second tells you what is happening inside organizations today.

For computer programmers, observed coverage sits at 75%. For customer service and data entry roles, observed coverage is also high. For business, finance, and management occupations, the divide between theoretical capability and actual observed use is wide. AI could do far more in these roles than workers and organizations are currently asking it to do.

94% vs 33%

AI's theoretical task coverage versus observed task coverage in computer and math occupations. The distance between those two numbers is where workforce proficiency lives.

Source: Massenkoff and McCrory, Anthropic, 2026

This is worth sitting with. If you are running a company and your team is not closing that divide, someone else's team will.

The hiring signal that deserves attention

The research also surfaces a labor market signal worth watching. After ChatGPT launched, the job-finding rate for workers ages 22 to 25 in highly AI-exposed occupations dropped an estimated 14%. The researchers note this finding is just barely statistically significant. It is a suggestive early signal, not a confirmed trend. But the direction is clear.

Entry-level positions in AI-exposed fields are narrowing as organizations pause junior hiring in roles where AI output is expanding what senior workers can produce. The pipeline into knowledge-work careers is changing shape before the displacement shows up in unemployment numbers.

The researchers name a scenario they call "a Great Recession for white-collar workers." They present this as a possibility, not a prediction. But naming it at all, from Anthropic's own economists, tells you something about where serious researchers are placing the risk.

What the scenario actually describes

Unemployment in the most AI-exposed occupational quartile doubling from 3% to 6% would be a significant disruption to knowledge-work careers, even if it looks modest in percentage terms. We are not there. The current data shows no statistically significant increase in unemployment for highly AI-exposed workers. But the young-worker hiring signal suggests the pipeline is already changing.

What this looks like inside an organization

The Federal Reserve published its own AI adoption monitoring report in April 2026. The picture it shows explains why the Anthropic divide exists.

About 18% of US firms have adopted AI as of year-end 2025. Yet 78% of workers are employed at firms that have adopted AI. The difference between those two numbers reflects a simple reality: large firms adopt first, and large firms employ most people.
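The arithmetic behind that gap is worth a quick sketch. The firm counts and sizes below are invented to show the mechanism, not taken from the Fed report: when adopters skew large, a small share of firms can account for a large share of employment.

```python
# Hypothetical 50-firm economy, constructed so that 18% of firms
# (the large adopters) employ roughly 78% of all workers.
firms = (
    [{"workers": 2000, "adopted": True}] * 9    # 9 large adopting firms
    + [{"workers": 124, "adopted": False}] * 41  # 41 small non-adopters
)

n_firms = len(firms)
adopter_share = sum(f["adopted"] for f in firms) / n_firms

total_workers = sum(f["workers"] for f in firms)
workers_at_adopters = sum(f["workers"] for f in firms if f["adopted"])
worker_share = workers_at_adopters / total_workers

print(f"{adopter_share:.0%} of firms employ {worker_share:.0%} of workers")
# 18% of firms employ 78% of workers
```

Firm-level adoption rates and worker-level exposure are answering different questions, which is why both numbers can be true at once.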

In the financial sector, 30% of firms have adopted AI and 63% of workers report individual generative AI use. In professional services, the figures are 33% and 62%. Manufacturing lags well behind both.

So when Anthropic finds that computer and math occupations sit at 33% observed exposure despite 94% theoretical capability, the tools are not the bottleneck. The fluency to use them fully is still being built across organizations. That is a proficiency problem, not a technology problem.

The level where the divide closes

On the 7 Levels of AI Proficiency, the observed-versus-theoretical divide shows up between Level 2 and Level 4.

A Level 2 worker (AI Capable) knows AI exists and uses it occasionally. Their observed exposure is low. A Level 4 worker (AI Architect) has built AI into their workflow architecture. Their observed exposure approaches the theoretical ceiling for their role.

The workers moving between those levels are Level 3 (AI Fluent): actively building the habits and judgment that close the observed-theoretical divide. They are also the workers whose employers see the first productivity returns.

Deloitte's 2026 State of AI in the Enterprise report finds that 66% of organizations report productivity and efficiency gains from AI adoption, but only 25% have moved 40% or more of their AI pilots into production. The productivity is real. The scaling is not. That is still a proficiency bottleneck, not a technology one.

The organizations crossing the observed-exposure divide are not doing so by deploying more tools. They are doing it by developing people at Level 3 and above who know how to use what already exists.

Related reading: Level 4: AI Architect.

What to do with this

You do not need to worry about AI doing 94% of anyone's job next quarter. The Anthropic data makes clear that the actual observed rate is a fraction of theoretical capability, and it has been for years.

What you do need to understand is where your team sits in that divide.

If your knowledge workers are at Level 1 or Level 2, they are in the theoretical exposure zone. AI could handle parts of their work. They are not asking it to. That is both a productivity issue and a retention risk as employers who develop Level 3 and Level 4 workers pull ahead.

The window is open. The divide is real and it is large. Closing it is a matter of developing people who can fully use the tools already on hand, not of buying more tools.

Frequently Asked Questions

What is "observed exposure" in AI labor market research?

Observed exposure is a measure developed by Anthropic economists that tracks the share of real work tasks AI systems are actually performing, based on real usage data from Claude conversations. It differs from theoretical exposure, which measures what AI could theoretically do. The divide between the two reveals how much AI capability organizations are leaving unused.

Has AI started causing significant job losses?

As of early 2026, Anthropic's research finds no statistically significant increase in unemployment for highly AI-exposed workers. The most notable early signal is a 14% drop in the job-finding rate for workers ages 22-25 in AI-exposed occupations. The researchers describe this as a suggestive early indicator, not a confirmed finding.

Which occupations are most exposed to AI?

Computer programmers show 75% observed AI task coverage, the highest in the Anthropic data. Customer service representatives and data entry roles are also highly exposed. Business, finance, and management roles show high theoretical exposure but lower observed exposure, meaning significant AI capability goes unused in these roles today.

Sources

  • Massenkoff, M. and McCrory, P. (2026). "Labor market impacts of AI: A new measure and early evidence." Anthropic. anthropic.com/research/labor-market-impacts
  • Federal Reserve Board (2026, April 3). "Monitoring AI Adoption in the US Economy." FEDS Notes. federalreserve.gov
  • Fortune (2026, March 6). "Anthropic just mapped out which jobs AI could potentially replace." fortune.com
  • Deloitte (2026). "State of AI in the Enterprise 2026." deloitte.com
  • Yale Insights (2026). "The Real Job Destruction from AI Is Hitting Before Careers Can Start." insights.som.yale.edu
Harrison Painter
AI Business Strategist. Founder, LaunchReady.ai and AI Law Tracker.

Harrison helps teams build AI systems that cut cost and grow revenue. Nearly 20 years of business experience. 2.8M YouTube views. Founder of LaunchReady.ai and the 7 Levels of AI framework.
