What is an AI capability audit?
An AI capability audit is a written diagnostic that measures a team's AI readiness against a published proficiency framework, and produces a written plan for what to do about the findings.
Three things distinguish an AI capability audit from the broader category of AI audits:
- It measures people. Compliance audits measure policy. Governance audits measure decision-making structure. Security audits measure attack surface. A capability audit measures what your team can actually do.
- It requires a published proficiency framework. The framework has to be public, peer-readable, and stable across measurements. A proprietary internal maturity model fails this test, because the score it produces cannot be defended at the board level or compared across companies.
- It produces a written next-step plan. The audit is not a snapshot. It is a plan with a measurement target for what should be true 90 days from now.
The deliverable is typically eight to fifteen pages: team-level distribution, function variance, role-specific findings, a written intervention plan per role, and a measurement plan for the next cycle. It is read by the executive sponsor (usually the CEO, COO, or CHRO), shared with team leads, and presented at the board level when AI workforce strategy is on the agenda.
Why 78% of executives cannot pass an AI audit
Grant Thornton's 2026 AI Impact Survey put a number on the readiness shortfall: 78% of business executives say they could not confidently pass an independent AI governance audit within 90 days.
That is a striking number, but it understates the operational picture, because "AI governance audit" is the loosest of the three audit types. The 78% includes companies with policy ambiguity, decision-making confusion, and unclear ownership of AI systems. It does not measure whether the workforce can do the work.
When you add the workforce-capability layer, which is what a capability audit measures, the percentage goes up, not down. Section AI's 2026 Proficiency Report places 28% of workers as AI Novices and 69% as AI Experimenters. That leaves only 2.7% as Practitioners and 0.08% as Experts. If your company is sampled from the same distribution, roughly 97% of your workforce sits at the bottom two tiers of capability.
So the picture is: 78% of executives cannot defend AI governance, and roughly 97% of workers are below practitioner-level capability. Both numbers are high. Both are addressable. But they require different audits. Governance audits measure policy and decision-making. Capability audits measure people and work.
AI compliance audit vs AI governance audit vs AI capability audit
These three terms get used interchangeably and they should not be. Each measures a different thing and demands a different deliverable.
AI compliance audit
Measures whether the company's AI use complies with regulations: the EU AI Act, the NIST AI Risk Management Framework, ISO 42001, and sector-specific rules like HIPAA for healthcare AI or FCRA for credit decisions. The auditor is typically a law firm or a specialized compliance consultant. The deliverable is a regulatory-readiness report. The remediation is policy and documentation.
AI governance audit
Measures whether the company has clear ownership, decision-making structure, and oversight for AI systems. Who approves a new AI tool? Who reviews model outputs before they go to customers? What is the escalation path when an AI system produces a bad result? The auditor is typically an internal audit team or a Big Four consulting firm. The deliverable is a governance-readiness report. The remediation is committee structure and process.
AI capability audit
Measures whether the workforce can do AI-augmented work at the standard the business needs. The auditor is typically a workforce-capability firm using a published proficiency framework. The deliverable is a capability report with a written intervention plan. The remediation is targeted training, role redefinition, and re-measurement.
A company can pass a compliance audit, fail a governance audit, and not even have a capability audit. The three are independent. A useful AI strategy requires all three.
Compliance audits remediate policy. Governance audits remediate process. Capability audits remediate skill. Most companies have run none of the three.
The five sections of a real AI capability audit
A defensible AI capability audit has five sections. Skip any one and the audit is incomplete.
Section 1. Team-level distribution
A visualization showing where every team member sits across the seven proficiency levels. Function variance and role variance become legible. The reader sees, in a single chart, how the team's capability is shaped.
The chart is built from the pre-engagement assessment data. Each team member has a level placement (1 through 7) plus a confidence interval. The team-level distribution is the aggregate.
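As a rough illustration of that aggregation step, here is a minimal sketch in Python. The record layout and field names are hypothetical; the only assumption carried over from the text is that each team member's assessment yields a level placement from 1 to 7.

```python
from collections import Counter

# Hypothetical pre-engagement assessment results: one level placement (1-7)
# per team member. Real records also carry a confidence interval per placement.
placements = {
    "member_01": 2, "member_02": 1, "member_03": 3, "member_04": 2,
    "member_05": 4, "member_06": 2, "member_07": 1, "member_08": 3,
}

def team_distribution(placements):
    """Share of the team sitting at each of the seven proficiency levels."""
    counts = Counter(placements.values())
    total = len(placements)
    return {level: counts.get(level, 0) / total for level in range(1, 8)}

def team_average(placements):
    """Average level across the team, used for cycle-to-cycle comparison."""
    return sum(placements.values()) / len(placements)

print(team_distribution(placements))       # {1: 0.25, 2: 0.375, 3: 0.25, 4: 0.125, 5: 0.0, ...}
print(round(team_average(placements), 2))  # 2.25
```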
Section 2. Function variance findings
The audit identifies functions clustered at lower levels and functions clustered higher. A common finding: operations and finance teams cluster at Levels 1 to 2, marketing and sales teams cluster at Levels 2 to 3, and the executive layer is stratified (some at Level 1, some at Level 5). The function variance tells you where to invest training first.
Not every function needs to be at Level 4 or above. A team doing routine document production can deliver excellent quarter-end work at Level 2 if the work does not require Level 4 context engineering. The audit calls that out so the executive sponsor does not over-invest in functions that are already adequate.
Section 3. Role-specific findings
Within each function, the audit identifies roles where the level distribution is below the work requirement. Example: senior operations role expects Level 4 (Context Engineer) capability. Three of four senior operations people are at Level 2. The role is bottlenecked by capability.
Role-specific findings are the most useful read for the team lead. Function variance tells the executive sponsor where to invest. Role-specific findings tell the team lead which individuals need which intervention.
Section 4. Written intervention plan
Per role, the specific intervention that would change capability. Cohort training, role redefinition, hire-up, or some combination. The plan specifies the target level distribution post-intervention and the time window.
Example written plan for the operations function: "Senior operations roles: cohort training to move from Level 2 average to Level 4 average over six weeks. Junior operations roles: cohort training to move from Level 1 average to Level 3 average over six weeks. Operations management: pair coaching with a Level 5+ external advisor for one quarter."
Section 5. Measurement plan
What the next assessment cycle should show. A stretch goal and a floor goal. Example: "At week 6, senior operations should be 75% Level 3+ minimum (floor goal), 50% Level 4 (stretch goal). The team-level distribution average should move up by at least 0.7 levels (floor) or 1.0 level (stretch)."
The measurement plan is what makes the audit longitudinal. Without it, the next audit cycle has nothing to compare against and the score remains a snapshot rather than a measurement.
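As a sketch of how a measurement plan becomes checkable at the next cycle, the snippet below encodes the floor and stretch goals from the example above and tests week-6 results against them. The numbers and function name are illustrative, not part of any published instrument.

```python
def check_week6(baseline_avg, week6_avg, senior_level3_plus_share, senior_level4_share):
    """Test week-6 results against the floor and stretch goals named in the plan."""
    return {
        "floor: 75% of senior ops at Level 3+": senior_level3_plus_share >= 0.75,
        "stretch: 50% of senior ops at Level 4": senior_level4_share >= 0.50,
        "floor: team average up >= 0.7 levels": week6_avg - baseline_avg >= 0.7,
        "stretch: team average up >= 1.0 levels": week6_avg - baseline_avg >= 1.0,
    }

# Illustrative week-6 numbers: floor goals met, stretch goals missed.
print(check_week6(baseline_avg=2.0, week6_avg=2.8,
                  senior_level3_plus_share=0.75, senior_level4_share=0.25))
```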
The five sections together are about eight to fifteen pages. Anything less is incomplete; anything more is padding.
Sample audit excerpt (sanitized)
Here is a sanitized excerpt from an actual audit of a 32-person mid-market team. Names, function labels, and exact percentages have been adjusted; the structural shape is real.
Team-level distribution (week 0, pre-engagement)
- Level 1 (AI Aware): 7 of 32 team members (22%)
- Level 2 (Prompt Engineer): 14 of 32 (44%)
- Level 3 (Critical Thinker): 8 of 32 (25%)
- Level 4 (Context Engineer): 3 of 32 (9%)
- Levels 5 through 7: 0 of 32 (0%)
Function variance (week 0)
- Operations (12 people): Level 2.0 average. 50% at Level 1 to 2.
- Finance (6 people): Level 2.5 average. 67% at Level 2 to 3.
- Marketing (8 people): Level 2.8 average. 75% at Level 2 to 3.
- Sales (6 people): Level 2.3 average. 67% at Level 2.
Role-specific finding (excerpt)
The senior operations manager role (3 people) sits at Level 1.7 average. The role's quarterly deliverable (a 40-page operations playbook synthesized from cross-functional inputs) is structurally a Level 4 task: it requires rich-context AI-augmented synthesis, not just prompt engineering. The role is bottlenecked by capability. Recommended intervention: six-week cohort training with target Level 3.5 to 4.0 by week 6.
Measurement plan (week 6 target)
- Team average: Level 2.0 to Level 3.0 (floor) or Level 3.4 (stretch)
- Senior operations: 1.7 to 3.5 (floor) or 4.0 (stretch)
- Function variance: operations and finance averages close the gap to within 0.3 levels of each other
- Level 4+: 9% to 30% (floor) or 40% (stretch)
That excerpt represents about three pages of a fifteen-page audit. The rest is per-role detail and the written intervention plan.
Who delivers an AI capability audit?
Three categories of provider, each with different fit:
Internal HR or L&D team
Possible but limited. Internal teams can run the assessment, gather the data, and run the cohort training. What internal teams cannot do is grade their own work objectively. An internal capability audit lacks the third-party validation that makes the score defensible at the board level. Useful for operational planning, not useful for board reporting or external defensibility (investors, regulators, partners).
Big Four or strategy consulting firms
McKinsey, BCG, Bain, Deloitte, Accenture, EY, PwC, KPMG. Possible but expensive and indirect. These firms typically use proprietary maturity models rather than published proficiency frameworks. The audit reads as a strategy document or a maturity benchmark, not a workforce-capability report. Pricing is $250,000 to $1 million for a six-month engagement. Useful when the audit is part of a broader transformation engagement; less useful when the goal is a focused capability measurement.
Specialized AI workforce firms
LaunchReady, Larridin, Section AI, BetterUp, Hone, and others. Direct fit. These firms use published or semi-published proficiency frameworks and deliver capability-specific audits. Pricing ranges from $19,500 to $96,000 for a single audit cycle, with annual programs at $96,000 to $180,000. Useful when the audit needs to be defensible at the board level, comparable across measurement cycles, and tied to a specific workforce intervention.
The right choice depends on the size of the engagement, the speed required, and whether the audit needs to stand up to external scrutiny. For most mid-market and enterprise teams, the specialized AI workforce firm category is the closest fit; the price-to-value ratio is meaningfully better than the Big Four route, and the framework comparability is better than the internal route.
How an audit feeds the next 90 days
A capability audit does not just sit in a folder. The output is a 90-day intervention plan with weekly milestones.
Days 1 through 7
The audit is read by the executive sponsor, and findings are shared with HR, team leads, and the affected employees. Cohort selection happens here: who is in the first cohort, which role transitions support the audit's recommendations, and what dependencies need to be cleared before training starts.
Days 8 through 49 (six-week cohort)
The intervention runs: weekly 75-minute cohort sessions with up to 15 people, plus between-session work done in the team's actual workflow. Capability changes become observable around week 3: the team's prompts get longer, evaluation gets more rigorous, and business context gets richer.
Days 50 through 56
Post-engagement assessment. Team-level distribution is re-measured using the same instrument as week 0. Identical conditions. Comparability is what makes the proof work.
Days 57 through 90
Written capability audit report. Comparison to baseline. Per-role level changes. Recommendations for the next cohort or the next intervention. Cadence recommendation for the next measurement cycle.
The 90 days are tight. They have to be. AI capability decays without practice. A six-month or twelve-month intervention loses the practice cadence; capability gains made early in the engagement degrade by the time the post-assessment runs. The 90-day window is what holds the gains in place long enough to measure them.
How an audit feeds board-level reporting
A capability audit is the artifact a CEO can take to the board. The board does not want to see "we are spending $500,000 on AI training this year." The board wants to see "we ran a capability audit. Our operations team was 60% Level 2 in April. By June it was 50% Level 4. The next bottleneck is the marketing function."
Three board-level reads come out of a capability audit:
Read 1. Workforce position
Where the company sits relative to industry capability. Is your operations team ahead of, at parity with, or behind comparable teams in your sector? The 7 Levels of AI Proficiency framework is published; comparison across companies becomes possible as more companies adopt the same framework.
Read 2. Investment ROI
What capability change happened from the last cycle's training spend. ROI on AI training has been notoriously hard to measure. A capability audit makes it provable: capability moved from X to Y, costing $Z, in N weeks. That number is the first defensible AI training ROI most boards have seen.
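One hedged way to express that number, with entirely hypothetical figures, is cost per level of capability gained across the team:

```python
def cost_per_level_gained(spend_usd, baseline_avg, post_avg, team_size):
    """Training spend divided by total proficiency levels gained across the team."""
    levels_gained = (post_avg - baseline_avg) * team_size
    return spend_usd / levels_gained if levels_gained > 0 else float("inf")

# Hypothetical example: $60,000 spend, 32 people, team average moves from 2.2 to 3.1.
print(round(cost_per_level_gained(60_000, 2.2, 3.1, 32)))  # ~2083 dollars per level gained
```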
Read 3. Risk exposure
Where the company is bottlenecked by capability shortfall and where that bottleneck shows up in operational risk. A 50% Level 1 finance team during quarter-end close is a risk. A 50% Level 1 internal communications team is a brand exposure. A 50% Level 1 customer-facing operations team is a churn risk.
When a CEO presents capability audit findings to the board, the conversation moves from "are we doing AI?" to "what is the capability state of our workforce, and what changes next." That is a different conversation, and a more useful one.
Related reading: how to measure AI readiness in a team, the 32-point spread between CIO and COO AI-readiness scores.
Frequently asked questions
What is an AI capability audit?
A written diagnostic that measures a team's AI readiness against a published proficiency framework. It distinguishes itself from AI compliance audits (which measure regulatory adherence) and AI governance audits (which measure decision-making structure) by measuring people, not policy. The deliverable is typically eight to fifteen pages.
How is an AI capability audit different from an AI governance audit?
A governance audit measures decision-making structure. A capability audit measures whether the workforce can do AI-augmented work at the standard the business needs. Governance audits remediate process; capability audits remediate skill. A company can pass a governance audit and still have a workforce that cannot deliver AI-augmented work.
Why do 78% of executives lack confidence in passing an AI audit?
Grant Thornton's 2026 AI Impact Survey found that 78% of business executives do not have strong confidence they could pass an independent AI governance audit within 90 days. The shortfall is structural: most companies have rushed AI deployment ahead of governance, ownership, and capability. The 78% number understates the workforce-capability picture.
Who delivers an AI capability audit?
Three categories. Internal HR or L&D teams can run the assessment but cannot grade their own work. Big Four and strategy firms deliver maturity reports rather than capability-specific audits, priced at $250K to $1M. Specialized AI workforce firms (LaunchReady, Larridin, Section AI) deliver capability-specific audits using published proficiency frameworks, priced from $19,500 to $96,000.
How long does an AI capability audit take?
The pre-engagement assessment runs in under ten minutes per team member. The capability audit document takes one to two days of analyst time. The full audit cycle (pre-assessment, audit, six-week intervention, post-assessment, written report) runs 90 days end-to-end.
What is in a written AI capability audit?
Five sections: team-level distribution, function variance findings, role-specific findings, written intervention plan, and a measurement plan for the next cycle. Together this is eight to fifteen pages.
How does an AI capability audit feed board reporting?
The audit produces three reads useful at board level: workforce position (where your company's capability sits relative to industry), investment ROI (what capability change happened from last cycle's training spend), and risk exposure (where capability shortfall maps to operational risk).
Can an AI capability audit be self-administered?
Partially. An internal team can run the assessment, gather the data, and even run the cohort training. What internal teams cannot do is grade their own work objectively. For board-level reporting and external defensibility, an independent third-party audit is required.
Run the audit on your team
The 7 Levels Engagement is the audit cycle: pre-engagement assessment, written capability audit, six-week intervention, post-engagement reassessment, and a written audit report at the end. Book a discovery call to scope the cycle for your team.