AI Governance · Pillar Read

The 7 Domains of AI Governance: A Framework for CEOs

Ninety-six percent of organizations deploy AI. Twenty-six percent have comprehensive AI security governance. The 70-point delta is the operating reality every CEO inherits in 2026. The 7 domains of AI governance are the architecture that closes it.

By Harrison Painter · As of May 10, 2026 · Updated May 12, 2026 · 17 min read

TL;DR

The 7 domains of AI governance are AI Inventory, Data Classification, Vendor Management, Human Oversight, Transparency, Incident Response, and Literacy. The seven compose into a working program. Ninety-six percent of organizations deploy AI; only 26 percent have comprehensive AI security governance. Sixty-three percent of breached organizations have no AI governance policy. Three regulatory clocks land between June 30 and August 2, 2026: the Colorado AI Act, the EU AI Act Articles 26, 50, and 73 (the Article 4 literacy floor has applied since February 2, 2025), and the California AI Transparency Act. Indiana operators face HB 1133 political deepfake disclaimers (in force since spring 2024) plus the EU AI Act's extraterritorial reach. HB 1620 (healthcare AI disclosure) was introduced in the 2025 Indiana General Assembly session, but available bill trackers show it did not become law. Start with Domain 1: AI Inventory. Every other domain depends on it.

Why AI governance is now a CEO problem

The 70-point delta is the architectural fact. Cloud Security Alliance and Google Cloud's 2025 State of AI Security and Governance survey found 96 percent of organizations deploy AI in at least one business function. Twenty-six percent have comprehensive AI security governance. The 70-point gap is deployment ahead of discipline. That is the population the regulator, the insurer, the customer, and the auditor are about to start asking questions of.

The financial picture from IBM's 2025 Cost of a Data Breach Report sharpens it. Sixty-three percent of breached organizations had no AI governance policy or were still developing one. IBM reported that shadow AI incidents added an average $670,000 to breach costs across breached organizations. Sixty-five percent of shadow AI breaches exposed customer personally identifiable information. Ninety-seven percent of organizations with AI-related breaches lacked proper AI access controls.

The velocity numbers from Stanford HAI's 2026 AI Index report close it. Documented AI incidents grew from 233 in 2024 to 362 in 2025, a 55 percent year-over-year increase. Self-rated "excellent" AI incident response capability dropped from 28 percent to 18 percent over the same period. Incident frequency is climbing while organizational capability is dropping. The structural shortfall is widening.

Three regulatory clocks land between June 30 and August 2, 2026. The Colorado AI Act SB 24-205 takes effect June 30, 2026. Under the current published EU AI Act timeline, major remaining obligations including Articles 26, 50, and 73 apply August 2, 2026, though Reuters reported on May 7, 2026 that EU countries and lawmakers reached a provisional deal that could delay enforcement of parts of the high-risk AI regime to December 2, 2027 if finalized. Article 4 AI literacy obligations became applicable February 2, 2025; enforcement mechanics depend on member-state implementation and the broader AI Act enforcement structure. The California AI Transparency Act SB 942 becomes operative August 2, 2026 (extended from January 1 by AB 853 signed October 13, 2025). May 12, 2026 sits 49 days before the Colorado clock and 82 days before the August clock. The runway is short.

This is no longer an IT signoff or a procurement checkbox. It is a CEO question because the regulatory exposure, the breach cost, and the customer trust signal all land at the executive layer. The 7 domains of AI governance are the architecture that turns the question into a program.

The 7 domains of AI governance

This article uses the AI Law Tracker 7-domain structure (at ailawtracker.org/governance) as its operating framework. Each domain is a self-contained operational discipline with its own artifacts, controls, and evidence trail. The seven compose into a working program. None are optional.

Domain 1: AI Inventory and Shadow AI

An AI inventory is a continuously updated catalog of every AI system, tool, model, and AI-powered SaaS application in use within an organization, including both sanctioned and unsanctioned tools. Shadow AI is the use of AI tools, especially consumer LLMs like ChatGPT, Claude, Copilot, Gemini, and Perplexity, by employees, contractors, executives, or business units outside formal IT and security oversight.

The inventory is the foundational artifact. NIST's AI Risk Management Framework places it inside the GOVERN function as a first-step organizational control. The Cybernews 2025 enterprise survey found 93 percent of executives and senior managers personally use shadow AI tools, materially higher than the 59 percent rate among general employees. The CEO who imagines shadow AI as something employees do without permission is not seeing the actual operational picture; the CEO is most likely a shadow AI user too.

Detail: The shadow AI inventory CEO guide.

Domain 2: Data Protection and Classification for AI Tools

Data classification for AI is the practice of categorizing information by sensitivity (typically Public, Internal, Confidential, Restricted, Regulated) and specifying which AI tools may process which categories under which controls. The NIST AI 600-1 Generative AI Profile specifies that organizations should identify personally identifiable information, protected health information, payment card industry data, secrets, and regulated data, and define AI-allowed versus AI-blocked categories before AI touches any data.
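
A minimal sketch of the classification artifact as configuration, in Python. The five tier names follow the scheme above; the allowed-tool mappings and control lists are hypothetical placeholders, not a prescribed standard.

    # Hypothetical data classification policy for AI tools. Tier names follow
    # the 5-tier scheme; tool mappings and controls are illustrative only.
    CLASSIFICATION_POLICY = {
        "Public":       {"ai_allowed": True,  "tools": "any sanctioned tool",          "controls": []},
        "Internal":     {"ai_allowed": True,  "tools": "enterprise-tier tools only",   "controls": ["SSO", "audit logging"]},
        "Confidential": {"ai_allowed": True,  "tools": "tenant-isolated tools w/ DPA", "controls": ["DLP scan", "no training opt-in"]},
        "Restricted":   {"ai_allowed": False, "tools": "none without written exception", "controls": ["DLP block"]},
        "Regulated":    {"ai_allowed": False, "tools": "none (PII/PHI/PCI)",           "controls": ["DLP block", "incident alert"]},
    }

    def may_process(data_tier: str, tool_is_sanctioned: bool) -> bool:
        """Gate check: may this data tier flow into an AI tool at all?"""
        policy = CLASSIFICATION_POLICY[data_tier]
        return tool_is_sanctioned and policy["ai_allowed"]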

Cyberhaven Labs 2025 found 34.8 percent of corporate data employees put into AI tools is sensitive, up from 27.4 percent one year earlier. Source code (18.7 percent of all sensitive data), R&D materials (17.1 percent), and sales and marketing data (10.7 percent) are the most common categories. The IBM 2025 findings are the financial anchor for this domain: 65 percent of shadow AI breaches expose customer PII, and shadow AI incidents run an average $670,000 above standard breach cost.

Detail: Data classification for AI tools.

Domain 3: Vendor Management and Deployer Liability

Vendor management is the cross-functional discipline of evaluating, contracting, and monitoring AI vendors against the deployer's own legal obligations. EU AI Act Article 26 places twelve operational obligations directly on the deployer (the company using the AI), backed by penalties up to 35,000,000 euros or 7 percent of worldwide annual turnover for prohibited-practice violations under Article 99 Tier 1, and up to 15,000,000 euros or 3 percent under Tier 2 for high-risk system non-compliance.

The legal architecture is moving toward deployer accountability. The Massachusetts Attorney General settled for $2.5 million with Earnest Operations LLC on July 10, 2025 over its AI underwriting model that allegedly produced disparate impact on Black, Hispanic, and non-citizen applicants (knockout rule on immigration status plus a Cohort Default Rate variable). The settlement is a warning that the company operating the model carries the exposure regardless of whether the model was built in-house or sourced from a vendor. Industry analysis shows that AI vendor contracts commonly cap supplier liability and rarely warrant regulatory compliance in standard SaaS paper, so deployers should not assume the contract will transfer the exposure.

Detail: AI vendor management and deployer liability.

Domain 4: Human Oversight and Decision Authority

Human oversight is the documented mapping of decision types to oversight modes (human-in-the-loop, human-on-the-loop, human-out-of-the-loop), with named decision authority and tested escalation paths. EU AI Act Article 14 requires that high-risk AI systems be designed so natural persons can effectively oversee them, can detect automation bias, can correctly interpret outputs, and can decide not to use the system or disregard its output. Colorado SB 24-205 requires deployers to provide consumers an opportunity to appeal an adverse consequential decision via human review when technically feasible.

The behavioral research is sharp. METR (Becker et al. 2025) measured experienced developers at 19 percent slower with AI assistance even though they estimated themselves 20 percent faster. Aalto University 2026 (N=246 plus N=452 replication) found AI users overestimated their performance by 4 points while their actual performance rose by 3 points. Ayanna Howard's research, surfaced in Deloitte Tech Trends 2026, shows the same pattern in robotics: stated trust and behavioral trust diverge. Oversight protocols built on self-report measure the wrong thing. Real oversight is a behavioral instrument.

Detail: Human oversight and decision authority.

Domain 5: Transparency and Disclosure

Transparency and disclosure is the set of consumer-facing notification obligations triggered when AI interacts with people, generates content, or substantially influences a consequential decision. The disclosure surface spans EU AI Act Article 50, California SB 942, Colorado SB 24-205, Utah SB 149 (as narrowed by SB 226), Indiana HB 1133, the Texas Election Code, the Tennessee ELVIS Act, and the C2PA Content Credentials standard.
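
One way to make that disclosure surface operational is a touchpoint-to-statute lookup. The sketch below is illustrative only; the touchpoint keys and statute mappings are simplified assumptions, and real applicability turns on facts and counsel review.

    # Illustrative disclosure lookup: customer touchpoint -> statutes that may
    # apply. Mappings are simplified assumptions, not a compliance determination.
    DISCLOSURE_MATRIX = {
        "chatbot":              ["EU AI Act Art. 50", "Utah SB 149 (as narrowed by SB 226)"],
        "ai_generated_image":   ["California SB 942", "C2PA Content Credentials"],
        "ai_hiring_screen":     ["Colorado SB 24-205"],
        "political_ad_with_ai": ["Indiana HB 1133", "Texas Election Code"],
        "synthetic_voice":      ["Tennessee ELVIS Act", "EU AI Act Art. 50"],
    }

    def required_disclosures(touchpoint: str) -> list[str]:
        # Unknown touchpoints default to manual legal review, not to "no disclosure".
        return DISCLOSURE_MATRIX.get(touchpoint, ["review with counsel"])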

The consumer signal is overwhelming on the demand side. Eighty-nine percent of US adults believe companies should always offer the option to speak with a human (SurveyMonkey 2025, n=2,017 US adults). Eighty-four percent would abandon or restrict companies over AI opacity, and 76 percent would switch brands for transparency even at higher cost (Relyance Consumer AI Trust Survey 2025, n=1,000+ US consumers). Disclosure is where trust comes from in 2026, and the regulatory regime is catching up to consumer expectation that already exists.

Detail: Transparency and disclosure requirements CEO guide.

Domain 6: Incident Response and Monitoring

AI incident response is the operational capability to detect, contain, classify, notify, correct, document, and learn from AI incidents. EU AI Act Article 73 requires providers of high-risk AI systems to report serious incidents after becoming aware of them: 15 days as the standard deadline, 10 days where death is involved, and 2 days for widespread infringement or serious critical-infrastructure disruption. The application date is August 2, 2026.

The structural shortfall is the clock mismatch. The IBM 2025 Cost of a Data Breach Report puts the average global breach lifecycle at 241 days (181-day mean time to identify plus 60-day mean time to contain). Shadow-AI breaches run 247 days, six days longer than the global average. EU AI Act Article 73 gives reporters 15 days at the longest tier, 2 days at the tightest. The average breach lifecycle runs roughly 16 times the longest reporting window and over 100 times the tightest. The companies that ship internal AI incident response capability now are not chasing compliance; they are building the function the regulator and the insurer will assume exists 18 months from now.
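
A minimal sketch of the Article 73 reporting clock as code. The three tiers follow the deadlines above; the day-counting and trigger definitions are simplified assumptions, and the actual clock mechanics (awareness, member-state procedure) belong to counsel.

    # Sketch of the EU AI Act Article 73 reporting clock: 15 days standard,
    # 10 days where death is involved, 2 days for widespread infringement or
    # serious critical-infrastructure disruption. Simplified; not legal advice.
    from datetime import date, timedelta

    def article_73_deadline(aware_on: date, death_involved: bool,
                            widespread_or_critical: bool) -> date:
        if widespread_or_critical:
            days = 2          # tightest tier
        elif death_involved:
            days = 10
        else:
            days = 15         # standard tier
        return aware_on + timedelta(days=days)

    # Example: awareness on August 10, 2026, standard tier -> August 25, 2026.
    print(article_73_deadline(date(2026, 8, 10), False, False))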

Detail: AI incident response and monitoring.

Domain 7: Literacy and Training

AI literacy and training is the capability of staff and other persons dealing with AI systems to recognize, evaluate, and operate them at a sufficient level for the role. EU AI Act Article 4 became applicable February 2, 2025 and creates a legal floor for AI literacy across providers and deployers, including non-EU companies whose AI outputs are used in the EU. Enforcement mechanics depend on member-state implementation and the broader AI Act enforcement structure. Penalties run up to 15,000,000 euros or 3 percent of global turnover for general non-compliance.

The Deloitte State of AI in the Enterprise 2026 finding is the diagnostic. Eighty-two percent of enterprise leaders say their organization provides some form of AI training. Fifty-nine percent still report an AI skills shortfall. Most current training is fragmented, optional, and disconnected from job tasks. Buying literacy and calling it readiness is a category error; the EU AI Act now also makes it a compliance error.

Detail: AI literacy and training for employees.

How the 7 domains compose into operational discipline

The 7 domains are not 7 silos. They are 7 facets of one operational reality. Each domain depends on the ones above it; each domain feeds the ones below. The dependency map runs in a specific order, and the order matters.

  1. Inventory comes first. Every other domain depends on knowing what AI you have. You cannot classify data flowing into AI you have not catalogued. You cannot manage vendors you have not identified. You cannot oversee decisions made by tools nobody approved. You cannot disclose what you do not know is in production. You cannot respond to incidents in systems that are off your map. You cannot train staff on tools you do not know they are using.
  2. Data classification follows inventory. Once the inventory exists, classification answers the question what data is allowed to flow into which tool. The 5-tier scheme (Public, Internal, Confidential, Restricted, Regulated) maps to specific AI tool tiers and specific control requirements. Without classification, the inventory is a list. With it, the inventory becomes a control surface.
  3. Vendor management closes the contract layer. Once you know what AI you have and what data flows into it, vendor contracts can be written or renegotiated to match the exposure. The 12-item due diligence checklist (training data restrictions, data residency, tenant isolation, audit log access, DPA and BAA availability, breach notification, indemnification, certifications, regulatory change termination, liability cap, sub-processor disclosure) is the artifact.
  4. Human oversight runs on top of the contracted vendor surface. The decision authority matrix maps decision stakes (low, medium, high) and reversibility (reversible, costly, irreversible) to oversight modes (HOOTL, HOTL, HITL); a minimal mapping is sketched after this list. Anything in the high-stakes-irreversible cell carries Colorado AI Act consequential-decision exposure and EU AI Act Article 14 oversight requirements.
  5. Transparency surfaces the program to consumers. The disclosure compliance matrix maps customer touchpoints (chatbot, AI-generated email, AI hiring screen, AI-generated marketing image, IVR voice, healthcare patient communication) to the specific statute that applies and the disclosure language that satisfies it.
  6. Incident response is the last-resort layer. When the inventory missed something, the classification was applied incorrectly, the vendor contract did not include audit log access, the human oversight reviewer rubber-stamped the AI output, or the disclosure failed to meet the regulator's read, an incident occurs. The 7-step playbook (Detection, Containment, Severity, Notification, Correction, Documentation, Post-mortem) is the operational answer.
  7. Literacy and training sits underneath all six. Every other domain requires individuals with the proficiency to design, operate, and maintain it. The AI Governance Maturity Model treats training as a parallel domain because it composes with all six others, not because it is independent.
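
A minimal sketch of the decision authority matrix from item 4 as a lookup table. The cell assignments are illustrative defaults, not the framework's prescribed mapping; a real matrix gets set per department with documented rationale.

    # Illustrative 3-by-3 decision authority matrix:
    # (stakes, reversibility) -> oversight mode. Defaults are assumptions.
    OVERSIGHT = {
        ("low",    "reversible"):   "HOOTL",  # human-out-of-the-loop
        ("low",    "costly"):       "HOTL",   # human-on-the-loop
        ("low",    "irreversible"): "HITL",   # human-in-the-loop
        ("medium", "reversible"):   "HOTL",
        ("medium", "costly"):       "HOTL",
        ("medium", "irreversible"): "HITL",
        ("high",   "reversible"):   "HOTL",
        ("high",   "costly"):       "HITL",
        ("high",   "irreversible"): "HITL",   # consequential-decision exposure
    }

    def oversight_mode(stakes: str, reversibility: str) -> str:
        return OVERSIGHT[(stakes, reversibility)]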

The chain-strength rule applies. An organization with 6 domains at Level 3 maturity and 1 domain at Level 0 is a Level 0 organization for breach-cost, insurance, and regulatory-risk purposes. The IBM 2025 finding that 97 percent of organizations experiencing AI-related breaches lacked AI access controls (a failure spanning Vendor Management and Human Oversight) makes the chain-strength model load-bearing in the empirical data.

The AI Governance Maturity Model

The 7 domains form the rows of the AI Governance Maturity Model. The 5 levels (Ungoverned, Aware, Structured, Managed, Strategic) form the columns. Each cell describes what a given domain at a given level looks like in operational terms.

  • Level 0: Ungoverned. No policy. No designated owner. Employees use AI tools at their discretion with no oversight. Shadow AI is the default mode of adoption.
  • Level 1: Aware. A basic AI use policy exists. People know rules exist. Enforcement is informal, ad hoc, and depends on which manager you report to.
  • Level 2: Structured. Formal processes are documented. Approved tools are defined and enforced. Data classification rules apply and are auditable. A designated AI governance owner exists with named accountability.
  • Level 3: Managed. Active monitoring is operational. Vendor oversight is continuous, not one-time. Incident response has been tested. Metrics are tracked and reviewed.
  • Level 4: Strategic. Governance functions as a competitive advantage. The architecture adapts continuously. Governance enables faster, safer expansion of AI capability rather than slowing it down.

The aggregation rule: org-level maturity equals the lowest domain score (chain-strength model). Most organizations cluster at Level 0 to Level 1 in 2026. Healthcare runs higher, pulled to Level 1 to Level 2 by HIPAA. Financial services runs higher still, pulled to Level 2 by existing model-risk-management discipline. Per-stage advancement benchmarks: Level 0 to Level 1 takes 4 to 6 weeks at under $5,000; Level 1 to Level 2 takes 2 to 3 months at $15,000 to $50,000; Level 2 to Level 3 takes 6 to 12 months at $50,000 to $200,000; Level 3 to Level 4 takes 12 to 18 months at $200,000 or more per year ongoing.
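
The aggregation rule is simple enough to state as code. A minimal sketch, with hypothetical sample scores matching the six-at-Level-3, one-at-Level-0 example above:

    # Chain-strength model: org-level maturity is the minimum across the
    # 7 domain scores. The scores below are hypothetical sample values.
    DOMAIN_SCORES = {
        "inventory": 3, "data_classification": 3, "vendor_management": 3,
        "human_oversight": 3, "transparency": 3, "incident_response": 3,
        "literacy": 0,
    }

    org_maturity = min(DOMAIN_SCORES.values())  # -> 0: a Level 0 organization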

Detail and the 7-domain by 5-level self-assessment matrix: The AI Governance Maturity Model.

Indiana operators: what is in force right now

Indiana has one AI-specific private-sector law clearly in force today (HB 1133, political deepfake disclaimers, spring 2024). A healthcare payer downcoding law (HB 1271) becomes effective July 1, 2026, 50 days from May 12. HB 1620 was introduced in 2025 but did not become law. The 80 percent of Indiana CEOs who feel behind on AI (Bryce Carpenter, COO Conexus Indiana, on-record interview April 2026) are operating against a regulatory surface that does not pause for them.

Indiana HB 1620 (introduced 2025, did not become law). The 2025 Indiana General Assembly considered a bill that would have required healthcare providers and insurers to disclose certain uses of AI to patients and insureds. Available bill trackers indicate it did not pass and should not be treated as currently in force. Indiana healthcare and insurance operators should monitor for any reintroduction in the 2026 session and confirm current obligations with counsel. The compliance posture for Indiana healthcare AI today rests on general patient-rights doctrine, professional licensing requirements, and federal frameworks (HIPAA, FDA SaMD, ONC HTI-1) rather than a dedicated state AI disclosure statute.

Indiana HB 1133 (effective spring 2024). Authored by Rep. Julie Olthoff. Required disclaimer language on political ads containing AI-altered or AI-generated media: "Elements of this media have been digitally altered or artificially generated." Civil action available to candidates against those who paid for or sponsored unlabeled fabricated media.

Indiana HB 1271 (effective July 1, 2026). Establishes IC 27-1-52 governing downcoding of health benefits claims. Prohibits insurers from using AI as the sole basis to downcode a medical-necessity claim unless a human employee or contractor has first reviewed the medical record. Mandates disclosure when AI is used in adverse prior authorization or downcoding determinations. It is Indiana's first AI-specific statutory restriction with substantive operational impact, and it reduces to a vendor selection question (does the AI vendor support a documented human-review workflow that satisfies HB 1271?). Many do not.

The verified employer roster carries the Indiana lens across the cluster. Healthcare and life sciences: Eli Lilly, Roche Diagnostics, IU Health, Community Health Network, Eskenazi, Parkview. Lilly and IU signed a five-year $40 million agreement in December 2025 to build AI-enabled clinical trial infrastructure. Financial services: OneAmerica, Old National, Salin, Centier. Manufacturing IP exposure: Cummins, Allison Transmission, Subaru of Indiana, Toyota Material Handling.

Indiana's state posture sits at Indiana Management Performance Hub at in.gov/mph/AI. MPH operates a 3-tier AI risk classification (High, Moderate, Low Risk) anchored to the NIST AI Risk Management Framework, overseen by the Office of the Chief Data Officer and Chief Privacy Officer. Governor Braun's IN AI Initiative (announced April 28, 2026, executed via the Central Indiana Corporate Partnership) is private-sector-adoption-focused, not standards-setting. Indiana Attorney General Todd Rokita has not yet brought a public AI-specific deception action; compare to Massachusetts ($2.5 million Earnest settlement, July 2025), Texas (multiple AG actions on deceptive AI marketing), New York (active investigations into AI-driven hiring), and California (PAGA-driven litigation against Workday). Indiana CDPA effective January 1, 2026 with $7,500-per-incident fines. Indiana has not yet adopted the NAIC Model Bulletin on AI; 24 or more states have.

The federal and multistate landscape

EU AI Act (extraterritorial reach). Article 2 applies to non-EU providers and deployers whose AI outputs are used in the EU, regardless of where the company is established. Article 4 (AI literacy) became applicable February 2, 2025; enforcement mechanics depend on member-state implementation. Article 26 (deployer obligations, twelve operational requirements) begins direct effect for high-risk system deployers August 2, 2026. Article 50 (transparency, chatbot and deepfake disclosure) begins August 2, 2026. Article 73 (serious incident reporting by providers of high-risk AI systems after becoming aware of an incident: 15 days standard, 10 days where death is involved, 2 days for widespread infringement or serious critical-infrastructure disruption) begins August 2, 2026. Article 99 penalties run up to 35,000,000 euros or 7 percent of global turnover for prohibited-practice violations, 15,000,000 euros or 3 percent for high-risk non-compliance, and 7,500,000 euros or 1 percent for incorrect or misleading information. SMEs face the lower of the two figures, not the higher.

Colorado AI Act (SB 24-205). Effective June 30, 2026 (delayed from February 1 by SB 25B-004 in the August 2025 special session). Deployers of high-risk AI systems making consequential decisions in employment, education, housing, financial services, healthcare, insurance, government services, and legal services must implement a risk management policy, conduct an initial impact assessment within 90 days and annually after, disclose AI use to consumers, and report adverse algorithmic discrimination findings to the Colorado Attorney General within 90 days. The Colorado AG has exclusive enforcement authority with a 60-day cure period.

California AI Transparency Act (SB 942). Operative August 2, 2026 (extended from January 1 by AB 853, signed October 13, 2025, to align with EU AI Act Article 50). Applies to Covered Providers with over 1,000,000 monthly users; text-only systems are excluded. Requires free public AI content detection tool, manifest disclosures users can opt into, and latent metadata embedded in every output (provider name, AI system details, timestamp, unique identifier). Civil penalties up to $5,000 per day per violation. California AG plus city attorneys plus county counsel enforce.

Utah SB 149 (Utah AI Policy Act). Effective May 1, 2024; sunset extended to July 1, 2027 by SB 332. Original chatbot rule required disclosure when asked. SB 226 (2025) narrowed the rule: disclosure on request requires a "clear and unambiguous" request; required disclosure for regulated occupations is now limited to "high-risk artificial intelligence interactions" (sensitive personal data including health, financial, biometric; or financial, legal, medical, or mental health advice).

Federal enforcement (in flux). The FTC launched Operation AI Comply September 25, 2024, with five initial settlements (DoNotPay, Rytr, Ascend Ecom, Ecommerce Empire Builders, FBA Machine). The Rytr final consent order was reopened and set aside December 2025 in response to the Trump Administration's AI Executive Order and America's AI Action Plan; the FTC determined the complaint failed to satisfy the legal requirements of the FTC Act and unduly burdened AI innovation. Cleaner federal enforcement anchors today: SEC v. Presto Automation (January 14, 2025, AI capability misrepresentation under the AI-washing doctrine) and the Massachusetts AG Earnest settlement ($2.5 million, July 10, 2025).

Per the Tracker at ailawtracker.org/bills, all 50 states introduced at least one AI-related bill in 2025; 145 were enacted out of 1,208 introduced. The federal preemption fight remains live: Indiana AG Rokita joined a 36-state bipartisan coalition in November 2025 opposing federal preemption of state AI laws.

How The 7 Levels of AI Proficiency framework integrates

The AI Governance Maturity Model is org-level. The 7 Levels of AI Proficiency is individual-level. Both are required. One without the other produces predictable failure modes.

Level 1 Cadet through Level 3 Lieutenant are recognition and fluency. A team at Level 1 to Level 2 can satisfy EU AI Act Article 4 (AI literacy floor) but cannot build governance. A Level 3 (Lieutenant) leader can use AI tools well personally but cannot yet structure an organizational discovery process for the AI inventory.

Level 4 Commander is the floor for governance work. The Commander is the operator who can specify, in plain English to the AI tool and to the team, what an AI output needs to contain for a human to review it efficiently. A shadow AI inventory is a context-engineering problem at organizational scale; Level 4 is the floor. Data classification for AI is a Level 4 baseline competency. Disclosure design is a Level 4 surface (the 89 percent of consumers who want a human option are not asking for legal boilerplate; they are asking for honesty, and the Commander knows the difference).

Level 5 Captain designs the architecture. The Captain decides which categories of AI vendors the company will buy, sets the policy on training-data opt-in, defines the data residency floor, and writes the contract template clauses every vendor selection must clear. The Captain designs the decision authority matrix, the incident response system, and the per-tier control requirements. The 12-item vendor due diligence checklist is a Level 5 artifact.

Level 6 Admiral runs the cross-functional governance discipline. The Admiral coordinates legal review with security review with procurement review with the operating-team's intended-use review. An AI vendor decision in 2026 is no longer a procurement signature; it is a four-function decision. The Admiral is the role that runs that meeting and owns the contract clause negotiations against the vendor's standard paper. Cross-functional disclosure architecture (chatbot copy matches the privacy policy matches the JSON-LD on the marketing page matches the C2PA metadata embedded in the product image) is a Level 6 surface. Cross-functional incident response (legal, security, engineering, customer success, communications, finance, insurance) is a Level 6 surface.

Level 7 Mission Director sets company-wide AI strategy, evaluates AI investments at the board level, and decides whether disclosure is a competitive advantage or a compliance overhead. The answer changes the brand.

Pairing rules. Org maturity Level 0 to 1 (Ungoverned to Aware) needs Level 3 to Level 4 individuals (Lieutenant to Commander) on the team building governance. Org maturity Level 2 (Structured) needs Level 4 (Commander) leading the build and Level 3 (Lieutenant) operating it. Org maturity Level 3 (Managed) needs Level 5 (Captain) designing it and Level 4 (Commander) maintaining it. Org maturity Level 4 (Strategic) needs Level 5 to Level 6 (Captain to Admiral) driving the continuous improvement loop.

The pairing catches three failure modes. An organization tries to advance to Level 2 governance without a Level 3-or-higher individual on staff: the policy gets written but never operationalized. An organization reaches Level 3 governance but team median sits at Level 2: monitoring runs but findings never get acted on. An organization claims Level 4 governance with no Level 5-or-higher individuals: governance is performance theater, not adaptive capability.

The cleanest path is two assessments stacked: take The 7 Levels of AI Proficiency assessment at assess.launchready.ai to see where individual capability sits, then use the AI Governance Maturity Model to see where governance sits. The assessment baselines the team. The maturity model baselines the organization. Both are required to plan the work.

Where to start: the sequencing recommendation

Start with Domain 1: AI Inventory and Shadow AI. Every other domain depends on it. The order is causal.

  1. Week 1 to 4: Stand up the AI inventory. Run the 7-step inventory protocol from the shadow AI inventory CEO guide: no-punishment survey, 90-day DNS plus SaaS audit, browser visibility tooling, procurement card audit, sanctioned-tool catalog, 4-tier risk classification, 90-day re-inventory cadence. Output: a current AI inventory the leadership team can read in 10 minutes (a minimal inventory record is sketched after this list).
  2. Week 5 to 8: Stand up data classification. Define the 5-tier scheme (Public, Internal, Confidential, Restricted, Regulated). Map the inventory tools from Step 1 to the tiers. Stand up DLP scanning at the Tier 4 and Tier 5 boundary. Sign DPAs and BAAs where required. Output: every AI tool has a tier; every tier has a control set; staff have a one-page quick-reference card. Detail at data classification for AI tools.
  3. Week 9 to 12: Stand up vendor management. Apply the 12-item due diligence checklist to existing AI vendors (typically the top 5 by spend). Renegotiate contracts where the standard paper falls short. Sign new vendor contracts only against the checklist. Indiana counsel review on contract template language. Detail at AI vendor management and deployer liability.
  4. Week 13 to 16: Stand up human oversight. Build the 3-by-3 decision authority matrix for the highest-stakes departments first (typically HR, Finance, Customer Success, Healthcare or Legal where applicable). Map departmental decision types to oversight modes. Anything that crosses the Colorado AI Act consequential-decision threshold gets HITL with documented rationale. Detail at human oversight and decision authority.
  5. Week 17 to 20: Stand up transparency. Audit every customer-facing AI surface against the disclosure compliance matrix. SB 24-205 applies to Colorado consumers starting June 30, 2026; SB 942 applies to California covered providers starting August 2, 2026; Article 50 applies to EU consumers starting August 2, 2026; Indiana HB 1133 has covered political ads with AI-altered media since spring 2024. Use the 7 sample disclosure language blocks at the transparency and disclosure CEO guide.
  6. Week 21 to 24: Stand up incident response. Build the 7-step playbook (Detection, Containment, Severity, Notification, Correction, Documentation, Post-mortem). Stand up the severity classification matrix. Pre-build regulator notification packages for EU AI Act Article 73, FDA SaMD where applicable, state AGs, and the state insurance regulator. Run a tabletop exercise. Detail at AI incident response and monitoring.
  7. Week 25 onward: Stand up training as an annual program. Baseline the team with the assessment at assess.launchready.ai. Map roles to required levels. Run a structured program with pre-measurement, post-measurement, and Day-90 retention checks. The forgetting curve eats 90 percent of single-workshop content within seven days; the structure (12-month annual program) is what defeats the curve, not the content density of any single workshop. Detail at AI literacy and training for employees.
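
A minimal sketch of an inventory record from step 1, assuming hypothetical field names; the point is that every tool carries a named owner, a risk tier, a data tier, and a re-inventory date.

    # Hypothetical AI inventory record. Field names and tier labels are
    # illustrative; align them to your own 4-tier and 5-tier schemes.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class AIInventoryRecord:
        tool: str            # e.g. an AI-powered SaaS application
        owner: str           # named accountable person
        sanctioned: bool     # found via survey, DNS/SaaS audit, or p-card audit
        risk_tier: str       # label from the 4-tier risk classification
        data_tier: str       # label from the 5-tier classification scheme
        last_reviewed: date

        def review_due(self, today: date) -> bool:
            # Enforces the 90-day re-inventory cadence from the protocol.
            return today - self.last_reviewed > timedelta(days=90)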

The 24-week build window is aggressive for a mid-market company starting from Level 0. Most organizations will run it as an 8 to 12-month project with overlapping work streams. The point is the order: inventory first, classification second, vendor third, oversight fourth, transparency fifth, incident response sixth, training as the underlying year-round discipline. Skipping the order produces a program that looks complete on paper and fails in practice.

For an existing piece on the proof point this work creates, see 78 percent of CEOs cannot defend an AI audit and the measurement methodology at how to measure AI readiness in a team.

Frequently asked questions

What are the 7 domains of AI governance?

The 7 domains of AI governance are AI Inventory and Shadow AI, Data Protection and Classification, Vendor Management and Deployer Liability, Human Oversight and Decision Authority, Transparency and Disclosure, Incident Response and Monitoring, and Literacy and Training. The seven compose into a working AI governance program. Each domain has specific operational artifacts (an AI inventory, a data classification scheme, a vendor due diligence checklist, a decision authority matrix, a disclosure compliance matrix, an incident response playbook, a role-mapped training curriculum) that produce evidence an auditor or regulator can verify.

What is AI governance?

AI governance is the operational discipline of controlling and overseeing how an organization deploys, uses, and depends on artificial intelligence systems. It covers policy, process, technology controls, training, and accountability. The Cloud Security Alliance and Google Cloud 2025 State of AI Security and Governance found that 96 percent of organizations deploy AI while only 26 percent have comprehensive AI security governance. AI governance is what closes that 70-point delta.

Why does my company need an AI governance program?

Three reasons. Risk: 63 percent of breached organizations in the IBM 2025 Cost of a Data Breach Report had no AI governance policy, and shadow AI breaches added $670,000 to incident cost. Regulation: the EU AI Act Article 26 deployer obligations, Colorado SB 24-205, California SB 942, and the EU AI Act Article 73 incident reporting clock all carry direct effect during 2026. Competitive position: enterprise customers, EU buyers, and cyber insurers now ask AI governance questions during procurement and renewal.

Who owns AI governance in a company?

AI governance is a cross-functional discipline. The accountable owner is typically the CEO, COO, CIO, or General Counsel. Operational ownership distributes across IT security (Inventory plus Vendor Management), legal and compliance (Vendor Management plus Transparency plus Incident Response), HR (Literacy and Training plus Human Oversight policy), product and engineering (Data Classification plus Human Oversight implementation), and the executive sponsor. McKinsey's State of AI 2025 found 28 percent of AI-using organizations have CEO-level AI governance ownership and 17 percent have board-level ownership.

What is the difference between The 7 Levels of AI Proficiency and the AI Governance Maturity Model?

The 7 Levels of AI Proficiency is an individual-level framework with seven levels (Cadet, Ensign, Lieutenant, Commander, Captain, Admiral, Mission Director). The AI Governance Maturity Model is an organization-level framework with five levels (Ungoverned, Aware, Structured, Managed, Strategic). Both are required. An organization advancing to governance Level 2 needs Level 3 to Level 4 individuals on staff to build it. An organization at governance Level 3 needs Level 5 individuals to sustain it. The two compose, they do not substitute.

Where should we start building AI governance?

Start with AI Inventory and Shadow AI (Domain 1). Every other domain depends on knowing what AI you have. You cannot classify data flowing into AI you have not catalogued. You cannot manage vendors you have not identified. You cannot oversee decisions made by tools nobody approved. You cannot disclose what you do not know is in production. You cannot respond to incidents in systems that are off your map. You cannot train staff on tools you do not know they are using. The inventory is the foundational artifact.

Does the EU AI Act apply to my Indiana company?

Possibly yes. EU AI Act Article 2 applies to non-EU providers and deployers whose AI outputs are used in the EU, regardless of where the company is established. An Indiana SaaS vendor whose recommendation engine surfaces output to a German consumer is in scope. A Texas HR-tech platform screening resumes for a French employer is in scope. Indiana exports approximately $43 billion per year, with Germany, the United Kingdom, and France among the top European trading partners for the state's pharmaceutical, automotive, and aerospace manufacturers. Many Indiana mid-market companies have direct EU exposure.

What Indiana AI laws are in force right now?

Indiana has one AI-specific private-sector law clearly in force today: HB 1133 (effective 2024), which requires disclaimer language on certain political ads containing AI-altered or AI-generated media. HB 1271 becomes effective July 1, 2026 and restricts insurers from using AI as the sole basis to downcode a medical-necessity claim without prior human review. HB 1620, a 2025 healthcare and insurer AI disclosure bill, should not be treated as active law; available bill trackers show it did not become law.

What is the 96 percent versus 26 percent AI governance delta?

The Cloud Security Alliance and Google Cloud 2025 State of AI Security and Governance survey found 96 percent of organizations deploy AI in at least one business function while only 26 percent have comprehensive AI security governance in place. The 70-point delta is the structural shortfall the AI governance program closes. The IBM 2025 Cost of a Data Breach Report found 63 percent of breached organizations either have no AI governance policy or one still in development.

What is the AI Governance Maturity Model?

The AI Governance Maturity Model is a 5-stage descriptive and diagnostic instrument that places organizations from Ungoverned (Level 0) through Aware (Level 1), Structured (Level 2), Managed (Level 3), to Strategic (Level 4). The 7 domains of AI governance form the rows of the maturity matrix; the 5 levels form the columns. Aggregation rule: org-level maturity equals the lowest domain score (chain-strength model). Most organizations cluster at Level 0 to Level 1 in 2026.

How long does it take to build an AI governance program?

Industry consensus places foundational program build (Level 0 to Level 2) at 4 to 8 months, depending on existing infrastructure. Reaching Level 3 (Managed) typically takes 12 to 24 months of sustained investment. Reaching Level 4 (Strategic) typically takes 2 to 3 years and requires 1 to 3 dedicated AI governance FTE. The pace is governed less by implementation difficulty than by leadership attention and organizational change capacity. The Liminal AI Enterprise AI Governance Implementation Guide and Promethium 2025 corroborate these benchmarks.

What is the consequence of not having AI governance?

Three measurable consequences. Financial: shadow AI incidents added an average $670,000 to breach costs per IBM 2025. Regulatory: EU AI Act penalties run up to 35,000,000 euros or 7 percent of global turnover for prohibited-practice violations and 15,000,000 euros or 3 percent for high-risk system non-compliance. Operational: AI incidents grew 55 percent year over year in 2025 (233 to 362 documented per Stanford HAI 2026), while self-rated "excellent" AI incident response capability dropped from 28 percent to 18 percent. The capability-versus-incident split is widening.

Sources

  1. AI Law Tracker. "AI Governance for Indiana Companies." Canonical 7-domain framework reference. ailawtracker.org/governance
  2. AI Law Tracker. "Bills database." Live federal and state AI bills as they move. ailawtracker.org/bills
  3. IBM Security and Ponemon Institute. Cost of a Data Breach Report 2025. Sample: 600 organizations globally; data collected March 2024 through February 2025. ibm.com/reports/data-breach
  4. Cloud Security Alliance and Google Cloud. State of AI Security and Governance 2025. Cited via Beyondscale CISO Roadmap. beyondscale.tech AI Security Maturity Model
  5. Stanford HAI. 2026 AI Index Report. AI incident volume, response capability self-rating, public opinion. hai.stanford.edu/ai-index/2026-ai-index-report
  6. EU AI Act Article 4 (AI Literacy). artificialintelligenceact.eu/article/4
  7. EU AI Act Article 26 (Deployer Obligations). artificialintelligenceact.eu/article/26
  8. EU AI Act Article 50 (Transparency Obligations). artificialintelligenceact.eu/article/50
  9. EU AI Act Article 73 (Reporting of Serious Incidents). artificialintelligenceact.eu/article/73
  10. EU AI Act Article 99 (Penalties). artificialintelligenceact.eu/article/99
  11. Colorado General Assembly. SB 24-205 Consumer Protections for Artificial Intelligence. leg.colorado.gov/bills/sb24-205
  12. California Legislative Information. SB 942 California AI Transparency Act. leginfo.legislature.ca.gov SB 942
  13. Indiana General Assembly. HB 1620 Healthcare AI Disclosure (introduced 2025; available bill trackers show it did not become law). iga.in.gov HB 1620
  14. Indiana General Assembly. HB 1271 Healthcare AI Downcoding Limits (effective July 1, 2026). iga.in.gov HB 1271
  15. Indiana House Republicans. HB 1133 Olthoff Political Deepfake Disclaimers (effective spring 2024). indianahouserepublicans.com HB 1133
  16. NIST AI Risk Management Framework 1.0 (NIST AI 100-1, January 2023). nvlpubs.nist.gov NIST AI 100-1
  17. NIST AI 600-1 Generative AI Profile (July 2024). nvlpubs.nist.gov NIST AI 600-1
  18. Massachusetts Attorney General. $2.5M Settlement with Earnest Operations (July 10, 2025). mass.gov AG Campbell Earnest settlement
  19. Cyberhaven Labs. 2025 AI Adoption and Risk Report. cyberhaven.com 2025 AI Adoption and Risk Report
  20. Indiana Management Performance Hub. State of Indiana AI Policy and Guidance. in.gov/mph/AI
  21. Indiana University News. Lilly and IU sign $40M clinical trials and research agreement (December 2025). news.iu.edu Lilly IU $40M agreement
  22. EDUCAUSE. The Impact of AI on Work in Higher Education (January 2026). N=1,960 across 1,800-plus US institutions. educause.edu 2026 AI in Higher Ed
  23. Deloitte. State of AI in the Enterprise 2026. deloitte.com State of AI in the Enterprise 2026
  24. METR. Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. Becker et al. 2025. metr.org 2025 OS developer study
  25. Aalto University. AI use makes us overestimate our cognitive performance. Computers in Human Behavior (February 2026). aalto.fi AI overestimate study
  26. SurveyMonkey. 2025 Customer Experience Statistics. n=2,017 US adults, conducted December 10-11, 2025; margin of error plus or minus 2.5 points. 89 percent want a human option. surveymonkey.com customer service statistics
  27. Relyance AI and Truedot. Consumer AI Trust Survey 2025. n=1,000+ US consumers aged 18+, December 2025; margin of error plus or minus 3.2 points. 84 percent would abandon brands over AI opacity; 76 percent would switch for transparency. relyance.ai consumer AI trust survey 2025
  28. Reuters and Council of the European Union. EU lawmakers reach provisional agreement to delay high-risk AI Act compliance to December 2, 2027 (announced May 7, 2026). consilium.europa.eu AI Act provisional deal May 7, 2026
Harrison Painter
AI Business Strategist. Founder, LaunchReady.ai and AI Law Tracker.

Harrison helps Indiana leaders build AI systems that cut cost and grow revenue. Founder of LaunchReady.ai and the 7 Levels of AI Proficiency framework. Author of You Have Already Been Replaced by AI and The White-Collar Factory is Closing.

Connect on LinkedIn

Find your AI Proficiency level

The free 7 Levels of AI Proficiency assessment places you across seven stages of AI capability. Under ten minutes. Research-backed scoring. Pair it with the org-level AI Governance Maturity Model to see where individual capability sits and where governance sits.

Get the weekly briefing

LaunchReady Indiana delivers AI news, compliance updates, and case studies for Indiana leaders. Every Tuesday. Five minutes.

Subscribe free