An AI incident is any event where an AI system's output or behavior caused harm to a person, a customer, an institution, or a fundamental right. When AI gives a bad output, the response is seven steps: detect, contain, assess severity, notify, correct, document, post-mortem. Documented AI incidents grew 55% in 2025, from 233 to 362, per Stanford HAI's 2026 AI Index. Over the same period, the share of organizations rating their own AI incident response capability as excellent dropped from 28% to 18%. IBM's 2024 Cost of a Data Breach Report puts average breach identification at 194 days and the full lifecycle at 258 days. The EU AI Act gives providers 2 days to notify regulators of the most severe incidents and 15 days on the standard tier. The detection clock is roughly 13 times longer than the standard reporting clock, and the capability is moving in the wrong direction. Every CEO with an AI deployment in production needs this playbook ready before the incident, not after.
The velocity-vs-capability inversion
Stanford HAI's 2026 AI Index Report documents 362 AI incidents in 2025, up from 233 in 2024, a 55% year-over-year increase against a 2012 baseline of fewer than ten. The curve is exponential, and the inflection point is already behind us.
Over the same window, the McKinsey survey cited in the AI Index tracked how organizations rate their own AI incident response capability. The share rating it excellent dropped from 28% in 2024 to 18% in 2025; good dropped from 39% to 24%. The share of organizations experiencing three to five AI incidents in a 12-month window rose from 30% to 50%.
The detection lag is structural. IBM's Cost of a Data Breach Report 2024 measures average breach identification at 194 days, average containment at 64 days, and the full lifecycle at 258 days. That figure is a 7-year low, down from 277 days the prior year, but the IBM methodology covers all data breaches, not AI-specific incidents. Public AI-specific breach-lifecycle benchmarks at the same statistical depth do not yet exist; the IBM number is the closest defensible proxy, and it points in the same direction as the Stanford capability data.
The regulatory clock is the other axis. EU AI Act Article 73, application date August 2, 2026, requires providers of high-risk AI systems to report serious incidents within 15 days standard, 10 days where death is involved, and 2 days for widespread infringement or critical-infrastructure disruption.
The 194-day average breach identification time vs the 15-day standard reporting window is roughly a 13-times mismatch. The capability is moving in the wrong direction while the regulatory clock is locked. This is the structural inversion every CEO with AI in production now operates inside.
What counts as an AI incident
An AI incident is distinct from an ordinary IT incident. An IT incident is a server crash, a code bug, a deterministic failure. An AI incident is probabilistic: the model produced an output it was technically permitted to produce, and that output caused harm.
Five categories matter for governance.
An AI incident is any event where the deployment or operation of an AI system caused or contributed to harm to persons, property, environment, fundamental rights, or institutional infrastructure (OECD AI Incidents Monitor methodology).
A serious incident under the EU AI Act is an incident or malfunction of an AI system that directly or indirectly results in death or serious harm to health, serious irreversible disruption to management or operation of critical infrastructure, infringement of EU fundamental-rights obligations, or serious harm to property or environment (EU AI Act Article 3(49) and Article 73).
A hallucination is a model output that is fluent, plausible, and confidently presented but factually incorrect or fabricated. Distinct from a training-data error; the output did not exist before the model produced it (NIST AI 600-1 Generative AI Profile).
Algorithmic harm is a negative outcome to a person or class of persons attributable to the design, deployment, or operation of an algorithmic system, including disparate impact across protected classes (CSET AI Harm Taxonomy).
A bias incident is an AI-driven decision or output that produces materially different outcomes across protected classes without adequate justification (NAIC Model Bulletin on AI; Massachusetts AG Advisory of April 2024).
Your IT incident playbook will not catch these. The failure modes do not look like alerts. They look like complaints, lawsuits, and reporter inquiries.
The 7-step CEO playbook
The 7-step playbook is the operational spine of any AI incident response plan. Each step is short. The discipline is doing them in order, before improvising.
1. Detect. The incident surfaces through one of four channels: customer complaint, internal staff report, automated monitoring alert, or regulator inquiry. The first two are the default for nearly every company that has not invested in detection infrastructure. The destination is automated monitoring wired into the production stack: sample audits, output drift detection, output classification, anomaly alerts. That wiring is Level 5 architectural design work. The Stanford capability-drop finding reads as a detection-infrastructure shortfall, the kind of structural deficit that calls for engineering investment rather than tighter human attention at the desk.
2. Contain. Pull the model from the workflow. Pause the affected feature. Disable the agent. Rotate the API key if external access is suspect. Document the time of containment in writing. Containment precedes investigation; you do not diagnose with a hot system. Iteration on a live model destroys the audit trail.
3. Assess severity. Apply the severity classification matrix below. Determine: single-user incident or pattern. Reversible or irreversible. Whether the incident meets a regulatory reporting threshold. Severity tier drives notification timeline.
4. Notify. Internal first: incident commander, agent owner, technical lead, AI safety lead, legal partner, communications lead (the six canonical roles named in the FAQ below). External next: affected customers, regulators (EU AI Act 2/10/15-day clocks, FDA MedWatch, OCC for banks, state AG for consumer-facing harm), insurance carrier if an AI liability policy is in force.
5. Correct. Two parallel tracks. Output correction: notify and remediate the user(s) who received the bad output. Process correction: change the model, the prompt, the guardrail, the training data, the retrieval set, the access controls, or some combination. Document each change with a timestamp. Output correction without process correction is not correction.
6. Document. Build the audit trail in real time: prompt logs, model version, retrieval context, output, user action, containment actions, notifications sent, corrections applied. The audit trail is your primary asset in litigation and your primary asset in insurance recovery. The Coalition for Secure AI's Incident Response Framework (October 2025) names full audit-trail capture as a core requirement of the Document phase. Without it, the post-incident reconstruction is opinion, not evidence. A minimal record-schema sketch follows this list.
7. Post-mortem. Within 14 days. Root cause identification (model, data, prompt, guardrail, human override, training corpus drift). Systemic fix. Governance update (which policy did the incident expose as inadequate). Cross-team learning. The post-mortem document becomes input to the next quarterly board AI risk review.
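Steps 2 and 6 both turn on capturing facts at the moment they happen, not reconstructing them later. Here is a minimal sketch of what that capture can look like, assuming a Python stack; the class and field names are illustrative assumptions, not a mandated schema, and should map onto whatever your production logging already records.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentRecord:
    """Minimal audit-trail record for one AI incident (illustrative schema)."""
    incident_id: str
    model_version: str            # exact model and prompt version in production
    prompt: str                   # the prompt or prompt-template identifier
    retrieval_context: str        # what the model saw at inference time
    output: str                   # the harmful output, verbatim
    user_action: str              # what the recipient did with the output
    detected_at: datetime
    contained_at: datetime | None = None
    notifications: list[str] = field(default_factory=list)
    corrections: list[str] = field(default_factory=list)

    def mark_contained(self) -> None:
        """Step 2: timestamp containment in writing, before any diagnosis."""
        self.contained_at = datetime.now(timezone.utc)

    def log_notification(self, recipient: str) -> None:
        """Step 4: record each notification as it goes out."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.notifications.append(f"{stamp} -> {recipient}")
```

The design choice that matters: containment and notification get timestamped by the system, not reconstructed from memory. The timestamped record is evidence; the recollection is opinion.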
Severity classification matrix
The matrix is the single tool that distinguishes a Tier 1 log-and-close from a Tier 5 board-emergency response. Run every incident through it before notification.
| Scope | Reversible | Costly | Irreversible |
|---|---|---|---|
| Single incident | Tier 1. Log, correct, close. Internal-only. No regulator notification unless statutory threshold tripped. Example: chatbot gives a minor factual error to one customer. | Tier 2. Log, correct, customer notification, insurance carrier notification, post-mortem. Example: AI-generated email sent to one customer revealed another customer's account data. | Tier 3. Log, customer notification, regulator notification within applicable statutory window, insurance, counsel, board notification within 24 hours. Example: AI-driven medical decision contributed to patient harm. |
| Pattern (3+ similar) | Tier 2. Pause feature, investigate root cause, customer-class notification, post-mortem, board awareness. Example: chatbot gives the same wrong policy answer to 12 customers in a week. | Tier 3. Containment, class notification, regulator engagement, insurance, counsel, executive committee. Example: hiring algorithm flagged for disparate-impact pattern across 90 days. | Tier 4. Full executive incident command, outside counsel, crisis communications, regulator engagement, class notification, insurance, board emergency session. Example: pattern of healthcare AI denials linked to identifiable patient harm. |
| Systemic (architectural failure) | Tier 3. Pull system, executive committee, outside counsel, customer/class notification, insurance, post-mortem with structural redesign. Example: model retrieval layer was indexing the wrong knowledge source for two months. | Tier 4. Full incident command, outside counsel, regulator engagement, class notification, crisis communications, board emergency session, structural redesign. Example: AI training pipeline mixed customer PII into model fine-tuning. | Tier 5. Full crisis response, regulator engagement at the highest available level, outside counsel, crisis comms, board, investor relations, potential public disclosure, system shutdown until rebuild. Example: AI system has been making materially-biased high-stakes decisions across 12+ months of deployment. |
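Because the matrix is a pure lookup on two axes, it can be encoded directly and run at intake, so the tier is computed and logged before anyone debates it. A minimal sketch, assuming the scope and harm labels above; the names are illustrative, not part of any published standard.

```python
# Tier lookup encoding the severity classification matrix above.
SEVERITY_MATRIX = {
    ("single",   "reversible"):   1,
    ("single",   "costly"):       2,
    ("single",   "irreversible"): 3,
    ("pattern",  "reversible"):   2,
    ("pattern",  "costly"):       3,
    ("pattern",  "irreversible"): 4,
    ("systemic", "reversible"):   3,
    ("systemic", "costly"):       4,
    ("systemic", "irreversible"): 5,
}

def severity_tier(scope: str, harm: str) -> int:
    """Return the tier (1-5), failing loudly on unknown labels rather than guessing."""
    return SEVERITY_MATRIX[(scope, harm)]

# The hiring-algorithm example above: a costly pattern lands at Tier 3.
assert severity_tier("pattern", "costly") == 3
```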
Pre-build communication templates for every tier before the live incident. The 24-hour board-notification deadline on Tier 3 cannot be hit if the template is being drafted at 2 a.m.
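The same pre-building logic applies to the regulatory clocks. Below is a minimal sketch of an Article 73 deadline helper, under stated simplifications: calendar days, the deadline running from provider awareness, and category labels that are this article's shorthand rather than statutory text. Confirm the exact trigger and counting rules with counsel before relying on anything like it.

```python
from datetime import datetime, timedelta, timezone

# EU AI Act Article 73 reporting windows, in calendar days (simplified).
EU_AI_ACT_WINDOWS = {
    "widespread_infringement": 2,   # also critical-infrastructure disruption
    "death": 10,
    "serious_incident": 15,         # the standard tier
}

def regulator_deadline(category: str, aware_at: datetime) -> datetime:
    """Latest notification datetime, counted from provider awareness."""
    return aware_at + timedelta(days=EU_AI_ACT_WINDOWS[category])

aware = datetime(2026, 9, 1, tzinfo=timezone.utc)
print(regulator_deadline("serious_incident", aware))  # 2026-09-16 00:00:00+00:00
```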
The Indiana operator wedge
Indiana's AI incident response posture in 2026 has three relevant facts.
One. Indiana has not adopted the NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers as of May 2026. As of an April 2025 Quarles tracking publication, 24 states had adopted the bulletin, including Connecticut, Maryland, Massachusetts, North Carolina, Pennsylvania, Virginia, Washington, Illinois, Michigan, and 15 others; the count has grown since. Indiana insurers operating exclusively in Indiana-domiciled risk pools do not yet face the NAIC's AIS Program requirement at the state level. Carriers writing across state lines face it through their other regulators. The lag is consequential because the NAIC bulletin is the most prescriptive existing US framework on AI incident classification and reporting in financial services.
Two. The Indiana Consumer Data Protection Act (IC 24-15) took effect January 1, 2026. The act gives Indiana consumers rights of access, correction, deletion, portability, opt-out from the sale of personal data, opt-out from targeted advertising, and opt-out from profiling in furtherance of decisions that produce legal or similarly significant effects (IC 24-15-3-1(b)(5)). The profiling opt-out is the load-bearing AI-incident hook: Indiana consumers can withdraw consent from automated decisioning that drives credit, employment, insurance, housing, and similar outcomes. What differentiates Indiana from Colorado, California, and Connecticut is the enforcement posture, not the substantive scope. The Indiana Attorney General is the exclusive enforcer, with no private right of action. Civil penalties run up to $7,500 per violation. The 30-day cure period is permanent, unlike the Colorado and Connecticut cure periods, both of which sunset on December 31, 2024, giving Indiana the most business-friendly enforcement posture in the state-privacy peer set. The substantive AI exposure (DPIA on profiling, sensitive-data consent, breach-response obligations) still applies when AI processing touches personal information.
Three. Eli Lilly and Indiana University announced a five-year, up-to-$40 million collaboration on December 3, 2025 across three pillars: AI-enabled clinical trial infrastructure focused on chronic diseases and oncology, expanded access to Alzheimer's diagnostics and treatment through IU Health's provider network, and workforce development in biotechnology and pharmaceutical research. The IU Launch Accelerator for Biosciences (IU LAB) leads the university's involvement. The AI-enabled trial-infrastructure component, plus the broader Indiana healthcare-AI deployment momentum it signals, carries FDA SaMD adverse-event reporting exposure on AI-driven medical devices, HIPAA breach-notification clocks, and Indiana Department of Health reporting obligations on patient-safety events. The infrastructure for documenting AI-driven harm at the bedside is not yet standardized across Indiana hospital systems.
The combined picture: Indiana mid-market companies are deploying AI faster than the state's regulatory and operational scaffolding can absorb. The CEO who builds an internal AI incident response capability now is not chasing compliance. They are building the function that Indiana counsel and state regulators will assume exists 18 months from now.
Where this sits in the 7 Domains of AI Governance
AI Incident Response & Monitoring is Domain 6 of the 7 Domains of AI Governance Framework (Strategy, Inventory, Data, Vendor, Human Oversight, Incident Response, Workforce). Each domain has a five-level maturity reading.
Level 3 (Managed) on Domain 6 requires functional incident response: a documented playbook, a named incident commander, defined severity tiers, regulator-notification templates, and at least one rehearsal in the past 12 months. Level 4 (Strategic) requires a post-incident learning loop: every incident generates a governance update, the board reviews the cumulative incident log every quarter, and the playbook is revised against actual experience.
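One way to make the Level 3 bar operational is to treat it as a boolean checklist. The sketch below encodes the five Level 3 requirements named above; the class and method names are assumptions for illustration, not published framework artifacts.

```python
from dataclasses import dataclass

@dataclass
class Domain6Readiness:
    """Self-check against the five Level 3 (Managed) requirements."""
    documented_playbook: bool
    named_incident_commander: bool
    defined_severity_tiers: bool
    regulator_notification_templates: bool
    rehearsed_in_last_12_months: bool

    def at_level_3(self) -> bool:
        # Level 3 requires all five; a single gap keeps Domain 6 below Managed.
        return all(vars(self).values())

check = Domain6Readiness(True, True, True, False, False)
print(check.at_level_3())  # False: templates and rehearsal are the usual gaps
```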
The CSA / Google Cloud State of AI Security and Governance survey (December 2025, n=300) reported that organizations with full AI governance policies in place are nearly twice as likely to be early adopters of agentic AI (46%) compared with organizations holding only partial guidelines (25%) or governance still in development (12%). Governance maturity is the strongest predictor of AI readiness in the CSA / Google Cloud data, and the Stanford 2026 capability collapse is what the partial-and-in-development end of that distribution looks like at the operational level. Closing the inversion requires moving Domain 6 from Reactive to Managed to Strategic.
How The 7 Levels of AI Proficiency integrates
Incident response capability is a Level 5 + Level 6 capability inside The 7 Levels of AI Proficiency framework. Without at least one Level 5 and one Level 6 in the room, the playbook does not run.
Level 5: Captain (Architectural Strategist). A Level 5 designs the AI incident response system before the incident happens. Decisions a Level 5 makes: which models get monitoring, which outputs get human review, what the severity matrix looks like for THIS company, who is on the incident response team, what the decision rights are, what the audit trail captures. The Level 5 work is the difference between an organization that can answer the regulator's first question in 24 hours and one that takes two weeks to assemble the facts. Read the Level 5: Captain deep-dive for the architectural-design competencies.
Level 6: Admiral (Cross-Functional Director). A Level 6 runs the incident response across legal, security, engineering, customer success, communications, finance, and insurance. The Level 6 work is the orchestration: who calls the carrier, who drafts the customer notification, who briefs the board, who decides whether to pull the product. A Level 6 has rehearsed the workflow with the team before the live incident. Read the Level 6: Admiral deep-dive for the cross-functional-command competencies.
The 7 Levels of AI Proficiency reads incident response as Level 5 + Level 6 work because it requires architectural design AND cross-functional operational command. A company with no Level 5 and no Level 6 will absorb the full Stanford-2026 cost of incident response failure.
Sources
- Stanford HAI (2026). AI Index Report 2026. Stanford University. hai.stanford.edu/ai-index/2026-ai-index-report. Documented AI incidents 233 to 362. Self-rated excellent response capability 28% to 18%; good 39% to 24%; organizations experiencing 3-5 incidents 30% to 50%.
- IBM (2024). Cost of a Data Breach Report 2024. ibm.com/reports/data-breach. Average breach identification 194 days; average containment 64 days; full lifecycle 258 days (a 7-year low, down from 277 days the prior year). Cross-confirmed by EDRM analysis at ediscoverytoday.com.
- IBM (2025). Cost of a Data Breach Report 2025. Shadow AI breaches add roughly $670K. 97% of AI-breached organizations lacked proper AI access controls. newsroom.ibm.com.
- Coalition for Secure AI (CoSAI) (October 30, 2025). Defending AI Systems: A New Framework for Incident Response. coalitionforsecureai.org. Audit-trail capture as a core requirement of the Document phase.
- Cloud Security Alliance and Google Cloud (December 2025). The State of AI Security and Governance (n=300 IT and security professionals). Press release at cloudsecurityalliance.org; full report at cloudsecurityalliance.org/artifacts. 46% of organizations with full AI governance policies are early adopters of agentic AI, vs 25% with partial guidelines and 12% with governance still in development. Governance maturity is the strongest predictor of AI readiness.
- European Union (2024). EU AI Act Article 73: Reporting of Serious Incidents. Application date August 2, 2026. artificialintelligenceact.eu/article/73.
- NIST (July 2024). AI 600-1 Generative AI Profile. nvlpubs.nist.gov. Hallucination definition; Incident Disclosure focus area.
- Indiana General Assembly. Indiana Consumer Data Protection Act (IC 24-15). Effective January 1, 2026. Civil penalty up to $7,500 per violation; 30-day cure period; AG-only enforcement. law.justia.com/codes/indiana/title-24/article-15.
- Quarles & Brady LLP (April 2, 2025). Nearly Half of States Have Now Adopted NAIC Model Bulletin on Insurers' Use of AI. 24 states tracked as of publication. quarles.com.
- Indiana University News (December 3, 2025). Lilly and IU to expand access to clinical trials and life sciences research. Five-year, up-to-$40M collaboration across AI-enabled clinical trial infrastructure, neurological health expansion, and workforce development. news.iu.edu.
- Moffatt v. Air Canada (BC Civil Resolution Tribunal, February 14, 2024). Liability ruling on AI chatbot misrepresentation. americanbar.org.
- Real-time AI regulatory tracking: ailawtracker.org/governance (canonical source for state and federal AI bill status).
Frequently asked questions
What do I do when AI gives a bad output?
Pull the AI from the workflow first. Document the time of the pull. Do not iterate the prompt while the system is hot. Contact your legal counsel before customer notification. Open an incident record with the model version, the prompt, the output, the user action, and the containment time. Apply the severity classification matrix to determine the notification path. Notify customers and regulators on the timeline the matrix triggers.
What is an AI incident?
An AI incident is any event where an AI system's output or behavior caused or contributed to harm to a person, property, environment, fundamental right, or institutional process. The harm can be financial, reputational, physical, or rights-based. AI incidents differ from ordinary IT incidents because the failure mode is probabilistic (model behavior) rather than deterministic (server crash, code bug).
Do I need to report AI incidents to regulators?
Depends on jurisdiction, sector, and severity. EU AI Act Article 73 requires providers of high-risk AI systems to report serious incidents within 15 days standard, 10 days where death is involved, and 2 days for widespread infringement or critical-infrastructure disruption, starting August 2, 2026. FDA SaMD regulations require adverse event reporting for AI medical devices. State insurance regulators that adopted the NAIC Model Bulletin require AIS Program incident logs. Consumer-protection AGs in Massachusetts and California have begun pursuing AI incidents under existing unfair-trade-practice statutes.
When does an AI hallucination become a legal liability?
When a customer or court can show four elements. The AI output was false. A reasonable customer relied on it. The deploying organization owed a duty of care. Damages flowed from the reliance. Moffatt v. Air Canada (BC Civil Resolution Tribunal, February 2024) is the leading published case. The deploying organization cannot escape liability by characterizing the AI as a separate entity.
How do I write an AI incident response plan?
Start with the 7-step playbook (Detect, Contain, Assess, Notify, Correct, Document, Post-mortem). Define your severity classification matrix. Name your six roles: incident commander, agent owner, technical lead, AI safety lead, legal partner, and communications lead. Pre-build communication templates for executives, customers, and regulators. Pre-build regulator notification packages for EU AI Act, FDA, state AGs, and your state insurance regulator. Rehearse the plan quarterly.
What is a serious incident under the EU AI Act?
An incident or malfunction of an AI system that directly or indirectly results in death or serious harm to health, serious irreversible disruption to critical infrastructure, infringement of EU fundamental-rights obligations, or serious harm to property or environment. Reporting timelines: 2 days for widespread infringement or critical-infrastructure disruption, 10 days for death, 15 days for all other serious incidents.
Who should be on my AI incident response team?
Six core roles. Incident commander (overall response). Agent owner (the AI system's accountable executive). Technical lead (engineering investigation and remediation). AI safety lead (severity classification and monitoring). Legal partner (regulator notification and counsel coordination). Communications lead (customer, public, internal messaging). For mid-market companies under 200 employees, the same person often holds two of these roles. Documented assignment is the discipline that holds under stress.
How long do I have to report an AI incident?
Under the EU AI Act Article 73 (effective August 2026): 2 days, 10 days, or 15 days depending on severity. Under FDA SaMD: 30 days for routine adverse events, faster for incidents involving death or serious injury. Under state insurance regulators that adopted the NAIC Model Bulletin: per the bulletin's AIS Program requirements. Under HIPAA breach notification: 60 days for PHI involvement. Under state consumer-protection statutes: case-by-case.
This article is informational only. It is not legal advice. Specific incident reporting obligations vary by jurisdiction, sector, and the facts of the incident. Consult counsel before making compliance decisions. Sources current as of May 12, 2026.
Find your AI Proficiency level
The free 7 Levels of AI Proficiency assessment places you across seven stages of AI capability. Under ten minutes. Research-backed scoring. Knowing where you sit on Level 5 (Captain) and Level 6 (Admiral) is the prerequisite for designing and running an incident response program.