AI Governance · Transparency & Disclosure

AI transparency and disclosure requirements: a CEO's guide for 2026

Three AI transparency laws (California SB 942, Colorado SB 24-205, and EU AI Act Article 50) all take effect within the next 84 days. Indiana has no general state-level AI disclosure statute in force yet, but federal HIPAA, FTC Section 5, and consumer expectation create a real disclosure perimeter for any Indiana operator with consumer touchpoints. Here is the picture in plain English.

By Harrison Painter · As of May 10, 2026 · Updated May 12, 2026 · 10 min read
  • Federal + Multistate + Indiana
  • Imminent enforcement

TL;DR

Do you have to tell customers when AI is being used? In a growing list of jurisdictions, yes. Three transparency laws all take effect within the next 84 days: California SB 942 (image, audio, and video provider labeling, August 2, 2026), Colorado SB 24-205 (general consumer-AI disclosure, June 30, 2026), and EU AI Act Article 50 (chatbot and deepfake disclosure, August 2, 2026). Indiana has no general state AI disclosure law in force as of May 2026; the 2025 HB 1620 healthcare and insurance bill did not become law. Indiana operators with consumer touchpoints still face a real disclosure perimeter through federal HIPAA, FTC Section 5 unfair-and-deceptive-practices doctrine, FDA AI/ML guidance, and consumer expectation. The 7-domain governance approach to disclosure is to design once, satisfy every statute, and treat disclosure as a trust signal rather than a compliance burden.

Indiana's actual AI disclosure law as of May 2026

Indiana has one enacted general AI disclosure law on the books today: HB 1133, the political deepfake disclaimer statute authored by Rep. Julie Olthoff (R) and effective spring 2024. Required disclaimer language for AI-altered or AI-generated political media: "Elements of this media have been digitally altered or artificially generated." Civil enforcement: candidates may bring civil action against those who paid for or sponsored unlabeled fabricated media.

That is the entire enacted state-level AI disclosure surface in Indiana right now. There is no general consumer-AI disclosure statute analogous to Colorado SB 24-205, no high-risk decision notice law, and no synthetic-media labeling regime analogous to California SB 942.

HB 1620 was introduced in the 2025 General Assembly session and would have required healthcare providers and insurers to disclose certain uses of AI to patients and insureds. Available bill trackers show the bill did not become law. Indiana healthcare and insurance operators should confirm current obligations with counsel rather than treat HB 1620 as currently in force; reintroduction in the 2026 or 2027 session remains possible.

HB 1182, the 2026 nonconsensual AI sexual images bill, is sometimes confused with broader AI disclosure legislation. It is a separate matter: a criminal offense for creation, possession, and distribution of AI-generated nonconsensual sexual images. It cleared House committee with bipartisan support but did not reach final passage in the 2026 short session.

The practical takeaway for Indiana CEOs: your AI disclosure perimeter today is set by federal frameworks (HIPAA Privacy Rule for protected health information, FDA guidance on AI and machine-learning-enabled medical devices, ONC HTI-1 for certified health IT, FTC Section 5 for consumer deception), by any other state where you have consumer touchpoints (Colorado, California, Utah, the 28 states with political deepfake laws), and by the EU AI Act if you have any EU exposure. Indiana state law adds the political deepfake disclaimer layer on top.

The federal and multistate enforcement window opens this summer

As of May 10, 2026, Colorado AI Act enforcement is 51 days out, and California SB 942 and EU AI Act Article 50 enforcement are 84 days out. Three statutes that did not exist as enforceable law six months ago all take effect within an 84-day window.

California SB 942 (California AI Transparency Act). Originally scheduled for January 1, 2026. AB 853, signed October 13, 2025, pushed the operative date to August 2, 2026 to align with EU enforcement and broadened the scope. Coverage is narrow but consequential: any generative AI system publicly accessible in California with more than 1,000,000 monthly visitors or users, producing image, video, or audio output (text-only systems are excluded). Three obligations: a free public AI content detection tool; manifest disclosures (visible labels users can opt to embed); and latent disclosures (hidden metadata in every output containing provider name, AI system details, creation timestamp, unique identifier). Penalty: $5,000 per violation, where each day is a separate violation. Three new categories were added by AB 853, with later effective dates: large online platforms with two million-plus monthly users (effective January 1, 2027), generative AI hosting platforms (January 1, 2027), and capture device manufacturers (January 1, 2028).
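The latent-disclosure requirement can be pictured as a small structured record attached to every output. A minimal sketch, assuming a JSON payload and illustrative field names: SB 942 specifies the data points (provider name, system details, timestamp, unique identifier), not the wire format, so the shape below is an assumption.

```python
import json
import uuid
from datetime import datetime, timezone

def build_latent_disclosure(provider: str, system: str, version: str) -> str:
    """Illustrative latent-disclosure record: the four SB 942 data points
    serialized as JSON for embedding in an output's metadata."""
    record = {
        "provider": provider,                          # provider name
        "system": {"name": system, "version": version},  # AI system details
        "created": datetime.now(timezone.utc).isoformat(),  # creation timestamp
        "id": str(uuid.uuid4()),                       # unique identifier
    }
    return json.dumps(record)

payload = build_latent_disclosure("ExampleCo", "example-image-model", "2.1")
parsed = json.loads(payload)
```

In practice the record would be carried in a standard metadata container (XMP, C2PA manifest) rather than a bare JSON string; the point is that all four data points travel with every output.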

Colorado SB 24-205 (Colorado AI Act). Enforces June 30, 2026 after SB 25B-004, signed by Governor Polis on August 28, 2025 in the 2025 Extraordinary Session, delayed the original February 1, 2026 effective date. General disclosure: any AI system intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. Consequential decision notice: when a deployer uses a high-risk AI system as a substantial factor in a consequential decision (employment, education, financial services, healthcare, housing, insurance, legal services, government benefits), the deployer must notify the consumer before the decision. Enforcement: violations are deceptive trade practices under the Colorado Consumer Protection Act, enforced by the Colorado Attorney General.

EU AI Act Article 50. Enforces August 2, 2026 (24 months after Act entry into force). Chatbot rule: providers of AI systems that interact with people must inform them they are interacting with AI, unless this is obvious to a reasonably well-informed person. Synthetic content rule: providers must mark AI-generated audio, image, video, and text in machine-readable format and ensure it is detectable as artificially generated. Deepfake rule: deployers must disclose when image, audio, or video deepfakes are artificially generated or manipulated. Article 99 sets the mid-tier penalty for Article 50 violations at administrative fines up to 15,000,000 euros or 3 percent of total worldwide annual turnover, whichever is higher.

The state stack also includes Utah SB 149 (effective May 1, 2024, narrowed by SB 226 in 2025 to require disclosure only during high-risk regulated-occupation interactions on clear and unambiguous request), Tennessee ELVIS Act (the first state law specifically prohibiting AI mimicry of a person's voice without permission), Texas Election Code Section 255.004 (first state law criminalizing creation of certain deepfake videos within 30 days of an election), and 28 states total with enacted political-ad deepfake disclosure laws per R Street Institute and Public Citizen tracking.

What AI disclosure actually means

Four citable definitions for the four surfaces operators ask about most often.

Chatbot disclosure is a statement informing a user they are interacting with an AI system rather than a human, typically required at the outset of a conversation. Required by EU AI Act Article 50 for AI-human interactions in the EU, by Utah SB 149 (as amended by SB 226 in 2025) during high-risk regulated-occupation interactions on clear and unambiguous request, and by Colorado AI Act for any consumer-interacting AI system starting June 30, 2026.

AI-generated content labeling is a visible marker on the surface of media (caption, watermark, badge) indicating that the content was created or substantially modified by AI. Required by California SB 942 (manifest disclosures) and EU AI Act Article 50 for deepfakes.

Synthetic media disclosure is the umbrella term covering both visible labels and embedded metadata for AI-generated images, audio, and video. Regulated through state political-ad disclaimer laws (Indiana HB 1133, Florida, Texas, Tennessee ELVIS Act) and federal-level FTC truth-in-advertising enforcement.

Material AI use is the threshold concept used in most state laws. Disclosure is required when AI use is material to the consumer's decision or experience. Definitions vary: Colorado uses "substantial factor in a consequential decision," Utah uses "high-risk artificial intelligence interaction," California uses the one-million-monthly-user threshold for the provider rather than the use case.

The 10-surface compliance matrix

For each consumer-facing surface, which laws trigger and what disclosure is required.

Surface | Trigger statutes | Required?
Customer service chatbot (general) | CO AI Act, Utah SB 149 (high-risk), EU AI Act Art. 50 | Required for CO/EU consumers; Utah on clear request during high-risk interactions
Healthcare patient chatbot or AI-generated patient communication | HIPAA Privacy Rule, FDA AI/ML guidance, CO AI Act (high-risk), FTC Section 5 | Required by federal frameworks for PHI handling and material consumer interactions; recommended for all patient-facing AI
Insurance coverage AI decision or AI-generated policy communication | CO AI Act consequential decision, NAIC Model Bulletin, FTC Section 5 | Required for Colorado consumers; recommended elsewhere; pre-decision notice for high-risk uses
AI-generated marketing content (text, image, video) | FTC Section 5, CA SB 942 (image/video/audio only) | Required if material; CA latent metadata for covered providers
AI-augmented hiring tool | CO AI Act consumer notice, NYC Local Law 144 (AEDT), Illinois HB 3773 | Required if applicants in CO / NYC / IL
AI-generated product photography | C2PA recommended; CA SB 942 for covered providers | Recommended; likely required by 2027 in CA
AI-generated voice (synthetic call audio, IVR) | Tennessee ELVIS Act, FCC AI robocall rule, multiple state robocall laws | Required in most jurisdictions
AI-augmented internal documents | None generally | Not required (consider for governance)
AI training material | None generally | Not required
Political ad with AI-generated or digitally altered content | IN HB 1133, Texas, Florida, 28 states total | Required in 28 states

Seven paste-ready disclosure language blocks

Written to satisfy the statutory floor while preserving the consumer relationship: high floor, not high friction.

1. Chatbot opening line (general consumer-facing).

You're chatting with our AI assistant. I can help with [common task list]. To reach a member of our team, type "agent" anytime, or click here.

Discloses AI status (EU AI Act + Utah SB 149 + Colorado AI Act compliant), names the function, surfaces the human escape hatch that 89 percent of consumers want according to a SurveyMonkey 2025 study (n=2,017 US adults), and keeps tone collegial. Skip "I'm just an AI" diminutives and "virtual assistant" euphemisms.
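The opening-line pattern above reduces to two mechanics: disclosure leads the conversation, and the escape hatch works on every turn. A minimal sketch with hypothetical function names, not a production routing implementation:

```python
DISCLOSURE = (
    "You're chatting with our AI assistant. I can help with orders and returns. "
    'To reach a member of our team, type "agent" anytime.'
)

def first_message() -> str:
    # Disclosure is the first thing the consumer sees, before any AI content.
    return DISCLOSURE

def route(user_input: str) -> str:
    # Honor the human escape hatch on every turn, not just the first.
    if user_input.strip().lower() == "agent":
        return "human_queue"
    return "ai_assistant"
```

Checking the keyword case-insensitively on every turn matters: a consumer who asks for a person mid-conversation and gets an AI reply instead is exactly the friction the 89 percent figure describes.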

2. Healthcare patient portal chatbot (federal frameworks plus voluntary best practice).

This message was generated with the help of an AI tool reviewed by your care team. If you'd like to speak with your care team directly, please call [number] or message your provider through the portal.

No general state-level AI disclosure law applies to Indiana healthcare AI today, but FTC Section 5 (deceptive practices), HIPAA Privacy Rule, FDA AI/ML guidance, and the Colorado AI Act high-risk consequential-decision notice (for any Colorado patient interaction) all touch this surface. The language above discloses AI involvement, names the human-review layer, and gives the patient a direct human path. Treat as voluntary best practice with real federal exposure if omitted.

3. Insurance coverage decision communication (Colorado AI Act consequential-decision notice + NAIC Model Bulletin).

Notice: an artificial intelligence tool was used in reviewing your coverage. Our team has approved this notice before sending. To request a human review of any coverage decision, please call [number] or write to [address].

Required for Colorado consumers under SB 24-205 starting June 30, 2026 (consequential decision notice must be issued before the decision). Aligned with the NAIC Model Bulletin on the use of AI by insurers (adopted December 2023, adopted by 24-plus states as of early 2026). Recommended for any insurer with multistate exposure even before the Colorado date hits.

4. Email footer for AI-drafted email.

Drafted with AI assistance and reviewed by [sender name].

Short, honest, attributes review to a human. Use on outbound sales sequences, marketing emails, support follow-ups. Skip "Sent with AI" without naming the human reviewer; that suggests no human was in the loop and creates AI-washing exposure of the kind the SEC charged in the Presto Automation matter.

5. Marketing image caption (forward-compatible with CA SB 942).

Image generated with AI. Verified through Content Credentials.

When the image is C2PA-signed, the caption can carry a Content Credentials icon (cr mark) linking to the verification page. After August 2, 2026, covered providers must embed latent metadata regardless of caption presence.

6. Hiring system pre-application notice (Colorado AI Act).

We use an AI-augmented system to screen applications. Before you apply, here's what you should know: [system purpose]; [decision factors AI considers]; [your rights to request human review]; [our contact info]. [Link to full statement].

Meets Colorado SB 24-205 consequential-decision notice, satisfies NYC Local Law 144 AEDT bias-audit disclosure when displayed with the audit summary, and respects the applicant relationship.

7. IVR voice disclosure.

Hi, you've reached [company]. I'm an AI voice assistant. To speak with a person at any time, press 0 or say "agent."

Front-of-call disclosure, immediate human escape hatch, no euphemism. Tennessee ELVIS Act considerations apply if the synthetic voice mimics any specific person; default to a non-mimicked synthetic voice for outbound IVR.

Indiana operators: what's actually in force and who enforces

Indiana's enacted state-level AI disclosure surface today is HB 1133 alone (political deepfake disclaimers, in force since spring 2024). There is no general consumer-facing AI disclosure law analogous to Utah SB 149, no high-risk decision notice law analogous to Colorado SB 24-205, no synthetic-media-labeling law analogous to California SB 942, and no enacted healthcare-AI disclosure statute (HB 1620 from the 2025 session did not become law per available bill trackers).

Indiana's largest mid-market AI buyers in healthcare and insurance (Eskenazi Health, IU Health, Community Health Network, Parkview, Eli Lilly, OneAmerica Financial, Elevance Health) operate today without a dedicated Indiana state AI disclosure statute. The applicable disclosure rules flow from federal frameworks, multistate exposure, professional ethics, and contractual obligations: HIPAA Privacy Rule for protected health information, FDA guidance on AI and machine-learning-enabled medical devices, ONC HTI-1 for certified health IT, the NAIC Model Bulletin on insurer use of AI (adopted December 2023, adopted by 24-plus states as of early 2026), Colorado SB 24-205 consequential-decision notice for any Colorado consumer interaction starting June 30, 2026, and FTC Section 5 across the board.

The Indiana Attorney General's consumer protection division has not yet brought a public AI-specific deception action. Indiana mid-market companies with consumer-facing AI chatbots should expect Indiana enforcement to follow the Massachusetts AG and FTC playbooks rather than originate doctrine. The cleanest enforcement anchors today are the SEC's Presto Automation matter (January 2025, AI-washing doctrine, cease-and-desist), the Massachusetts AG's Earnest settlement ($2.5M, 2025), and FTC Operation AI Comply (launched September 25, 2024, more than a dozen AI-washing cases brought through 2025). Federal AI enforcement direction shifted in 2025 under the Trump Administration's AI Action Plan; the FTC's Rytr matter is one publicly discussed case where the prior approach was reconsidered.

The 2026 Indiana General Assembly session may reintroduce a healthcare AI disclosure bill; monitor the AI Law Tracker for status. Until then, Indiana operators with consumer-facing AI should treat the federal-and-multistate stack as the binding disclosure perimeter.

How transparency maps to the 7-domain governance approach and The 7 Levels of AI Proficiency

Transparency is Domain 5 in the 7-domain governance approach. It composes with the other six domains (oversight, inventory, data, vendor, incident, literacy). A disclosure decision made in isolation can satisfy the statute while forfeiting trust; a disclosure decision made inside the 7-domain architecture produces both compliance and durable customer trust.

The individual-capacity lens is The 7 Levels of AI Proficiency. Disclosure work anchors at Level 4 Commander and Level 6 Admiral.

The Level 4 Commander writes the chatbot opening line, builds the email footer template, designs the disclosure cadence inside the customer service flow. They translate statutory requirements into consumer-facing language that satisfies the law AND respects the customer relationship. Consumers asking for disclosure are not asking for legal boilerplate; they are asking for honesty. A Relyance AI 2025 survey of 1,000-plus US consumers found 84 percent would abandon or restrict use of a company over AI opacity and 76 percent would switch brands for transparency, even at higher cost. The Commander writes copy that earns the second number.

The Level 6 Admiral coordinates legal, marketing, product, and engineering on a single disclosure stance. They ensure the chatbot copy matches the privacy policy matches the JSON-LD on the marketing page matches the C2PA metadata embedded in the product image. The Admiral is a still-rare role inside US mid-market companies, and it is the role transparency-as-trust-signal requires.
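The Admiral's coordination problem is, at bottom, a single-source-of-truth problem: one canonical disclosure record that every surface renders from, so copy can never drift out of sync. A sketch of the idea with hypothetical surface names and an illustrative schema.org-style JSON-LD fragment (the field choices are assumptions, not a prescribed format):

```python
# One canonical disclosure record; every consumer surface renders from it.
DISCLOSURE = {
    "system_label": "AI assistant",
    "human_path": 'type "agent" or call support',
}

def render_chatbot(d: dict) -> str:
    return f"You're chatting with our {d['system_label']}. For a person, {d['human_path']}."

def render_privacy_policy(d: dict) -> str:
    return f"We use an {d['system_label']} in customer interactions. You may {d['human_path']}."

def render_jsonld(d: dict) -> dict:
    # Illustrative structured-data fragment for the marketing page.
    return {"@type": "Service", "description": f"Customer service via {d['system_label']}"}

surfaces = [render_chatbot(DISCLOSURE), render_privacy_policy(DISCLOSURE)]
# Consistency check the Admiral can automate: same label on every surface.
consistent = all(DISCLOSURE["system_label"] in s for s in surfaces)
```

The design choice is the point: when legal changes the disclosure stance, one record changes and every surface follows, instead of four teams editing four copies.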

Level 5 Strategist decides whether the company adopts C2PA at the asset level proactively or waits for legal requirement. C2PA has thousands of members across general and contributor tiers as of May 2026; the steering committee is Adobe, Amazon, BBC, Google, Meta, Microsoft, OpenAI, Publicis Groupe, Sony, and Truepic. Adobe Firefly, OpenAI DALL-E 3, Sora, and Google Imagen all embed C2PA metadata by default. Google's Pixel 10 (September 2025) is among the devices integrating C2PA Content Credentials at the camera level, alongside other manufacturers in the C2PA initiative.

Frequently asked questions

Do I have to tell customers when AI is being used?

It depends on what the AI is doing and where the customer lives. Colorado SB 24-205 requires disclosure for any AI system interacting with consumers starting June 30, 2026. EU AI Act Article 50 requires disclosure on any AI-human interaction in the EU starting August 2, 2026. Utah SB 149 (as amended by SB 226 in 2025) requires disclosure during high-risk regulated-occupation interactions on clear and unambiguous consumer request. California SB 942 requires labeling of AI-generated image, audio, and video for covered providers starting August 2, 2026. Outside those triggers, FTC Section 5 still applies if non-disclosure would be material to the consumer's decision, and HIPAA plus FDA guidance shape disclosure for healthcare-related AI.

What disclosure is required by California SB 942?

Three things, but only if your generative AI system is publicly accessible in California with more than 1,000,000 monthly visitors or users, AND your output is image, video, or audio. Text-only systems are excluded. First, you must offer a free public AI content detection tool. Second, you must let users embed visible disclosures (manifest) in their AI-generated outputs. Third, you must embed latent metadata in every output: provider name, AI system details, creation timestamp, unique identifier. The effective date moved from January 1, 2026 to August 2, 2026 by AB 853, signed October 13, 2025. AB 853 also added new categories: large online platforms with 2 million-plus monthly users (effective January 1, 2027), generative AI hosting platforms (January 1, 2027), and capture device manufacturers (January 1, 2028).

Does my chatbot need a disclosure label?

If your chatbot serves Colorado consumers, yes, Colorado SB 24-205 requires it as of June 30, 2026. If your chatbot serves EU consumers, yes, EU AI Act Article 50 requires it as of August 2, 2026. If your chatbot serves Utah consumers in regulated occupations during high-risk interactions, yes, Utah SB 149 (as amended by SB 226) requires it on clear and unambiguous request. Outside those triggers, FTC Section 5 still applies if non-disclosure would be material. Consumer expectations point the same direction: a SurveyMonkey 2025 study (n=2,017 US adults) found 89 percent of consumers say companies should always offer the option to speak with a human when interacting with a chatbot.

Should AI-generated images be labeled?

Today, only if you are a covered provider under California SB 942 (1M+ monthly California users, image, video, or audio output) starting August 2, 2026, or a deployer of a deepfake under EU AI Act Article 50 starting the same date. Beyond that, labeling AI images is recommended rather than required. The C2PA Content Credentials standard is the emerging norm. Adobe Firefly, OpenAI DALL-E 3, Sora, and Google Imagen all embed C2PA metadata by default. Google's Pixel 10 is among the devices integrating C2PA Content Credentials at the camera level, alongside other manufacturers in the C2PA initiative. Adopting C2PA proactively positions a company to meet the 2027 regulatory wave with infrastructure already in place.

What is C2PA content provenance?

C2PA, the Coalition for Content Provenance and Authenticity, is an open technical standard for embedding cryptographically signed metadata about a media asset's origin, creation tool, edit history, and identity. The metadata sits in a C2PA Manifest signed with the creator's private key. Any verifier can validate the manifest without contacting the original creator. The consumer-facing brand is Content Credentials, typically displayed as a cr icon on the asset. As of May 2026 the C2PA steering committee is Adobe, Amazon, BBC, Google, Meta, Microsoft, OpenAI, Publicis Groupe, Sony, and Truepic.
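The manifest mechanics can be illustrated in miniature. A real C2PA manifest is a COSE-signed binary structure validated against the signer's certificate; the toy sketch below shows only the binding step, hashing the asset bytes into a manifest record so any verifier can detect tampering without contacting the creator. The cryptographic signature layer is deliberately omitted, and the field names are illustrative.

```python
import hashlib

def make_manifest(asset: bytes, tool: str) -> dict:
    # Bind the asset to the manifest via its content hash ("hard binding").
    return {
        "claim_generator": tool,
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
    }

def verify(asset: bytes, manifest: dict) -> bool:
    # A verifier recomputes the hash independently; any edit to the
    # asset bytes breaks the match.
    return hashlib.sha256(asset).hexdigest() == manifest["asset_sha256"]

image = b"\x89PNG...example bytes"
manifest = make_manifest(image, "ExampleTool/1.0")
```

In the real standard the manifest itself is then signed with the creator's private key, so a verifier also learns who made the claim, not just that the asset is unmodified.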

What is Indiana's current AI disclosure law?

Indiana has no general consumer-facing AI disclosure law in force as of May 2026. HB 1620 was introduced in the 2025 Indiana General Assembly session and would have required healthcare and insurance AI disclosure; available bill trackers show it did not become law. Indiana's enacted AI surface today is HB 1133 (political deepfake disclaimers, in force since 2024). Healthcare and insurance AI disclosure for Indiana operators today flows from federal frameworks (HIPAA Privacy Rule, FDA guidance on AI/ML-enabled medical devices, ONC HTI-1) and FTC Section 5 unfair-and-deceptive-practices doctrine; no dedicated state AI statute applies. Confirm current obligations with counsel.

What is Indiana HB 1182?

HB 1182 was a 2026 Indiana bill creating a criminal offense for the creation, possession, and distribution of AI-generated nonconsensual sexual images. The bill cleared House committee with bipartisan support but did not reach final passage in the 2026 short session. The 2027 long session is the next refile window. HB 1182 is unrelated to employment deepfakes or general AI disclosure.

What is the penalty for not disclosing AI use?

Penalties vary by statute. California SB 942 authorizes the California Attorney General plus city attorneys plus county counsel to enforce, with civil penalties of $5,000 per violation, where each day is a separate violation. Colorado AI Act violations are deceptive trade practices under the Colorado Consumer Protection Act, enforced by the Colorado Attorney General. EU AI Act Article 50 violations fall under Article 99 mid-tier penalties: administrative fines up to 15,000,000 euros or 3 percent of total worldwide annual turnover, whichever is higher. FTC enforcement under Section 5 carries no statutory cap on injunctive remedies and can include consumer redress and operating restrictions; the SEC v. Presto Automation matter (January 2025) and the Massachusetts AG Earnest settlement ($2.5M, 2025) show how AI-washing and consumer-AI deception are being charged in practice.

Sources

  1. Indiana General Assembly. "HB 1620 (2025)." iga.in.gov HB 1620
  2. LegiScan. "Indiana 2025 HB 1620 Introduced PDF." legiscan.com HB 1620 text
  3. Indiana House Republicans. "Olthoff bill on AI in deceptive election ads now law (HB 1133)." indianahouserepublicans.com HB 1133
  4. Indiana Capital Chronicle. "House panel advances bill criminalizing nonconsensual AI nudity (HB 1182)." indianacapitalchronicle.com HB 1182
  5. California Legislature. "SB-942 California AI Transparency Act." leginfo.legislature.ca.gov SB 942
  6. Troutman Privacy. "California AI Transparency Act Amendments Signed Into Law (October 13, 2025)." troutmanprivacy.com SB 942 amendments
  7. Colorado General Assembly. "SB24-205 Consumer Protections for Artificial Intelligence." leg.colorado.gov SB 24-205
  8. Colorado General Assembly. "SB 24-205 signed text PDF." leg.colorado.gov signed text
  9. Utah Legislature. "S.B. 149 Artificial Intelligence Amendments." le.utah.gov SB 149
  10. Perkins Coie. "New Utah AI Laws Change Disclosure Requirements." perkinscoie.com Utah AI laws
  11. Colorado General Assembly. "SB 25B-004 (signed text PDF, August 28, 2025), extends SB 24-205 effective date to June 30, 2026." leg.colorado.gov SB 25B-004
  12. EU Artificial Intelligence Act. "Article 50: Transparency Obligations." artificialintelligenceact.eu Article 50
  13. EU Artificial Intelligence Act. "Article 99: Penalties." artificialintelligenceact.eu Article 99
  14. Jones Day. "European Commission Publishes Draft Code of Practice on AI Labelling and Transparency." jonesday.com EU draft code
  15. C2PA. "Membership and Steering Committee." c2pa.org/membership
  16. Google Blog. "How Google and the C2PA are increasing transparency." blog.google C2PA transparency
  17. NSA / CISA. "Strengthening Multimedia Integrity in the Generative AI Era (CSI on Content Credentials, January 2025)." media.defense.gov NSA CSI
  18. NAIC. "Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (adopted December 2023)." naic.org AI Model Bulletin
  19. R Street Institute. "Update on 2025 State Legislation to Regulate Election Deepfakes." rstreet.org 2025 deepfake legislation
  20. Public Citizen. "Tracker: State Legislation on Deepfakes in Elections." citizen.org deepfakes tracker
  21. Federal Trade Commission. "FTC Announces Crackdown on Deceptive AI Claims and Schemes (Operation AI Comply, September 2024)." ftc.gov Operation AI Comply
  22. Cooley. "SEC charges AI-washing at Presto Automation (January 2025)." cooleypubco.com Presto SEC
  23. Massachusetts Attorney General. "AG Campbell Announces $2.5 Million Settlement with Student Loan Lender for Unlawful Practices Through AI Use, Other Consumer Protection Violations (2025)." mass.gov Earnest settlement
  24. SurveyMonkey. "Customer Service Statistics 2026 (n=2,017 US adults, December 2025)." surveymonkey.com customer service
  25. Relyance.ai. "Consumer AI Trust Survey 2025 (n=1,000+ US adults, December 2025)." relyance.ai trust survey
  26. AI Law Tracker. "AI Governance Tracker." ailawtracker.org/governance
  27. AI Law Tracker. "Bill database." ailawtracker.org/bills
Harrison Painter
AI Business Strategist. Founder, LaunchReady.ai and AI Law Tracker.

Harrison helps Indiana leaders build AI systems that cut cost and grow revenue. Founder of LaunchReady.ai and the 7 Levels of AI Proficiency framework. Author of You Have Already Been Replaced by AI and The White-Collar Factory is Closing.

Connect on LinkedIn

Track AI legislation as it moves

AI Law Tracker covers every active federal and state AI bill in plain English. Daily updates. Indiana-flagged.

Get the weekly briefing

LaunchReady Indiana delivers AI news, compliance updates, and case studies for Indiana leaders. Every Tuesday. Five minutes.

Subscribe free