COSO Finally Showed Up to the AI Party

SOX | INTERNAL AUDIT | COMPLIANCE | RISK

Internal Audit Next Editors

3/1/2026 · 4 min read

The framework that governs how you prove your controls work just got an AI update. Here's what that means.

Better Late Than Never

On February 23, 2026, COSO released "Achieving Effective Internal Control Over Generative AI (GenAI)." The committee that wrote the internal control framework governing every SOX compliance program in America finally put out formal guidance on governing AI.

This is meaningful. COSO isn't a regulator and can't fine you, but its frameworks form the conceptual backbone of what "effective internal controls" means under Section 404. When COSO speaks, auditors listen, external auditors follow, and audit committees eventually ask questions. The new publication, developed in collaboration with Deloitte, isn't proposing a new governance model. Instead, it adapts the five existing COSO components (control environment, risk assessment, control activities, information and communication, and monitoring activities) to generative AI specifically.

The core message is worth pausing on: generative AI transforms how information is generated, processed, and acted upon, but it does not change the fundamental purpose of internal control. The objectives are the same. The mechanisms have to evolve.

What the Framework Actually Says

On risk assessment, COSO is telling organizations to ask "What if…" questions for each AI capability and document those scenarios for use as audit evidence, not just internal exercises. Maintain living risk registers that update when models, retrieval corpora, or configurations change, not just at annual review cycles. Link identified risks to specific KRIs, dashboards, or alerts that will surface early signs of drift, bias, or misuse.
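To make the "living risk register" idea concrete, here is a minimal sketch in Python. The field names, thresholds, and the re-review rule are illustrative assumptions, not anything prescribed by the COSO publication; the point is simply that a register entry can be tied to a specific model configuration so it goes stale visibly, not silently.

```python
from datetime import date

# Illustrative entry in a living AI risk register. Every field name here is an
# assumption for the sketch, not a COSO-prescribed schema.
risk_register_entry = {
    "scenario": "What if the model hallucinates a vendor name during invoice matching?",
    "ai_capability": "invoice-matching assistant",
    "model_config_version": "2026-02-18",  # bumped when model, corpus, or config changes
    "linked_kris": ["hallucination_rate", "exception_queue_depth"],
    "alert_threshold": {"hallucination_rate": 0.02},
    "last_reviewed": date(2026, 2, 20).isoformat(),
}

def needs_review(entry: dict, deployed_config_version: str) -> bool:
    """Flag the entry for re-review whenever the deployed configuration has
    moved past the version the risk assessment was performed against."""
    return entry["model_config_version"] != deployed_config_version

# A config change after the last assessment should trigger a register update.
assert needs_review(risk_register_entry, "2026-03-01") is True
assert needs_review(risk_register_entry, "2026-02-18") is False
```

The design choice worth noting: keying the entry to a configuration version is what turns an annual-review artifact into a living one, because any deployment change mechanically invalidates the prior assessment.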

On control activities, the guidance gets specific: test AI performance before and after deployment and periodically retest for ongoing reliability. Separate the ability to configure AI settings from the authority to approve or review outputs, a segregation of duties concept applied to machine behavior. Require documented approvals and evidence for changes to prompts, thresholds, and retrieval corpora. When output confidence falls below acceptable levels, block the ability to take action or require additional human review.
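The last control, blocking action on low-confidence output, can be sketched in a few lines. This is an illustrative gate, not the COSO publication's implementation; the threshold, class names, and routing labels are all assumptions made for the example.

```python
from dataclasses import dataclass

# Illustrative floor. In a real control environment, this value would itself be
# a documented, approved parameter with change evidence behind it.
MIN_CONFIDENCE = 0.85

@dataclass
class AIOutput:
    transaction_id: str
    decision: str       # e.g. "approve" or "reject"
    confidence: float   # model's calibrated confidence score

def route(output: AIOutput) -> str:
    """Block autonomous action when confidence is below the approved floor,
    routing the exception to a human reviewer instead."""
    if output.confidence < MIN_CONFIDENCE:
        return "human_review"
    return "auto_" + output.decision

assert route(AIOutput("TX-1001", "approve", 0.97)) == "auto_approve"
assert route(AIOutput("TX-1002", "approve", 0.62)) == "human_review"
```

Note the segregation-of-duties implication: whoever can change `MIN_CONFIDENCE` should not be the same person who reviews the exceptions it generates.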

On information and communication, COSO urges organizations to record where data came from, how it was processed, and by which model configuration. Maintain prompt libraries and model cards in controlled systems with role-based access. Define model KPIs, including hallucination rates and citation coverage, and report them alongside traditional control KPIs.
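Those model KPIs reduce to simple arithmetic over human-reviewed samples. A minimal sketch, assuming a review process that flags each sampled output for hallucination and citation presence (the sample schema is an assumption for illustration):

```python
def model_kpis(samples: list[dict]) -> dict:
    """Compute illustrative model KPIs from human-reviewed output samples.

    Each sample carries boolean flags set during review:
      'hallucinated' - the output contained unsupported claims
      'cited'        - the output included a verifiable citation
    """
    n = len(samples)
    return {
        "hallucination_rate": sum(s["hallucinated"] for s in samples) / n,
        "citation_coverage": sum(s["cited"] for s in samples) / n,
    }

reviewed = [
    {"hallucinated": False, "cited": True},
    {"hallucinated": True,  "cited": False},
    {"hallucinated": False, "cited": True},
    {"hallucinated": False, "cited": True},
]
kpis = model_kpis(reviewed)
assert kpis["hallucination_rate"] == 0.25
assert kpis["citation_coverage"] == 0.75
```

The hard part isn't the math; it's sustaining the human review sampling that produces the flags, which is why COSO pairs these KPIs with monitoring activities rather than treating them as automated metrics.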

It is a framework, after all: COSO describes what effective governance looks like and leaves the implementation details to you.

The Accountability Chain Problem, Revisited

What the COSO guidance is really addressing, without explicitly pointing to the elephant in the room, is the accountability chain problem that AI agents create in SOX environments. When a bot approves a transaction, who signed off? The guidance's answer: you need to be able to trace that decision through documented governance.

The control isn't "the AI did it correctly." The control is "we can demonstrate that the AI operated within defined, approved parameters, exceptions required human judgment, and the governance over the AI itself was designed, documented, and tested." That's meaningfully different from how most organizations currently treat AI in their control environments, which is to say, as IT infrastructure rather than as control-executing agents.

COSO specifically calls out data transformation and integration as a priority area: a small mapping or enrichment error can silently corrupt large datasets, leading to cumulative downstream reporting or compliance failures. Silent errors at scale. That's the nightmare scenario for financial reporting integrity.

Your data, not your model, is what really matters here: if the inputs are silently corrupted, no amount of model governance saves the financial statements.

The Implementation Gap Is Already Here

Here's the uncomfortable reality: most organizations are nowhere close to meeting the standard COSO just articulated, and they were already supposed to be governing AI as part of their existing Section 404 obligations.

The gap isn't conceptual. CAEs generally understand that AI agents touching financial data need governance. The gap is structural. The same budget, integration, and skills constraints that prevent continuous monitoring adoption also prevent proper AI governance. You need people who understand both the financial control objective and the technical configuration well enough to connect them. That person is rare, expensive, and being recruited by every tech company with a GRC budget. Much of that talent pool has already been outsourced or displaced by automation, and new talent is not on the horizon.

The PCAOB's current leadership is focused on "back to basics." They haven't issued specific AI inspection guidance. But quality control standards will eventually catch up, and when they do, the companies that treated AI governance as a separate IT exercise rather than a Section 404 obligation will be behind.

The Practical Implication for CAEs

COSO just gave you ammunition and a roadmap. Use it.

Ammunition: you can now point to a formal, COSO-aligned framework when making the case to your CFO and audit committee that AI governance belongs in your SOX program, not siloed in IT risk. "COSO says so" is not a bad opening line in a budget conversation.

Roadmap: start with your highest-risk AI use cases; the ones where AI outputs directly influence financial statement accounts or disclosures. For each one, can you document what the AI is doing, who approved the parameters it operates under, how exceptions are routed and resolved, and how you'd know if the model drifted? That's the baseline the framework implies.

The COSO publication doesn't change the regulatory landscape overnight. The PCAOB won't start citing it in inspection findings next quarter. But the direction is clear. AI agents are control-executing entities in your financial reporting environment. Governing them the same way you govern human access and human approval workflows isn't optional; it's what Section 404 was always going to require once someone got around to saying it explicitly.

These are the opinions of the editors of Internal Audit Next and/or the writer who authored this article. Any use of this copyrighted material without permission of Internal Audit Next - including training for AI Models - is prohibited. Copyright 2026.
