The Bot Approved It

SOX

Internal Audit Next Editors

2/7/2026 · 2 min read

AI agents are making financial decisions. Can SOX adapt?

SOX Was Built for Humans

Section 302 requires your CEO and CFO to personally certify financial accuracy and the effectiveness of internal controls. Section 404 requires management to assess internal control over financial reporting (ICFR), with external auditors attesting for accelerated filers. Section 906 backs those certifications with criminal penalties, including prison.

All of SOX assumes a clear chain of human accountability. Human approves transaction. Human reviews exception. Human signs off. You can subpoena a human. You can ask a human why.

AI agents break this chain in ways the law didn't anticipate and regulators haven't resolved. When a bot executes a journal entry, changes vendor master data, or approves a $2.3 million payment because a rule set matched the invoice, who actually signed off? The engineer who wrote the rule? The VP who approved deployment? The CFO who certified ICFR effectiveness without fully understanding what "effective" means when half your controls are autonomous?

This isn't theoretical. Analysts report that machine identities and other non-human accounts now outnumber human users in many enterprise environments. If you're running SAP, Oracle, or Workday with any meaningful automation layer, you almost certainly have more bot accounts than human accounts touching financial data.

Shadow AI Is Already in Your Building

Line-of-business teams are plugging copilots, workflow bots, and connectors into ERP modules without centralized oversight. They reuse generic service accounts. Automation accumulates permissions over time because nobody runs a joiner-mover-leaver process for bots. And almost nobody applies the same access governance to non-human identities that they apply to people.

The classic SOX question used to be: who has access to what? The 2026 version is: which autonomous processes can do what, under which policies, and can you prove it end-to-end?

Most organizations cannot currently answer that question.
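To make the question concrete, here is a minimal sketch, in Python with entirely hypothetical field names, of what answering it might look like: a control that flags non-human identities with financial-system access but no accountable owner, no approved scope, or no recent access review. Real data would come from your IAM platform and ERP access exports; nothing here reflects any specific vendor's API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record for a non-human identity (service account, bot, agent).
@dataclass
class NonHumanIdentity:
    account_id: str
    owner: str | None          # accountable human, if anyone claimed it
    approved_scopes: set[str]  # what the bot is documented to do
    granted_scopes: set[str]   # what the bot can actually do
    last_access_review: date | None

def governance_exceptions(identity: NonHumanIdentity,
                          review_window_days: int = 365) -> list[str]:
    """Return audit exceptions for one bot account."""
    findings = []
    if identity.owner is None:
        findings.append("no accountable human owner")
    drift = identity.granted_scopes - identity.approved_scopes
    if drift:
        findings.append(f"permissions exceed approved scope: {sorted(drift)}")
    if (identity.last_access_review is None or
            date.today() - identity.last_access_review
            > timedelta(days=review_window_days)):
        findings.append("no access review within the review window")
    return findings

# Example: a bot that quietly accumulated vendor-master access nobody approved.
bot = NonHumanIdentity(
    account_id="svc-ap-matcher",
    owner=None,
    approved_scopes={"invoice.read", "po.match"},
    granted_scopes={"invoice.read", "po.match", "vendor_master.write"},
    last_access_review=None,
)
for finding in governance_exceptions(bot):
    print(f"{bot.account_id}: {finding}")
```

Running this same check across every non-human identity is, in effect, the joiner-mover-leaver process that bots currently skip.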

What Regulators Are Starting to Notice

The EU AI Act moves toward full enforcement for high-risk AI systems by August 2026. High-risk systems include those influencing financial decisions. The SEC's cybersecurity disclosure rules already require reporting material incidents within four business days of a materiality determination. A breach that compromises your AI agent's access to financial systems isn't just a cyber incident; it's a potential SOX deficiency, a material-weakness disclosure risk, and a C-suite certification liability.

The PCAOB hasn't issued specific AI guidance yet. The new leadership is in "back to basics" mode. But QC 1000 took effect December 15, 2025, and AI governance will eventually find its way into the quality management conversation whether leadership wants it there or not.

The Honest Question for Every CAE

Can you demonstrate that your AI agents operate within defined, approved parameters? That you have controls over the logic, not just the output? That exceptions requiring human judgment are actually getting human judgment? That your non-human identities are inventoried, scoped, and governed the same way human access is?
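One way to picture "controls over the logic, not just the output" is a hard guardrail in the agent's execution path. The sketch below, again in Python with invented names and an illustrative threshold, routes any payment above a materiality limit to a named human approver and writes an evidence record for every decision, escalated or not.

```python
import json
from datetime import datetime, timezone

MATERIALITY_LIMIT = 50_000  # illustrative; set by policy, not by the bot

def approve_payment(invoice_id: str, amount: float,
                    rule_version: str, human_approver: str | None) -> bool:
    """Decide a payment and emit an audit-evidence record either way."""
    if amount > MATERIALITY_LIMIT and human_approver is None:
        decision, reason = "escalated", "above materiality limit; human judgment required"
        approved = False
    else:
        decision, reason = "approved", "within approved parameters"
        approved = True

    # In practice this would go to tamper-evident storage; plain JSON here.
    print(json.dumps({
        "invoice_id": invoice_id,
        "amount": amount,
        "decision": decision,
        "reason": reason,
        "rule_version": rule_version,      # which logic made the call
        "human_approver": human_approver,  # who, if anyone, signed off
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }))
    return approved

approve_payment("INV-4471", 2_300_000, "ap-rules-v12", human_approver=None)
approve_payment("INV-4471", 2_300_000, "ap-rules-v12", human_approver="j.doe")
```

The point of the evidence record is that "the bot approved it" becomes a traceable statement: which rule version, which threshold, which human, at what time.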

If your answer is "mostly" or "we're working on it," you are not alone. But the gap between where most organizations are and where regulators are heading is getting more expensive to close the longer it stays open.

The bot may have approved it. The question is whether you can.

These are the opinions of the editors of Internal Audit Next and/or the writer who authored this article. Any use of this copyrighted material without permission of Internal Audit Next - including training for AI Models - is prohibited. Copyright 2026.
