Time for Real AI Governance


Company leaders, especially those in the GRC space, need to start driving real governance over AI. To wait any longer could be too late.
Good Fences Make Good Neighbors
In Robert Frost’s poem “Mending Wall,” the neighbor’s line, “Good fences make good neighbors,” implies that boundaries are what prevent conflict. Yet, at the risk of infuriating my college English professor by oversimplifying, the poem is really a reflection on an inherent contradiction: boundaries, while preventing conflict, also wall us off from others. This is a fitting analogy for where we find ourselves with AI. As AI tears through the fabric of our business culture and society as a whole, corporate leadership and GRC leaders need to step up and bring order to the chaos.
Current Events
On March 10, Business Insider reported that Amazon was introducing “tighter controls” after a string of substantial errors. The report detailed that, in a single day, 120,000 orders were lost and 1.6 million website errors occurred. Among the controls being instituted at Amazon: more human coders signing off on code before it goes into production.
In the same week, Sam Altman, the man future generations may equate with patient zero on the path to human destruction, announced: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” For the purposes of this discussion, let’s swap out “intelligence” for “processing capabilities to search large quantities of data and produce results with little to no effort required on the part of the user.”
In summary, we have what appears to be an over-reliance on an unproven technology that will be metered (and, potentially, with no limits on cost) and is involved in mission-critical aspects of a growing number of companies. Don’t believe me? I submit to you Jack Dorsey, CEO of Block, who announced on February 26 that Block would cut 40% of its workforce because of AI.
Why Should GRC Leaders Care?
In our world, risk, audit, and compliance leaders should be constantly assessing risk. And I, for one, see a great deal of it. Those of us in the trenches are seeing both the promise and the problems of AI. A coder recently told me that the code he had run through AI coding agents was bloated. When I pressed for details, he said his team ran a test: they wrote code for a project, then asked an AI coding agent to write code to the same parameters. The AI-generated code had ten times as many lines as the code his team wrote. Is there utility in using AI coding tools? Absolutely. But AI isn’t ready to do this by itself. At least, that was one coder’s opinion.
That’s where the good fences come in. AI governance, as far as I can tell, is today more about checking a box than about understanding and mitigating risk from a business continuity and cost standpoint. And that’s where GRC professionals have a card to play. GRC leaders should be leading the charge on all things governance. If you aren’t already having this conversation with your C-suite leaders, I suggest setting that meeting on the calendar immediately. Start with the risk. Ask the questions that any AI governance committee and/or framework should be prompting:
How are we really using AI and how does that impact our core business function (think Amazon orders lost), compliance, legal, cybersecurity, and staffing risks?
What do teams do with the AI generated work product, specifically?
How does our current governance framework on AI impact any one individual use-case of AI work product?
My experience with many of the AI governance frameworks in place today is that they are usually built around three key questions:
Do we have legal liability?
Does this present a security risk to our data?
Do our employees understand the implications of using AI?
Approvals and/or denials of AI use often focus on these three questions, not the earlier, risk-focused ones. The questions I hear most are, “Will we allow people to use a specific AI provider?” or, “Is it OK for us to use this technology provider that has AI built in?” I would suggest these aren’t the right questions. Imagine a marketing leader using AI to build out a whole product brand launch around what he or she thinks is a custom character, only to find out after launch that the character looks remarkably similar to a copyrighted character the AI tool in question “used” to learn. I feel confident in stating that we will see more and more cases just like that, and not just in marketing or visual AI outputs (did you see the AI fight scene between Tom Cruise and Brad Pitt?). And, like it or not, there is a good chance many employees are still using their own AI tools to help with their own career advancement. The risks seem really high, despite Mr. Altman’s irrational exuberance.
The central theme of Frost’s poem is actually the right way to think about this. There is inherent tension between maximizing the use of these tools and protecting the interests of the company. What enables us to go faster can also crash the car. Did anybody think of designing a seatbelt?
What Should GRC Leaders Do?
I see this as an opportunity for the GRC leader to drive value: to interrogate what’s currently happening and ask the hard questions. But where do you start?
Gain buy-in from your C-suite through the lens of risk. Meet with the leaders and point out the risks of using AI (Amazon gives you a good case study) and how it can impact your business or industry.
Create a plan to understand those risks and quantify them. If you are in software, how much code is being generated by AI? If you rely on AI for customer-facing content or interactions, how much of that is being utilized? Do we understand the cost of using AI today (Mr. Altman’s metered comment), and if that cost doubled, tripled, quadrupled … what would it mean to our bottom line? The time is coming quickly for AI companies to produce more revenue; they are hoping companies are addicted enough to pay any price.
Create tactical governance frameworks. This isn’t going to be handled by one corporate governance committee or team. Each org, team, and sub-team may require a governance framework unique to its function and/or its impact on revenue or costs. This will never be a one-size-fits-all proposition; make sure each use case is carefully examined.
Create continuous monitoring efforts. This will never be a set it and forget it proposition.
Explore distributing the risk models. While direct use of general AI tools (think Claude, ChatGPT, Gemini, etc.) probably represents the greatest risk, there are a number of technology companies (both start-up and established) harnessing the power of AI in their products to deliver its value while removing some of the risk to end users.
Wash, rinse, repeat … as they say.
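The cost-sensitivity question in the second step above can be sketched in a few lines of Python. Every figure here is a hypothetical placeholder for illustration, not data from any real company; swap in your own revenue, cost, and AI-spend numbers.

```python
# Hypothetical AI cost-sensitivity sketch: what happens to operating margin
# if today's metered AI spend doubles, triples, or quadruples.
# All dollar figures below are illustrative placeholders.

def margin_after_ai_cost(revenue, other_costs, ai_spend, multiplier):
    """Operating margin if current AI spend is scaled by `multiplier`."""
    total_costs = other_costs + ai_spend * multiplier
    return (revenue - total_costs) / revenue

revenue = 50_000_000      # annual revenue (hypothetical)
other_costs = 42_000_000  # non-AI operating costs (hypothetical)
ai_spend = 1_500_000      # current annual metered AI spend (hypothetical)

for m in (1, 2, 3, 4):    # today's price, doubled, tripled, quadrupled
    margin = margin_after_ai_cost(revenue, other_costs, ai_spend, m)
    print(f"{m}x AI cost -> operating margin {margin:.1%}")
```

Even a back-of-the-envelope model like this turns “what if the meter price quadruples?” from a hypothetical into a board-ready number, which is exactly the kind of quantified risk conversation GRC leaders should be driving.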
This is, most certainly, a fast-moving target. But I see it as an opportunity for GRC and all corporate leaders to drive real value for their companies while mitigating risks that could have catastrophic impacts on business success. Yes, good fences do isolate us from our neighbors. But they can also protect us from the threats we don’t know about. So go build the fence, just not too high.
Michael Pellet is a leader in the technology space and a former Director at Lyft, Salesforce, and Workiva. Michael's experience includes Audit, Enterprise Risk Management, Customer Success, Operations, and Strategy. You can learn more about Michael on LinkedIn.
These are the opinions of the editors of Internal Audit Next and/or the writer who authored this article. Any use of this copyrighted material without permission of Internal Audit Next - including training for AI Models - is prohibited. Copyright 2026.