The Algorithm Keeps the Books:
On Trust, Technology, and the Quiet Revolution in Financial Controls
I remember a time, not long ago, when the phrase “internal controls” evoked the soft rustle of binders, the smell of dry-erase markers on audit room walls, and the ritual gravity with which someone—usually the CFO—would solemnly pronounce, “Controls are in place.” It was a phrase meant to instill comfort, but behind it lay a complicated pact. We didn’t really know whether the controls were in place. We only knew they hadn’t failed. Yet.
Back then, controls were static structures—segregation of duties, approval thresholds, reconciliations. They were human systems dependent on consistency and vigilance, two qualities that, in any high-growth company, are often in short supply. Having spent three decades as an operational CFO across industries, I have watched these systems buckle under the weight of scale and speed. It wasn’t that the people were failing. It was that the processes hadn’t evolved.
Then came the algorithms.
Artificial Intelligence entered the world of financial controls not with a bang but with a whisper. It began by suggesting. A flagged transaction here. An unexpected expense there. At first, it felt like a helpful assistant, the kind of intern you dream of—quiet, tireless, pattern-obsessed. But slowly, it became clear: this was not an intern. This was a new nervous system. One that learns.
At a fintech company I advised, we plugged an AI tool into our procurement process. No grand unveiling. Just a pilot, a test. Within a month, it began surfacing invoice anomalies—a supplier charging 22 percent above historical norms, a contract term that didn’t match what was negotiated. We hadn’t told it to look for these things. It had inferred them from behavior. I remember staring at the screen, realizing that this machine had just accomplished what three layers of human approval had missed.
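For readers who want to see the bones of such a check, here is a minimal sketch of the idea, not the vendor's actual model: flag any invoice priced well above the supplier's own trailing history. The column names, the toy data, and the 20 percent threshold are all illustrative assumptions; the real tool learned its thresholds rather than having them handed down.

```python
import pandas as pd

# Illustrative invoice data; a real feed would come from the ERP or AP system.
invoices = pd.DataFrame({
    "supplier":   ["Acme", "Acme", "Acme", "Acme", "Globex"],
    "unit_price": [100.0, 102.0, 98.0, 125.0, 50.0],
})

# Baseline: each supplier's trailing median unit price (excluding the current row).
invoices["baseline"] = (
    invoices.groupby("supplier")["unit_price"]
    .transform(lambda s: s.expanding().median().shift())
)

# Flag invoices priced more than 20% above that supplier's own history.
invoices["flagged"] = invoices["unit_price"] > 1.20 * invoices["baseline"]

print(invoices[invoices["flagged"]])
```

The shape of the control is what matters: a per-supplier baseline, a deviation score, and a flag raised before payment. The learning replaces the hard-coded 1.20.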
But the revelation wasn’t just in the precision. It was in the timing. This AI wasn’t catching mistakes after the fact. It was catching them in flight. And that changes everything.
Controls have always walked a line between prevention and permission. Too strict, and you throttle the business. Too loose, and you invite risk. But AI introduces a third dimension: anticipation. It doesn’t merely stop bad behavior. It warns you before the system veers off track.
I’ve seen AI tools flag employees booking hotels during non-travel periods. Not because they broke a rule, but because their behavior deviated. I’ve seen neural nets track the velocity of spending and trigger soft budget locks before we needed a financial triage. These systems aren’t rule followers. They’re risk interpreters.
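To make the spending-velocity idea concrete, here is a hedged sketch with invented field names and thresholds: project a cost center's run rate forward and apply a soft lock before the budget is actually breached.

```python
from dataclasses import dataclass

@dataclass
class BudgetStatus:
    cost_center: str
    spent_to_date: float
    budget: float
    days_elapsed: int
    days_in_period: int

def soft_lock_needed(status: BudgetStatus, cushion: float = 0.9) -> bool:
    """Return True if projected spend would exceed the budget cushion.

    The projection here is a simple linear run rate; a learned model would
    account for seasonality and known one-off spend instead.
    """
    daily_rate = status.spent_to_date / max(status.days_elapsed, 1)
    projected = daily_rate * status.days_in_period
    return projected > cushion * status.budget

# Example: 60% of the budget is gone 40% of the way through the quarter.
q = BudgetStatus("marketing", spent_to_date=600_000, budget=1_000_000,
                 days_elapsed=36, days_in_period=90)
print(soft_lock_needed(q))  # True: projected ~1.5M against a 1.0M budget
```

A production system would learn the curve rather than drawing a straight line, but the intervention, a soft pause rather than a hard stop, is the same.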
And that’s where the shift becomes cultural.
Traditional controls are grounded in compliance. They are backward-looking and comfortingly binary—pass or fail, within threshold or outside it. But AI doesn’t speak in binaries. It speaks in likelihoods. It speaks in shades. And for a generation of finance professionals trained to think in ledgers, this probabilistic whisper requires a new kind of listening.
There is, of course, a cost. When you begin trusting machines to interpret financial signals, you must also ask: Who audits the algorithm? How do we know it isn’t biased, overfitting, or simply wrong? In one company, an AI control flagged a high-risk transaction tied to our largest client. The model was right—but we overrode it for commercial reasons. That override sparked a deeper question: where do human instincts belong in a system that no longer waits for them?
This is not just a technical question. It is a philosophical one. Controls were once about reducing human error. AI now asks us to reconsider the definition of error itself.
So, where does this leave the CFO? No longer the gatekeeper of compliance, but the architect of intelligent trust. In boardrooms, I now speak less about checklists and more about learning systems. I report not just on exceptions caught, but on exceptions predicted. I show how our models evolve, how our blind spots shrink, and how our confidence—real confidence, not the performative kind—grows.
What’s remarkable is how much more strategic this makes financial controls. They are no longer the province of auditors and accountants alone. AI-infused controls touch operations, procurement, sales, and HR. They surface inefficiencies. They suggest better workflows. They nudge behavior. They don’t just say “no.” They ask, “Are you sure?”
In one Series D company, an AI-driven review of refund patterns revealed a subtle trend in customer behavior that had nothing to do with fraud—and everything to do with a flawed returns policy. A finance tool had just improved the customer experience.
That is the future we are walking into—not a world where humans are replaced, but one where machines whisper just loudly enough to make us better. Controls will still exist. But they will no longer be mute guardians. They will be adaptive, evolving partners in execution.
And in that quiet, under the surface of daily transactions, something profound happens. Trust is no longer a declaration. It is a signal. Verified, adaptive, and increasingly, not built by us—but learned.
When the Controls Start to Think
On Generative Intelligence, Agentic Systems, and the Future of Financial Trust
It begins, as revolutions often do, quietly. A line of code. A spreadsheet that updates itself. An email reply, not from a person, but from something trained to sound like one. At first, these automations feel like toys or time-savers. But slowly, unmistakably, they begin to assert something stranger. Something bordering on judgment. They begin to make choices. And when those choices concern the flow of money—the granting of access, the escalation of risk, the stopping of fraud—then the question is no longer what they can do. The question is what we are willing to let them decide.
Internal controls have always been about trust. Not the blind, emotional kind, but the institutional sort—cold, deliberate, and deeply procedural. They live in checklists and policies, approvals and reconciliations. As a CFO, I have spent decades designing such systems: who approves which transactions, what thresholds trigger review, which vendors are pre-approved. These rules, though tedious, were predictable. They made clear where the responsibility lay. If the control failed, we could trace the failure to a person, a policy, or a system lag. There was always a post-mortem. There was always accountability.
And now, enter Gen AI. And its more autonomous cousin: Agent AI.
Where generative models mimic, synthesize, and generate new content—from emails to summaries to code—agentic systems go a step further. They observe, decide, and act. A Gen AI tool might write a policy. An Agent AI system might enforce it, monitor compliance, and revise it—all without waiting for a human prompt.
In theory, this is thrilling. Imagine an internal control system that doesn’t wait for the quarter close to flag a risk. That doesn’t just follow rules but rewrites them when patterns shift. That watches vendor behavior in real time, adjusts spending limits dynamically, and deactivates accounts showing anomalous activity before finance ever gets the alert. Imagine a world where every transaction, every exception, every approval chain is continuously tested, validated, and tuned—not by human auditors, but by algorithms that never sleep.
I have seen the early signs. A generative model trained on historical vendor contracts flags a deviation in indemnity language. An AI agent monitoring procurement processes pauses a purchase order and opens a Slack thread to finance with the note: “This approval chain bypasses standard routing. Recommend escalation.” No one told it to do this. It inferred the risk based on prior outcomes. Then it acted.
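Stripped of its machinery, that intervention looks something like the sketch below. The routing list, the pause_purchase_order stand-in, and the webhook plumbing are all assumptions of mine for illustration; the real agent inferred the routing norm rather than reading it from a constant.

```python
import json
import urllib.request

STANDARD_ROUTE = ["requester", "budget_owner", "finance"]
SLACK_WEBHOOK = None  # e.g. a Slack incoming-webhook URL; left unset in this sketch

def notify_finance(message: str) -> None:
    """Post to Slack if a webhook is configured; otherwise just log locally."""
    if SLACK_WEBHOOK:
        # Slack incoming webhooks accept a JSON body with a "text" field.
        payload = json.dumps({"text": message}).encode("utf-8")
        req = urllib.request.Request(
            SLACK_WEBHOOK, data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    else:
        print(f"[finance alert] {message}")

def pause_purchase_order(po_id: str) -> None:
    # Stand-in for the procurement platform's API call.
    print(f"Paused purchase order {po_id}")

def review_purchase_order(po: dict) -> None:
    """Pause a PO whose approval chain skips standard routing and alert finance."""
    approvals = [step["role"] for step in po["approval_chain"]]
    if approvals != STANDARD_ROUTE:
        pause_purchase_order(po["id"])
        notify_finance(
            f"PO {po['id']}: approval chain {approvals} bypasses standard routing. "
            "Recommend escalation."
        )

review_purchase_order({
    "id": "PO-1042",
    "approval_chain": [{"role": "requester"}, {"role": "finance"}],
})
```

Notice what is and is not hard in the sketch: the action is trivially easy to code. Deciding whether the agent should take it is not.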
But with that action comes a shift—a profound one—in the architecture of control. Because these systems are not static. They learn. They experiment. They simulate. And increasingly, they decide.
So who, then, is accountable?
This is not a theoretical concern. In one organization I worked with, an agentic AI system deactivated a vendor account after detecting irregular invoice sequencing and mismatched tax records. It was right—fraud was involved. But in doing so, it disrupted a mission-critical supply chain. No human had been consulted. The AI didn’t just flag the issue. It resolved it. By the time the finance team got involved, the damage was done—and the CFO had to explain why no one made the call.
This is the paradox of intelligent controls: the more precise they become, the more human they feel. And the more human they feel, the more we assume they understand context—the messy, gray, risk-adjusted, trade-off-laden context that defines real-world decisions.
But they don’t. Not yet.
Generative AI doesn’t “understand” the ethics of shutting down a vendor during a supply crunch. It doesn’t weigh shareholder reaction. It doesn’t read the room in a board meeting. And yet, we are quickly arriving at a point where it might act as if it does.
So the challenge ahead is not just technological. It is architectural. It is philosophical. It is a question of governance. Who audits the agents? Who teaches the AI what matters to the enterprise, beyond pattern fidelity? And when something goes wrong—as it inevitably will—who takes the fall? The coder? The model? The CFO?
I believe there is a path forward. But it requires humility. And new rituals.
The CFO’s role will evolve—from designing rules to curating signals. From approving thresholds to setting guardrails. From asking “What went wrong?” to asking “What did the agent learn?” Controls will no longer be engineered. They will be trained. And the vocabulary of internal audit will expand—from policies and tests to prompts, weights, and model drift.
This is not dystopian. It is developmental. Because at its best, this new breed of intelligent control doesn’t just protect the enterprise. It teaches it. It surfaces biases. It identifies systemic inefficiencies. It reveals where human judgment, once thought infallible, quietly falters.
In that sense, perhaps these systems do not erode trust. Perhaps they distribute it. They shift it from individuals to architectures. From intuition to inference. From afterthought to forethought.
But they also demand a new kind of courage. The courage to oversee a system that not only obeys, but questions.
And in the quiet spaces of finance—between the forecast and the ledger, the approval and the payment—those questions may be the most valuable control of all.
The Guardian Learns to Think
On the Promises and Perils of AI in Financial Controls
There’s an odd sort of poetry in a spreadsheet. A rhythm, a rigor. Columns align, cells obey, totals cascade downward with quiet finality. For decades, the financial control system—perhaps the most unsung infrastructure in modern capitalism—was built on this orderliness. The logic was linear. Rules were clear. And the humans who enforced them knew their role: protect the enterprise from itself.
But now the spreadsheet whispers. It suggests, predicts, even decides. The controls, once silent sentries, have begun to think.
I have been a CFO for over thirty years. In that time, I’ve watched control systems evolve from manual ledger checks to ERP-integrated workflows, and now—inevitably—into the domain of artificial intelligence. What’s striking is not just how fast this shift has arrived, but how quietly it is embedding itself into the financial DNA of our companies. Most CFOs don’t talk about AI in board meetings the way they talk about cash flow or burn rate. Yet, piece by piece, these learning machines are being granted custody over our most fundamental questions: Who can approve? When should we block? What counts as a red flag?
The appeal is obvious. AI in controls is like hiring a team of analysts who never sleep, never forget, and never need to be reminded of policy nuances. They detect anomalies in real time. They flag risky transactions. They learn what “normal” looks like and raise an eyebrow—figuratively, for now—when something deviates. In a world where finance must operate at speed, scale, and precision, who wouldn’t want that kind of partner?
But like any power that arrives cloaked in efficiency, AI’s presence in controls carries both promise and peril.
Let us begin with the promises.
The first is speed. Traditional controls are bottlenecks. They rely on manual reviews, periodic audits, and exception queues. AI collapses that latency. Instead of waiting for the quarter close to identify fraud, an algorithm can detect unusual vendor billing patterns within hours. Instead of relying on sample testing, AI can scan entire populations of transactions, flagging edge cases with uncanny accuracy.
The second is contextual intelligence. Unlike rule-based systems that look for specific violations—say, expenses above a certain threshold—AI models learn from behavior. They can see that a $450 charge at a hotel is perfectly normal for one employee, and suspicious for another. They adjust their sensitivity based on patterns, not hard-coded rules. This makes controls not just stricter, but smarter.
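A toy version of that per-person baseline, assuming nothing more than a list of past charges per employee, can be written in a few lines; everything here, names and numbers included, is invented for illustration.

```python
import statistics

def expense_anomaly_score(amount: float, history: list[float]) -> float:
    """Standard deviations between this charge and the employee's own past charges."""
    if len(history) < 2:
        return 0.0  # not enough history to judge
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) / stdev if stdev else 0.0

road_warrior = [420, 455, 480, 440, 465]   # frequent traveler's hotel charges
desk_based   = [0, 0, 95, 0, 110]          # rarely travels

print(expense_anomaly_score(450, road_warrior))  # low score: routine
print(expense_anomaly_score(450, desk_based))    # high score: worth a look
```

The same $450 charge earns a very different score depending on whose history it sits in, which is precisely what a fixed threshold cannot do.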
The third is scalability. As companies expand across geographies, currencies, and compliance regimes, traditional control frameworks struggle to keep up. AI can monitor thousands of payment streams, vendor profiles, and policy documents simultaneously, adapting to local context without needing an army of controllers.
And yet—for every elegant edge AI sharpens, it introduces a new kind of risk.
The most obvious is opacity. Traditional controls are transparent by design. Approvals, thresholds, and escalation paths are visible. With AI, the reasoning becomes probabilistic. A transaction is flagged not because it violated a known rule, but because it deviated from a model’s trained expectation. For finance professionals used to deterministic logic, this is deeply unsettling. When the system says “this doesn’t look right,” it’s often hard to explain why.
That leads to the second concern: accountability. When a human controller misses a red flag, we know who is responsible. But when an AI-driven control misfires—approving something it should have blocked, or blocking something strategic—who takes the blame? The model’s designer? The user? The CFO? This ambiguity becomes especially thorny in high-stakes environments like SOX compliance, tax audits, or material misstatements in financial reporting.
A third challenge is bias. AI learns from historical data, and historical data often contains the biases of the systems that created it. If a company’s procurement controls have historically treated new vendors with suspicion and large incumbents with deference, an AI trained on that data may replicate that bias—subtly privileging the past over innovation, or entrenching systemic blind spots.
Then there is the matter of overreach. Agentic AI—models that not only detect but act—introduces a new governance dilemma. In one company I worked with, an AI system automatically paused a large vendor payment after detecting an unrecognized invoice format. The issue? The vendor was critical to product delivery, and the delay set back shipments by a week. The model was technically correct. The override was human. But the damage was already done.
What all of this points to is a broader truth: AI in controls doesn’t just automate judgment. It redefines who holds it.
And so we find ourselves in a transitional moment. The control system, once designed for human visibility and accountability, is being rewritten for speed, scale, and prediction. The financial guardian is evolving from rule-follower to probabilistic analyst to autonomous actor.
The challenge for CFOs is not to resist this evolution—but to govern it.
That means treating AI as a tiered partner. Use it to monitor, but not to decide—at least not without human oversight. Establish model explainability protocols. Review false positives and false negatives not just for technical tuning, but for ethical risk. Train teams to interpret machine-generated signals alongside traditional metrics.
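In practice, “treating AI as a tiered partner” can be as unglamorous as a standing monthly review of where the model and the humans disagreed. The sketch below is generic and assumes only a log of flags and human dispositions; it is not tied to any particular tool.

```python
from collections import Counter

# Each record: did the model flag it, and did human review confirm a real issue?
# These rows are invented for illustration.
decisions = [
    {"model_flagged": True,  "human_confirmed": True},
    {"model_flagged": True,  "human_confirmed": False},   # false positive
    {"model_flagged": False, "human_confirmed": True},    # false negative
    {"model_flagged": False, "human_confirmed": False},
]

def review_summary(rows):
    """Tally agreements and disagreements for the monthly control review."""
    tally = Counter()
    for r in rows:
        if r["model_flagged"] and r["human_confirmed"]:
            tally["true_positive"] += 1
        elif r["model_flagged"]:
            tally["false_positive"] += 1
        elif r["human_confirmed"]:
            tally["false_negative"] += 1
        else:
            tally["true_negative"] += 1
    return dict(tally)

print(review_summary(decisions))
```

False positives show where the model will erode trust through noise; false negatives show where it will erode trust through silence. Both belong in the control owner's review, not only the data science team's.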
Above all, remember that the greatest strength of a control system is not its intelligence, but its trustworthiness. Trust is not built through automation. It is built through understanding, transparency, and accountability.
In that spirit, AI is not the enemy of controls. But it is a force that demands we reimagine them. We must learn to partner with machines that learn, to teach systems that teach themselves, and to retain our judgment even as we build systems designed to mimic it.
Because in the end, it’s not about whether the machine gets it right.
It’s about whether we know what to do when it doesn’t.
