The EU AI Act comes into full enforcement on August 2, 2026. That's five months from now. If you're a finance leader using AI in any capacity, or if your team is using it without your knowledge (more on that shortly), this deadline matters to you.

I'm not going to give you a legal treatise. There are plenty of those, and most of them are written by people who've never had to implement a controls framework under time pressure. What I want to do is translate this regulation into practical terms: what it actually requires, which finance use cases are affected, and what a proportionate response looks like for a mid-market business that doesn't have a dedicated compliance department.

What the Act actually requires

The EU AI Act takes a risk-based approach. Not all AI is treated equally. The regulation classifies AI systems into four tiers: unacceptable risk (banned), high-risk (heavy regulation), limited risk (transparency obligations), and minimal risk (essentially unregulated).

For finance, the high-risk category is where you need to focus. An AI system is classified as high-risk if it makes or materially influences decisions that affect people's access to financial services or their financial standing. For finance teams, the use cases that matter most:

Credit scoring and creditworthiness assessment. If you're using AI to evaluate whether someone gets credit, how much, or at what rate, that's high-risk. This includes automated lending decisions, credit limit adjustments, and risk scoring models.

Fraud detection systems. These are a closer call: the Act carves AI used for detecting financial fraud out of the high-risk creditworthiness category. But the reasoning for treating them with the same care is straightforward: a false positive can freeze someone's account, block a legitimate transaction, or trigger an investigation. Those are consequential decisions, so govern these systems to the high-risk standard.

Anti-money laundering (AML) screening. AML tools aren't named in the Act's high-risk list either, but automated systems that score AML risk, flag suspicious activity, or determine enhanced due diligence requirements are making exactly the kind of consequential calls the Act is aimed at. If your AML tool uses machine learning to score transaction patterns, govern it to the same standard.

Financial forecasting that feeds material decisions. This one is less explicitly defined but increasingly being interpreted broadly. If AI-generated forecasts directly influence investment decisions, covenant assessments, or board-level strategy, the argument that these systems carry high-risk characteristics is gaining traction among regulators.

For each high-risk system, the Act requires: a risk management system, data governance and quality standards, technical documentation, record-keeping and logging, transparency to users, human oversight mechanisms, and accuracy and robustness standards.
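To make that list more tangible, here is a minimal sketch of the evidence a finance team could keep for each high-risk system. The obligation areas mirror the list above; the record format and the example values are illustrative assumptions, not anything the Act prescribes.

```python
# Illustrative only: one evidence record per high-risk AI system.
# The keys mirror the Act's obligation areas; the values are an assumed
# format, not a prescribed template.
high_risk_evidence = {
    "system": "Credit risk scoring model",
    "risk_management": "Documented risk assessment, reviewed quarterly",
    "data_governance": "Data sources, lineage and quality checks described",
    "technical_documentation": "Purpose, inputs, logic and limitations on file",
    "record_keeping": "Decision logs retained with timestamps",
    "transparency": "Users are told when an output is AI-generated",
    "human_oversight": "Named reviewer can override any automated decision",
    "accuracy_and_robustness": "Back-testing results and error rates recorded",
}
```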

That sounds like a lot. It is a lot if you're starting from nothing. But if you already have a decent controls environment, you're closer than you think.

The SOX connection most people miss

If your business has any SOX compliance obligations, or if you've built internal controls to a SOX-like standard (as many PE- and VC-backed companies do in preparation for exit), you've already done a significant portion of the groundwork.

SOX fundamentally requires human-in-the-loop for material financial processes. You need to demonstrate that a human with appropriate authority has reviewed and approved key outputs. The EU AI Act requires something very similar for high-risk AI: meaningful human oversight, the ability to intervene, and the capacity to override automated decisions.

The controls frameworks overlap substantially. Documentation requirements, audit trails, risk assessments, segregation of duties, exception handling. If you have a mature SOX-like environment, adapting it to cover AI governance is an extension of what you already do, not a new discipline entirely.

For PE- and VC-backed companies, frameworks that were designed with exit in mind and built properly, with genuine rigour rather than tick-box compliance, translate almost directly into AI governance frameworks. The gap is usually in AI-specific technical documentation and the formal risk classification process. The underlying governance muscle is already there.

The shadow AI problem

Here's the statistic that should concern every CFO: 78% of finance teams are using AI tools that haven't been approved by their organisation. And 75% have shared sensitive financial data with AI systems. Not maybe. Not occasionally. Three quarters of your team.

I've seen finance functions where the management accountant was using ChatGPT to draft variance commentary, the FP&A analyst had built a forecasting model using an AI tool nobody in leadership knew about, and the accounts payable team was using an unapproved browser extension to categorise invoices.

None of these people were being reckless. They were being resourceful. The tools made them faster and they adopted them. But from a regulatory perspective, each of those is a potential uncontrolled AI system processing financial data with no governance, no documentation, no risk assessment, and no audit trail.

Under the EU AI Act, the organisation is responsible. Not the individual who downloaded the tool. If an unapproved AI system is making or influencing financial decisions using your data, you need to know about it, assess its risk, and either bring it into your governance framework or stop it.

The CFO sees the approved tools on the IT roadmap. The controllers see what people are actually using day to day.

The perception gap makes this worse. Research shows 51% of CFOs believe they've achieved full AI adoption. Only 19% of controllers agree. That gap isn't about ambition. It's about visibility. If you're a CFO who thinks you have full visibility of AI use in your finance function, you almost certainly don't.

What a proportionate response looks like

I want to be direct about something: the EU AI Act compliance framework designed for a global bank is not what a PE- or VC-backed mid-market company needs. If you try to implement that, you'll spend a fortune, exhaust your team, and probably not finish before the deadline anyway. That is exactly the pattern behind the statistic that 42% of companies have abandoned the majority of their AI initiatives, up from 17% in 2024. Overscoping is the killer.

A proportionate response for a mid-market business looks like this:

Right-size the governance. You don't need a 50-page AI policy. You need a clear, practical framework that covers: what AI systems are in use, who owns them, how they're classified by risk, what controls exist around them, and who reviews them periodically. In practice that's a 10-to-15-page governance framework with supporting templates. It should be something your finance team can actually maintain, not a document that lives in a drawer; a sketch of what the underlying register could capture follows after these three points.

Focus on what's high-risk. Not every use of AI in your finance function will be high-risk under the Act. An AI tool that helps format Excel charts is minimal risk. An AI system that generates credit assessments is high-risk. Put your effort where the regulatory exposure actually sits. For most mid-market finance functions, that's probably two to five systems, not fifty.

Build on what you have. If you have an existing controls framework, internal audit function, or risk register, extend those to cover AI. Don't build a parallel governance structure. The Act doesn't require a separate AI governance team. It requires that AI risks are managed within your existing risk management approach.
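To make the register in the first point concrete, here is a minimal sketch of what a single entry could capture. The field names and the example are assumptions for illustration, not a template the Act mandates.

```python
from dataclasses import dataclass

# Illustrative sketch of one entry in an AI system register.
# Field names and the example values are assumptions, not a mandated format.
@dataclass
class AISystemEntry:
    name: str            # what the system is
    owner: str           # who is accountable for it
    purpose: str         # what it is used for in the finance function
    risk_tier: str       # "high", "limited", or "minimal"
    controls: list[str]  # the controls that sit around it
    review_cycle: str    # how often it gets re-reviewed

example = AISystemEntry(
    name="Invoice categorisation plug-in",
    owner="Accounts payable manager",
    purpose="Suggests GL codes for supplier invoices",
    risk_tier="minimal",
    controls=["Human approves every posting", "No customer data submitted"],
    review_cycle="Annual",
)
```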

Why PE- and VC-backed companies specifically need to care

Compliance risk gets priced into exits. Full stop.

When a buyer's due diligence team discovers that your finance function has been running AI systems without proper governance, that's not a minor finding. It's a material compliance risk that affects the valuation. The question isn't whether the buyer will find out. With August 2026 as a hard deadline, AI governance will be a standard due diligence question within twelve months. If it isn't already in your data room, it will be asked for.

There are PE and VC exits where compliance gaps in far less scrutinised areas led to price chips or extended warranty provisions. AI governance is going to be high on the diligence agenda precisely because it's new, it's high-profile, and the penalties for non-compliance are significant: for the most serious breaches, up to 35 million euros or 7% of global turnover, whichever is higher.

The flip side is that having a clean AI governance framework actually strengthens your exit story. It demonstrates that the finance function is mature, well-controlled, and forward-looking. It signals that the management team understands regulatory risk and manages it proactively. That's exactly what buyers want to see.

Operating partners are starting to ask about this. Not all of them yet, but the ones with legal and compliance backgrounds are already adding AI governance to their portfolio review checklists. Getting ahead of that question is significantly easier than scrambling to answer it during a compressed exit process.

Three things to do this quarter

You have five months. That's not a lot, but it's enough to get the essentials in place if you start now and stay focused. Here's what I'd prioritise:

1. Conduct an AI use case inventory. Find out what's actually being used. Every AI tool, every automation, every browser plugin, every ChatGPT conversation that touches financial data. Survey your team. Be explicit that this isn't a witch hunt. You need honesty, not compliance theatre. Ask: what AI tools do you use, what data do you put into them, and what decisions do their outputs inform? The answers will surprise you. They always do.

2. Classify each use case by risk. Take your inventory and map each AI system against the Act's risk categories. High-risk systems need the full governance treatment. Limited-risk systems need transparency measures. Minimal-risk systems need to be documented but don't require heavy controls. For most mid-market finance functions, you'll find one or two genuinely high-risk systems, a handful of limited-risk tools, and a lot of minimal-risk usage that just needs to be visible. A short sketch of this triage follows after step 3.

3. Build your governance framework. For high-risk systems: document the purpose, the data inputs, the decision logic (as far as you can), the human oversight mechanisms, and the controls around accuracy and bias. For everything else: establish a simple register, an approval process for new AI tools, and a periodic review cycle. Assign ownership. Make it someone's job to keep this current.
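To tie the three steps together, here is a minimal sketch of the triage in step 2 applied to an inventory. The three tiers and the treatment attached to each follow the description above; the inventory entries and field names are invented for the example.

```python
# Illustrative triage of an AI inventory into the treatment each tier needs.
# The tiers and treatments follow the steps described above; the example
# systems are invented.
TREATMENT = {
    "high": "Full governance: documentation, human oversight, logging, review",
    "limited": "Transparency measures and a named owner",
    "minimal": "Record it in the register; no heavy controls",
}

inventory = [
    {"system": "Credit assessment model", "risk_tier": "high"},
    {"system": "Customer-facing finance chatbot", "risk_tier": "limited"},
    {"system": "Excel chart formatting assistant", "risk_tier": "minimal"},
]

for entry in inventory:
    print(f'{entry["system"]}: {TREATMENT[entry["risk_tier"]]}')
```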

This won't get you to full compliance by August 2, but it will get you to a defensible position. You'll know what AI is in use, you'll have assessed the risks, and you'll have a framework in place. That's a credible compliance posture, and it's miles ahead of most mid-market companies right now.

Governance is not the enemy of speed

A common objection runs: "If we add governance around AI, we'll slow everything down and lose our competitive advantage."

I understand the instinct. But it's wrong.

Governance doesn't slow you down. Chaos slows you down. The company that deployed five different AI tools across finance with no coordination, no data standards, and no oversight? They're the ones who are slow, because every tool produces slightly different numbers, nobody trusts any of them, and the CFO spends half her time reconciling AI outputs that should agree but don't.

The company with a clear governance framework knows exactly which AI systems are running, what they do, and how to verify their output. When something goes wrong, and it will, they can identify and fix the problem quickly. When a new use case emerges, they can assess it, approve it, and deploy it through a known process rather than having someone download a tool and hope for the best.

Governance is what lets you move fast with confidence. It's the difference between driving fast on a road you know and driving fast in fog. Both are fast. Only one gets you where you're going.

The companies that will get the most value from AI in finance over the next five years won't be the ones that adopted earliest. They'll be the ones that adopted smartly, with proper oversight, clear accountability, and the ability to prove to regulators, auditors, and buyers that they're in control.

That's what the EU AI Act is really asking for. Not perfection. Control. And for any well-run finance function, that shouldn't be a foreign concept. It's what we do.


If the EU AI Act compliance timeline is relevant to your finance function, the AI Readiness Assessment covers governance frameworks, risk classification, and the data and process foundations that compliance depends on — as well as a practical roadmap for getting there before August 2026.

Did this raise questions about your finance function?

I offer a complimentary 30-minute diagnostic conversation. No pitch — just an honest assessment of where you stand.

Get in Touch