Finance AI Strategy July 2026 · 11 min read

AI in financial management: the real opportunity (beyond the hype)

Finance teams are deploying AI in the wrong order — starting with the visible, glamorous use cases and skipping the unglamorous ones where the returns actually are. Here is how to fix the sequencing.



There is a particular kind of meeting that CFOs are having in 2026 that did not exist three years ago. A vendor presents an AI demo. The outputs look impressive. The CFO asks about ROI. The answer involves a lot of words like "transformative" and "efficiency gains" and very few numbers. The CFO approves a pilot. The pilot runs for a quarter. The results are underwhelming. The initiative is quietly deprioritised. Something else comes along.

This pattern is not a sign that AI does not work in finance. It is a sign that finance teams are selecting the wrong use cases, in the wrong order, for the wrong reasons.

The finance function sits in an unusual position. It has more data than almost any other business function — every transaction, every forecast, every variance, every reconciliation. It has workflows that are high-frequency, rules-bound, and miserable to do manually. It has decisions with material consequences that benefit enormously from faster, more accurate analysis. The conditions for AI to deliver outsized value are almost perfect.

And yet finance consistently shows both the lowest AI ROI and the lowest AI adoption rate of any business function. The gap between what is possible and what is being achieved is not a technology problem. It is a sequencing problem, and it is one that is entirely solvable.

Why finance lags — and why that gap is closing

  • 56% of finance leaders now use AI — doubled since 2023, but still the lowest of any business function
  • 10% average AI ROI in finance, against a target of 20%+ that most organisations are pursuing
  • 68% of CFOs say they've been slow to adopt because they don't know where to start

That last figure is the most important one. Not the adoption rate, not the ROI gap — but the fact that two thirds of finance leaders have been held back by sequencing uncertainty rather than scepticism about the technology itself. The question is not whether AI belongs in finance. It is which problems to apply it to, in what order, with what realistic expectations about when returns will materialise.

Finance is risk-averse by training and by necessity. CFOs think in terms of ROI and downside risk, not in terms of possibility and potential. This is not a weakness — it is exactly the disposition that produces good financial governance. But it creates a specific failure mode when applied to AI adoption: the bar for a new initiative is set so high that only the most convincingly marketed use cases get approved, which tends to mean the most visible ones rather than the most valuable ones.

The CFOs who are getting this right are not the ones who bet heavily on a single transformative AI initiative. They are the ones who started with a narrow, high-frequency workflow, measured what changed, and expanded from there. The sequencing question is their secret advantage — and it is not complicated once you know what to look for.

The hype versus the reality

Before getting to what works, it is worth naming what does not — or rather, what works less well than the marketing suggests.

Hype: AI will automate the finance function end-to-end
The "autonomous finance" narrative — AI agents closing the books, preparing board decks, making investment decisions with minimal human involvement. Good for vendor marketing. Not what is actually happening in 2026.

Reality: AI eliminates the mechanical layer; what remains is judgment
The finance teams generating the best results are not replacing their people — they are removing the data-wrangling, reconciliation, and report-assembly work that consumed most of their time, so that people can focus on interpretation, decision-making, and strategic communication. OpenAI's internal finance team operates at roughly 22% of the headcount of comparable companies. This is not because AI replaced finance people — it is because AI absorbed the mechanical work that would have required several times more of them.

Hype: Start with the most visible AI use case
AI-generated board presentations, chatbots for employee finance queries, generative summaries of analyst reports. These are easy to demo, easy to explain to leadership, and almost universally selected as first pilots.

Reality: Start with the highest-frequency, data-richest workflows
Accounts payable, reconciliation, fraud detection, FP&A scenario modelling. These are harder to demo impressively in a thirty-minute meeting. They are also where the returns are an order of magnitude larger, because AI is compressing work that happens every day at high volume rather than work that happens once a month and takes an afternoon.

Hype: Get your data right before you start
The most common reason finance teams delay AI indefinitely. Data quality is cited as a prerequisite; the prerequisite is never fully met; the initiative never begins. In practice, this is waiting for perfect before starting good.

Reality: Start with what you have; improve data quality in parallel
Fraud detection and AP automation work on messy, real-world transaction data. FP&A scenario modelling can start from existing spreadsheet models. Data quality improvement is a parallel workstream, not a prerequisite. The teams that treat it as a prerequisite have typically not started AI deployment yet.

Hype: Expect ROI in the first quarter
65% of CFOs felt pressure for quick returns on technology investments in 2023. Finance AI ROI typically accrues over 12 to 18 months. Mismatched expectations — set during approval, not corrected before results come in — kill more good AI initiatives than technical failures do.

Reality: Measure leading indicators early; accept a longer ROI horizon
Forecast accuracy, error rates, time-to-close. These move within weeks. Revenue and cost-line impact move within quarters to years. Frame the ROI conversation correctly with the board before the initiative begins, not after the first quarter's results disappoint — by which point the initiative is already at risk of cancellation.

What is actually working: four use cases with real returns

What follows is not a comprehensive inventory of AI applications in finance — it is a prioritised list of where the most reliable returns are, based on what is actually being deployed and measured in 2026. Start here.

Use case 1 — FP&A and scenario modelling

Up to 40% faster forecasting

Financial planning and analysis is the use case with the highest strategic value and the clearest case for AI involvement. The reason is structural: FP&A is almost entirely a data assembly and manipulation problem dressed up as a strategic one. Analysts spend the majority of their time collecting data from disparate systems, cleaning it, reformatting it, and running calculations that a model can perform in seconds. The strategic thinking — which assumptions to challenge, which scenarios to stress-test, what the numbers actually mean — is a small fraction of the total time and the part that actually requires human judgment.

AI compresses the data assembly layer dramatically. Scenario runs that once took days now take hours. Reforecasting that happened quarterly can happen continuously. The constraint shifts from "how long does it take to build the model" to "how quickly can we interpret what the model is telling us" — which is a much better problem to have.

PwC analysis shows AI-assisted financial planning can improve forecast accuracy and speed by up to 40%. The underlying mechanism is straightforward: AI handles data collection and cleansing, processes real-time inputs rather than month-end snapshots, and can run hundreds of scenario variants simultaneously rather than the handful that analysts can build manually. The output of the FP&A process improves not because the analyst's judgment improved, but because they are now applying that judgment to better data, faster.
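The mechanism can be made concrete with a toy model. The sketch below is purely illustrative and assumes nothing about any particular FP&A tool: a simple recurring-revenue projection swept across a grid of growth and churn assumptions — the brute-force scenario coverage described above. All figures and parameter names are invented for illustration.

```python
from itertools import product

def project_revenue(base, growth, churn, months=12):
    """Project monthly recurring revenue under one assumption set."""
    revenue = base
    path = []
    for _ in range(months):
        revenue = revenue * (1 + growth) * (1 - churn)
        path.append(round(revenue, 2))
    return path

# Sweep a grid of assumptions -- the kind of exhaustive scenario
# coverage that is tedious to build by hand but trivial to compute.
growth_rates = [0.00, 0.01, 0.02, 0.03]
churn_rates = [0.005, 0.010, 0.020]

scenarios = {
    (g, c): project_revenue(1_000_000, g, c)
    for g, c in product(growth_rates, churn_rates)
}

# Summarise: which assumption sets end the year below plan?
plan_floor = 1_000_000
at_risk = [combo for combo, path in scenarios.items() if path[-1] < plan_floor]
```

The interesting output is not any single projection but the `at_risk` set — which combinations of assumptions break the plan. That is the question an analyst's judgment is for; the grid sweep just makes it cheap to ask.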

What this requires: A reasonably consolidated data environment — not clean data, but data accessible from a single system or a small number of systems. Most mid-size organisations have this. The integration work is the main investment; the modelling capability either already exists in tools like Microsoft Copilot or can be added to existing FP&A platforms without rebuilding them.

Watch out for

AI-generated scenarios that look authoritative but embed assumptions the analyst did not set. The model will run any scenario you point it at. The judgment of which scenarios are worth running, and how to interpret the outputs in the context of actual business strategy, remains irreducibly human. Build explicit review steps into the workflow rather than assuming the numbers speak for themselves.

Use case 2 — Month-end close compression

From periodic to continuous

The monthly close is the most predictably painful process in corporate finance. It is high-frequency, heavily rule-bound, and almost entirely mechanical — the perfect conditions for AI to add value without requiring anyone to change how they think about their job.

The traditional close cycle compresses weeks of work into a few days at the end of the month: account reconciliations, journal entry reviews, intercompany eliminations, variance checks, sign-offs. AI can run the reconciliation and anomaly-flagging elements continuously throughout the month rather than in a compressed end-of-period sprint. The result is that by the time the close period arrives, most of the work is already done — the remaining task is human review and sign-off rather than starting from scratch.
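In code terms, the core of continuous reconciliation is an incremental matching pass that can run on any day's transactions rather than only at period end. A minimal sketch — the field names `"ref"` and `"amount"` are illustrative, not any ERP's actual schema:

```python
from collections import defaultdict

def reconcile(ledger, bank):
    """Match ledger entries to bank lines on (reference, amount).

    Returns (matched, unmatched_ledger, unmatched_bank) so that only
    the exceptions -- not the whole population -- reach a human reviewer.
    """
    bank_index = defaultdict(list)
    for txn in bank:
        bank_index[(txn["ref"], txn["amount"])].append(txn)

    matched, unmatched_ledger = [], []
    for entry in ledger:
        key = (entry["ref"], entry["amount"])
        if bank_index[key]:
            matched.append((entry, bank_index[key].pop()))
        else:
            unmatched_ledger.append(entry)

    # Anything left in the bank index had no ledger counterpart.
    unmatched_bank = [t for txns in bank_index.values() for t in txns]
    return matched, unmatched_ledger, unmatched_bank
```

Run daily, a pass like this means exceptions surface the day they occur; by close week, the reviewer's queue holds only the residue.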

Spendesk is a concrete example: its CFO has described running AI-powered reconciliation continuously throughout the month, enabling what the team calls a "real-time close" rather than a month-end scramble. The finance team's capacity is no longer pinned to a predictable end-of-month crunch that crowds out everything else. The close still happens; it just does not consume the team's entire existence for five days every four weeks.

What this requires: Transaction data flowing into a centralised system with consistent categorisation. This is the most common data quality issue in mid-size organisations — not dirty data, but inconsistently categorised data that requires manual intervention to reconcile. AI can learn categorisation patterns, but needs enough historical volume to do so reliably. Most organisations with a year or more of transaction history have sufficient data.

Watch out for

Over-reliance on AI flagging for anomaly detection. The system will surface the anomalies it is trained to surface — not the anomalies that do not fit the pattern of previous anomalies. Build in periodic manual review of transactions that were not flagged, not just the ones that were. The absence of a flag is not the same as clean data.

Use case 3 — Fraud detection and anomaly identification

Highest adoption: 81% of finance AI deployments

Fraud detection is the most widely adopted AI use case in finance for a simple reason: the ROI case is easier to make than for almost any other application. A fraud that AI catches and a human would have missed has a clear, quantifiable value. A fraud that AI catches faster than a human would have is marginally easier to recover from. The downside of a false positive — flagging a legitimate transaction for human review — is manageable and known. The downside of a false negative — missing a genuine fraud — is potentially catastrophic.

The AI advantage in fraud detection is not that it is smarter than an experienced fraud analyst. It is that it is faster and more consistent at pattern-matching across very large transaction volumes. A human analyst reviewing thousands of daily transactions will exhibit fatigue, will develop heuristics that make some patterns invisible, and will necessarily sample rather than review everything. An AI model can review everything, every time, against a much larger set of patterns than any analyst has memorised.
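The consistency argument holds even for the crudest form of the technique. Below is a deliberately simple sketch using a z-score rule over transaction amounts; production systems use far richer features and models, and the threshold here is arbitrary, chosen only to make the toy example flag something:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from the mean.

    A crude stand-in for real anomaly models: it reviews every value,
    every time, with no fatigue and no sampling.
    """
    if len(amounts) < 2:
        return []  # not enough history to establish a pattern
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # perfectly uniform history: nothing deviates
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

payments = [100, 102, 98, 101, 99, 5000]
flag_anomalies(payments)  # flags the 5000 outlier
```

The point is not the statistics — it is that the function inspects every transaction identically, which is exactly the property a tired human sampling a queue cannot offer.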

The use case has expanded significantly with the rise of AI-enabled fraud attempts. As AI makes it easier to generate convincing synthetic invoices, deepfake voice authorisations, and fraudulent payment requests, the organisations with AI-based fraud detection are better equipped to identify the new patterns than those relying on rule-based detection systems that were designed for older fraud types.

What this requires: Historical transaction data with labelled examples of fraud — ideally, at least some examples of fraud that was eventually caught, so the model has positive training cases. For organisations without historical fraud examples, transfer learning from industry-wide fraud datasets is a reasonable starting point, though it will require tuning for the specific patterns in your transaction environment.

Watch out for

Treating AI fraud detection as a replacement for the audit rather than a complement to it. The model detects the patterns it has been trained on. Novel fraud types — and fraud actors are adaptive — will not match existing patterns and will not be flagged. Use AI to raise the floor on detection quality; do not use it as grounds to lower the ceiling on audit rigour.

Use case 4 — Accounts payable automation

The quick win most CFOs underestimate

Touchless invoice processing is mature technology with documented ROI and the lowest organisational resistance of any finance AI use case. It is also consistently underinvested in because it feels like infrastructure rather than strategy.

This is the right observation applied to the wrong conclusion. AP is infrastructure. That is precisely why automating it delivers reliable, compounding returns rather than the unpredictable returns of more strategic AI applications. The volume is high, the rules are clear, the inputs are standardised (invoices are invoices), and the savings scale directly with transaction volume. Every invoice that goes through touchlessly is time and cost that does not scale with the business as it grows.

Major retailers including Logitech, Superdry, and Primark have implemented AI-driven AP processes with documented outcomes: faster processing, fewer errors, and significant cost reductions per invoice processed. The technology handles document ingestion, data extraction, three-way matching, exception flagging, and payment scheduling. Human intervention is reserved for the genuinely ambiguous cases — which, in a well-tuned system, is a small minority of total invoice volume.
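Three-way matching, the heart of touchless processing, is simple to express. A minimal sketch — the field names and the 2% price tolerance are invented for illustration, not drawn from any vendor's system:

```python
def three_way_match(invoice, po, receipt, price_tolerance=0.02):
    """Check an invoice against its purchase order and goods receipt.

    Returns a list of exceptions; an empty list means the invoice can
    flow through to payment scheduling without a human touch.
    """
    exceptions = []
    if invoice["po_number"] != po["number"]:
        exceptions.append("PO number mismatch")
    if invoice["quantity"] > receipt["quantity_received"]:
        exceptions.append("billed quantity exceeds goods received")
    agreed = po["unit_price"]
    if abs(invoice["unit_price"] - agreed) > agreed * price_tolerance:
        exceptions.append("unit price outside tolerance")
    return exceptions
```

What AI adds on top of a rule like this is the document ingestion and extraction that populates the fields in the first place; the match itself stays deterministic and auditable, which is what makes the touchless rate safe to push up.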

What this requires: A reasonably standardised supplier invoice format, or the willingness to work with major suppliers to standardise it. EDI integration is ideal but not required — modern AP automation handles PDFs, emails, and even scanned paper documents. The integration with the existing ERP is the main technical work; most modern AP automation tools have pre-built connectors for SAP, Oracle, and Microsoft Dynamics.

Watch out for

Automating a broken process. If your AP workflow has structural problems — suppliers submitting invoices without POs, frequent disputes, inadequate approval controls — AI will process the broken invoices faster. Fix the process design before or alongside the automation, not after.

The CFOs getting the best results from AI are not the ones who bet on a single transformative initiative. They are the ones who started narrow, measured what changed, and expanded from there.

What finance teams get wrong when they deploy AI

Four failure modes appear consistently — not as cautionary tales from other industries, but as patterns playing out in finance teams right now.

  • Failure 1
    Starting with the visible use case rather than the valuable one. Board presentations and executive dashboards are easy to demo and easy to get approved because they are easy to understand. They are also the use cases with the shallowest returns — they affect low-frequency, high-variability work rather than the high-frequency, rule-bound workflows where AI compounds. The organisations that started with AP automation or FP&A scenario modelling in 2023 are now expanding AI into more complex applications. The organisations that started with AI-generated reports are largely still at the pilot stage.
  • Failure 2
    Setting ROI expectations at the board level before understanding the ROI timeline. Finance AI ROI accrues over 12 to 18 months for most use cases. If a CFO approves a pilot with an implicit expectation of meaningful return in 90 days, and the initiative is measured against that expectation, it will almost always disappoint — not because it failed, but because it was evaluated at the wrong time. The fix is simple: define success metrics and timelines before the pilot, not after results come in. Leading indicators — forecast accuracy, error rates, time-to-close — are available within weeks. Business impact takes longer, and saying so explicitly at approval time is not a weakness; it is accuracy.
  • Failure 3
    Treating data quality as a prerequisite rather than a parallel workstream. Every finance team believes its data is unusually bad. In almost every case, it is not unusually bad — it is normally messy, which is a different thing. Most high-value AI use cases in finance work on normally messy data. The organisations waiting for perfect data before starting AI deployment have, in most cases, been waiting for years. Start with the data you have. Use the AI deployment as the occasion to identify specific data quality issues that matter and address them. Reverse the dependency.
  • Failure 4
    Deploying without a human override mechanism. AI in finance produces outputs that inform decisions with material financial and regulatory consequences. Every production AI system in a finance context needs an explicit, easy path for a human to review, correct, or override the output before it affects a decision. This is not distrust of the technology — it is the accountability structure that makes it safe to deploy at scale, and the thing that will satisfy an auditor or regulator when they ask how AI-assisted decisions were made. Systems deployed without this mechanism will eventually surface an error in the worst possible moment.

The sequencing question: where to start if you haven't yet

For the 45% of finance teams still in limited pilot mode — and the 17% that have not yet deployed AI in any core workflow — the sequencing question is practical and answerable. Here is a framework that works.

Days 1–30
Audit what you already have
  • Check existing finance software for embedded AI capabilities. By end of 2026, the majority of enterprise software spend will be on products with built-in generative AI — SAP, Oracle, Microsoft Dynamics, and Workday all have active AI feature roadmaps. You may already be paying for capabilities you have not activated.
  • Identify the three highest-frequency manual workflows in the finance function — specifically, tasks someone does every day or every week that follow a consistent pattern.
  • Map where AI-enabled fraud is currently a risk, even if the answer is "we don't know" — that uncertainty is itself useful information about current detection gaps.
Days 30–90
Pick one workflow and run a real pilot
  • Choose the highest-frequency workflow from your audit that maps to one of the four use cases above. If in doubt, AP automation is the lowest-risk starting point with the most predictable returns.
  • Define success criteria before starting — at minimum, one leading indicator measurable within 30 days, and one lagging indicator measurable within 6 months.
  • Use real users and real data from day one. A pilot tested on curated inputs by motivated team members is not a pilot — it is a demo. The information you need comes from the edge cases, the exceptions, and the moments when the system produces something unexpected.
  • Assign a named internal owner who is responsible for the pilot's outcomes, not just its execution. Pilots without owners drift.
Days 90–180
Evaluate honestly and decide what comes next
  • Measure against the criteria you defined before starting, at the threshold you set. If results meet the criteria, prepare to scale. If they do not, diagnose why before deciding to continue, redesign, or stop — these are different problems requiring different responses.
  • Document what you learned about the failure modes and edge cases. This is the most valuable output of any pilot and the thing most organisations fail to capture before moving on.
  • Use the pilot results to have a specific conversation with the board about ROI timeline and what comes next. The pilot's evidence should update the original expectations, in either direction.
  • If the pilot succeeded, identify the second use case — ideally one that builds on the infrastructure already in place. AP automation and month-end close compression share data infrastructure; FP&A and scenario modelling share analytical infrastructure. Sequence for compounding returns, not for maximum variety.
Free download
The 30/90/180-day AI sequencing framework
A one-page version of this framework for finance leaders — shareable with your team and board.
Download free →

The governance piece nobody mentions

AI in finance is not just an efficiency question — it is a control question. Finance handles the most sensitive data in any organisation: compensation, forecasts, board materials, acquisition targets, regulatory filings. Every AI deployment in the finance function needs to answer three questions before it goes live.

Who reviews the output before it affects a decision? For routine, low-stakes outputs like processed invoices below a certain threshold, automated processing is appropriate. For outputs that inform forecasts, board reporting, or regulatory filings, explicit human review is required. The review step needs to be designed into the workflow, not added as an afterthought.
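That design can be as simple as an explicit routing rule at the boundary of the AI system. A hypothetical sketch — the categories and the 10,000 threshold are invented for illustration, not a recommendation:

```python
# Output kinds that always require a named human reviewer,
# regardless of size (illustrative categories).
ALWAYS_REVIEW = {"forecast", "board_report", "regulatory_filing"}

def route_for_review(output):
    """Route an AI output to auto-processing or human review.

    Encodes the policy as code so the review step is part of the
    workflow, not an afterthought -- and so it can be shown to an
    auditor as a documented control.
    """
    if output["kind"] in ALWAYS_REVIEW:
        return "human_review"
    if output["kind"] == "invoice" and output["amount"] < 10_000:
        return "auto_process"
    return "human_review"  # default to review for anything unclassified
```

The useful property is the default: anything the rule does not explicitly clear goes to a human, so new output types fail safe rather than slipping through.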

How do you explain an AI-assisted decision to an auditor? The bar is not that the auditor needs to understand the model — it is that there needs to be a documented process for how the model's output was validated before it influenced a decision. Most AI governance frameworks that were written for general corporate use need significant adaptation before they are adequate for finance-specific contexts. The governance framework question deserves its own dedicated attention, not a footnote in an AI implementation plan.

What is your shadow AI exposure? Finance employees using consumer AI tools — ChatGPT, Claude, Gemini — on sensitive financial data is a real and underacknowledged risk in most organisations. The data does not necessarily stay in the tool, the outputs may not be auditable, and the use may not be discoverable until something goes wrong. A clear policy on AI tool use in the finance function, combined with enterprise-grade alternatives that satisfy security and compliance requirements, is not optional — it is a control gap that regulators and auditors are increasingly aware of.

Working through AI governance for your finance function? Book a conversation with quick·ai →

The finance function is not uniquely difficult to transform with AI. It is uniquely positioned to benefit — because its workflows are data-rich, high-frequency, and rules-bound in ways that most other business functions are not. The teams that understand this, start with the right use cases, and manage the sequencing and governance carefully are already pulling ahead. The question is not whether AI belongs in financial management. It is whether your finance function will be leading the transformation or reacting to it.

quick · ai
Work with quick·ai on AI in finance
If you're a finance leader working out where to start — or why current initiatives aren't delivering — I work with finance teams on AI strategy and implementation. It starts with a conversation.
Book a free 30-minute call →