
Legacy, AI and the modernisation imperative: A conversation with Simon Hull

Simon Hull Apr 29, 2026

Most legacy modernisation programmes in financial services never reach the finish line. And the ones that do rarely arrive where they expected. Simon Hull, Head of Financial Services at CreateFuture, has spent years helping firms navigate that tension, and he thinks the arrival of AI has made getting this right more urgent than ever.

In this Q&A, he explains why legacy systems are both a firm's biggest liability and its most underappreciated strategic asset, what "thin-slice" transformation looks like in practice, and why the gap between what AI makes possible and what most firms can support is getting wider by the day.

 

TL;DR

  • Most legacy modernisation programmes fail because they're treated as big-bang deliveries rather than incremental journeys
  • UK financial services firms spend £3.3 billion a year just maintaining core banking infrastructure, yet the true cost of ownership is still being underestimated
  • The real strategic advantage isn't the legacy system, it's the decades of proprietary data sitting underneath it
  • "Thin-slice" transformation offers a more resilient path, delivering real value incrementally while progressively retiring the old stack
  • Only 7% of firms have scaled AI enterprise-wide. The path forward starts with different decisions today

Why do so many legacy modernisation programmes fail before they finish?

The starting point is usually that these systems are poorly understood. They're undocumented, built on old technology, and the people who originally built them have often long since left the business. So just getting a handle on the business logic and functionality that's accumulated over decades is an enormous task before you've even started changing anything.

Then there's how modernisations tend to be approached. There's almost always a temptation to try and fix everything at once. A big-bang delivery. But the complexity of these systems makes it incredibly difficult to break that down into manageable, incremental slices. And running large multi-year programmes is fraught with risk. Change fatigue sets in quickly, and when there's no visible delivered value to point to, confidence erodes. Eventually someone decides to pull the plug.

There's also the verification problem. How do you prove that the new system behaves exactly the same as the old one? Achieving semantic equivalence is really hard, which makes regression testing and building confidence to go live genuinely difficult, particularly when these systems are running the business and there's real fear about disruption.
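One way teams attack that verification problem in practice is a parallel run against recorded production traffic: replay the same inputs through both systems and diff the outputs. Here is a minimal sketch in Python, where legacy_quote and new_quote are hypothetical adapters standing in for calls into the old and new stacks; nothing here is specific to any one firm's setup.

```python
# A minimal parallel-run sketch, assuming recorded production inputs stored
# as JSON Lines. legacy_quote and new_quote are hypothetical adapters into
# the old and new stacks.
import json

def legacy_quote(case: dict) -> dict:
    raise NotImplementedError("adapter into the legacy system")

def new_quote(case: dict) -> dict:
    raise NotImplementedError("adapter into the replacement service")

def find_mismatches(cases_path: str) -> list[dict]:
    """Replay each recorded case through both systems; collect disagreements."""
    mismatches = []
    with open(cases_path) as f:
        for line in f:
            case = json.loads(line)
            old, new = legacy_quote(case), new_quote(case)
            if old != new:
                mismatches.append({"input": case, "legacy": old, "new": new})
    return mismatches

# An empty mismatch list over a large, representative corpus is evidence of
# semantic equivalence for the covered paths, though never a formal proof.
```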

And then there's perhaps the most underappreciated failure mode: partial success. Firms often do replace some of their legacy functionality, but until everything is replaced, the old systems can't be switched off. So they end up managing both simultaneously, and actually end up with more complexity than they started with.

What proportion of a typical FS technology budget goes on maintaining legacy systems?

The short answer: most of it. Typically somewhere between 40 and 80 percent, depending on the type of firm.

UK banks alone spend £3.3 billion every year just maintaining core banking infrastructure. Asset managers are allocating 60 to 80 percent of their technology budgets to run-the-business activities. Insurers are spending around 70 percent on legacy maintenance on average.

The problem is compounded by the cost pressures many FS firms have been navigating in recent years. As firms look to right-size their cost base, legacy maintenance budgets are often the last thing that can be touched — these are the costs keeping the lights on. So when cuts are made, it's discretionary spend and business initiatives that take the hit. Innovation gets deprioritised not because firms don't see the value, but because the legacy cost base leaves them with little choice.

What's striking is how much the total cost of ownership is still being underestimated. When you factor in not just direct costs but indirect and opportunity costs — the innovation that isn't happening, the new capabilities that can't be built, the talent that gets tied up in maintenance — the true total comes to around 3.4 times the figure most firms account for. A firm budgeting £10 million for direct maintenance may really be carrying a total cost closer to £34 million. That's a significant blind spot.

Could legacy systems actually be a strategic advantage?

It sounds counterintuitive, but there's a real case to be made, with an important nuance.

The legacy systems themselves are a liability. The speed disadvantage compared to digital-first challengers is real. Firms built on decades-old infrastructure genuinely cannot move at the pace of a nimble new entrant.

But those systems have had decades of business flowing through them. And that means decades of proprietary data: client data, transaction history, credit and lending performance, execution data, asset performance. That data is the strategic advantage. It's a huge opportunity waiting to be unlocked.

And AI has sharpened that opportunity considerably. The models themselves are available to anyone, so they're not a competitive advantage. But the proprietary data that could drive those models? That is a differentiator. A significant one.

The irony is striking: A firm's biggest liability is sitting directly on top of its biggest strategic asset.

What's the difference between an AI-native organisation and one that's bolted AI onto existing infrastructure?

The difference is fundamental, and it matters more than most firms appreciate right now.

Legacy systems — monolithic architectures, spaghetti codebases, on-premises hardware, siloed data, slow change processes — are simply not compatible with what AI adoption requires. An AI-ready architecture needs well-governed and accessible data, scalable cloud infrastructure, composable business services, real-time APIs, and security and compliance baked in from the ground up.
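To make "composable business services" and "real-time APIs" concrete, here is a minimal sketch of a single capability exposed as an independently deployable service. FastAPI and the /balances endpoint are illustrative choices on my part, not anything specified in the interview.

```python
# Minimal sketch of one composable business service: a single capability
# exposed over a real-time API, independently deployable and replaceable.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="balances-service")

class Balance(BaseModel):
    account_id: str
    currency: str
    amount: float

@app.get("/balances/{account_id}", response_model=Balance)
def get_balance(account_id: str) -> Balance:
    # In production this would read from a governed data store;
    # hard-coded here to keep the sketch self-contained.
    return Balance(account_id=account_id, currency="GBP", amount=0.0)
```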

The uncomfortable reality is that the gap between what AI makes possible and what legacy environments can actually support is only getting wider. If AI is going to reinvent the industry in the way many people expect, then modernisation becomes less of a technology programme, and more of a top-level strategic imperative.

What does "thin-slice" transformation mean?

Thin-slicing means defining end-to-end slices of genuine business value that can be delivered incrementally. Rather than trying to replace everything at once, you identify a discrete capability, deliver it fully on the new stack, and start realising real business benefits from it immediately.

One of the historic challenges with this approach is breaking down monolithic legacy architecture into those manageable slices in the first place. That's where AI has made a real difference. By helping teams better understand existing code structure, map dependencies and explore different modernisation scenarios, AI makes it possible to identify those slices with far greater confidence than before.
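The underlying idea, mapping code structure and dependencies so slices can be cut with confidence, can be illustrated with ordinary static analysis. Below is a minimal sketch using Python's ast module; real legacy estates are typically COBOL, PL/I or Java and need heavier tooling, but the principle is the same.

```python
# Minimal sketch of static dependency mapping: build a module-level call
# graph by walking Python ASTs.
import ast
from collections import defaultdict
from pathlib import Path

def call_graph(src_dir: str) -> dict[str, set[str]]:
    """Map each function to the names it calls, per source file."""
    graph: dict[str, set[str]] = defaultdict(set)
    for path in Path(src_dir).rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                caller = f"{path.stem}.{node.name}"
                for sub in ast.walk(node):
                    if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                        graph[caller].add(sub.func.id)
    return graph

# Functions that share few edges with the rest of the graph are natural
# candidates for a first thin slice.
```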

The legacy system doesn't disappear overnight. It runs alongside the new one. But it becomes progressively less central as more functionality migrates across. You're not betting everything on a multi-year programme and hoping it holds together. You're building confidence, delivering value, and reducing risk at every stage.

It's a fundamentally different philosophy from the big-bang approach, and a more resilient one.
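That run-alongside model is essentially what the engineering literature calls the strangler fig pattern (my label, not Simon's). A minimal sketch of the routing facade, with hypothetical handle_legacy and handle_new adapters:

```python
# Minimal sketch of a strangler-style facade: capabilities that have been
# migrated are routed to the new stack; everything else still hits legacy.
MIGRATED = {"payments.quote", "payments.submit"}  # grows slice by slice

def handle_legacy(capability: str, payload: dict) -> dict:
    raise NotImplementedError("call into the legacy system")

def handle_new(capability: str, payload: dict) -> dict:
    raise NotImplementedError("call into the new service")

def route(capability: str, payload: dict) -> dict:
    """Dispatch a request; legacy becomes less central as MIGRATED grows."""
    if capability in MIGRATED:
        return handle_new(capability, payload)
    return handle_legacy(capability, payload)
```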

Where does AI have the most impact on the modernisation journey itself?

What's interesting here is that AI isn't just a tool you apply at one stage of the journey. It's the catalyst, the accelerator, and the destination all at once.

It's the catalyst because asking "how do we become AI-native?" reframes the entire conversation. Suddenly modernisation isn't an IT programme. It's a strategic imperative with a clear business case behind it.

It's the accelerator because AI makes the journey itself faster and safer than it's ever been. Firms can now achieve in months what previously took years, with significantly less risk attached.

And it's the destination. Unlocking that proprietary data advantage — the decades of client, transaction and performance data sitting inside legacy systems — that's what firms are ultimately modernising towards.

In practical terms, AI accelerates each of the four stages of the modernisation journey:

  • Discovery: AI analyses code, dependencies, business logic, data lineage and risk hotspots, alongside interviews, documentation and tooling data, to build an audited knowledge base and a single source of truth. The undocumented becomes documented. The unknown becomes known.
  • Delivery: That knowledge base then powers controlled agentic delivery loops. AI supports exploration, impact analysis, requirements generation, task breakdown, implementation and automated verification, with experienced engineers providing oversight and accountability throughout.
  • Testing: AI generates tests and test data to prove semantic equivalence between old and new systems, including edge cases that would previously have been missed. Techniques like mutation testing add further confidence and rigour; a sketch of this kind of differential, generated-input testing follows the list.
  • Route to production: AI streamlines governance by producing change evidence, risk assessments, testing evidence and audit trails. Combined with thin-sliced, modular releases, deployments become safer, more reversible, and faster to value.
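As a small illustration of generated edge-case testing, here is a minimal differential test using the Hypothesis library. The legacy_fee and new_fee functions are hypothetical stand-ins for the two implementations under comparison; identical here so the test passes as written.

```python
# A differential property-based test: Hypothesis generates inputs, including
# boundary cases, and asserts both implementations agree on every one.
from decimal import Decimal
from hypothesis import given, strategies as st

def legacy_fee(amount: Decimal) -> Decimal:
    # stand-in for the legacy calculation
    return (amount * Decimal("0.0025")).quantize(Decimal("0.01"))

def new_fee(amount: Decimal) -> Decimal:
    # stand-in for the reimplemented calculation
    return (amount * Decimal("0.0025")).quantize(Decimal("0.01"))

@given(st.decimals(min_value=Decimal("0"), max_value=Decimal("1000000"), places=2))
def test_fee_semantic_equivalence(amount: Decimal) -> None:
    assert legacy_fee(amount) == new_fee(amount)
```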

 

FAQs

What's the biggest mistake firms make when starting a modernisation programme?

Underestimating how poorly understood their own systems are. Decades of accumulated business logic, written by people who've long since left, with little or no documentation — that's the starting point. Most firms don't fully reckon with that before committing to a programme.

What's the most underappreciated failure mode?

Partial success. Firms replace some legacy functionality but can't switch off the old system until everything is migrated. So they end up running both simultaneously, which adds complexity rather than reducing it.

How much do firms underestimate the cost of legacy?

Significantly. Direct maintenance costs are large enough on their own, but the indirect and opportunity costs — innovation that isn't happening, talent tied up in upkeep, capabilities that can't be built — mean the true total cost of ownership is roughly 3.4 times higher than most firms account for.

What does an AI-ready architecture require?

Well-governed and accessible data, scalable cloud infrastructure, composable business services, real-time APIs, and security and compliance built in from the start — not retrofitted. Monolithic architectures and siloed data are simply incompatible with what AI adoption requires.

How much faster is modernisation with AI?

What previously took years can now be achieved in months, with meaningfully less risk. AI accelerates discovery, delivery, testing, and deployment — turning a historically fraught, slow process into something far more tractable.

Meet the author

Simon Hull is Head of Financial Services at CreateFuture. With over 20 years in banking and wealth, including UBS, Barclays, BlackRock and Deutsche Bank, he helps firms turn AI strategy into practical, accountable change.