Scottish Financial Enterprise (SFE) is the membership body for the Financial Services (FS) industry in Scotland, and Reinvigoration is a member.
Earlier this week, Graham Turnbull attended an SFE event hosted by Dentons in Edinburgh titled "AI: Navigating the Regulatory Jungle." The keynote from Mayesta Ewer, Head of Digital Intelligence at the FCA, was one of the more practically useful regulatory perspectives we have encountered in some time. We wanted to share the substance of it here, because it speaks directly to something we discuss with FS operational leaders every week: the gap between AI ambition and AI delivery.
The headline from the FCA is unambiguous: no new AI-specific regulation is coming. The FCA has reaffirmed, consistent with its public position since at least December 2025, that existing frameworks are well-equipped to cover AI, and that firms should treat AI governance as a natural extension of good governance overall. That means assessing AI outcomes for customers, managing data security in AI-enabled environments, and building the internal infrastructure to evaluate and approve use cases responsibly before they reach production. The FCA's position is not a blank cheque for unrestricted adoption; it is an expectation that firms demonstrate genuine governance maturity as they move forward.
According to the Bank of England and FCA's own survey, 75% of UK financial services firms are already using AI, with a further 10% planning to adopt within three years. That is significant reach. The deployment reality, however, tells a rather different story. According to the 2026 Riverbed Global Survey, only 40% of organisations in the financial sector consider themselves ready to operationalise AI, just 12% of AI initiatives have achieved full enterprise-wide deployment, and 62% remain stuck in pilot or development phases.
The same survey found that 92% of decision-makers agree that improving data quality is critical to AI success, yet only 43% are fully confident in the accuracy and completeness of their organisation's data. In our experience, this is where many AI programmes quietly stall. When the underlying operational environment is fragmented, with inconsistent data, split ownership and process variability baked in across years of accumulated change, AI tends to inherit those problems rather than resolve them.
According to Lloyds' 2025 Financial Institutions Sentiment Survey, 59% of institutions now see measurable productivity gains from AI, up from 32% a year earlier. That trajectory is encouraging, though it also means that 41% are not yet seeing those gains. Most of the firms recording improvements have achieved them in contained, well-defined workflows rather than across fragmented end-to-end operations, which is a meaningful distinction for anyone thinking about scale.
What stood out most from the keynote was the framing around agentic AI. Mayesta's message was clear:
Firms should manage AI agents with the same rigour they would apply to managing a team of people. Monitor outcomes. Hold them accountable. Do not deploy and assume.
According to Burges Salmon's 2025 sector analysis, around a third of banks are already piloting agentic AI, with around half of those pilots expected to go live in 2026. In other words, many firms are moving into this territory before the governance frameworks needed to support it are properly in place.
The analogy offered for building AI user capability was a particularly useful one. Think of it less like training someone on a new software tool, and more like developing a first-time people manager. A new manager must learn to delegate clearly, monitor progress, review the output, and then challenge it, rather than accepting the first draft or automating the review away. The image used was of shaping a sculpture iteratively, not accepting whatever emerges from the first cut. It is a useful frame because it positions AI capability as a skilled, disciplined activity rather than a switch to be flicked.
This is a new and distinct organisational capability, and most firms have not yet built it in any deliberate way. Research from Aveni's Transformation Nation report found that firms will need to redesign processes, strengthen oversight frameworks, and improve collaboration with technology providers and regulators if they are to move beyond isolated pilots to sustainable AI value.
The FCA outlined a range of collaborative services available to firms, including Innovation Sprints and AI Live Testing initiatives, which allow firms to validate models under regulatory oversight in controlled, real-market scenarios. These are not theoretical offerings. They provide structured, lower-risk environments in which organisations can test ideas, engage directly with the regulator, and reduce the cost and risk of AI experimentation before committing to scale.
Graham had a useful conversation with Mayesta after the session about how consulting partners and their clients can access and participate in these services. We are actively exploring how we can support clients to engage with them as part of identifying and validating AI use cases within their operational transformation programmes. If your firm is not yet aware of what is on offer, it is worth finding out.
The FCA is not looking to constrain AI adoption. It is looking to shape it into something that works sustainably for customers and for firms. The firms best positioned to respond to that are those who approach AI not as a replacement for operational rigour, but as something that becomes genuinely valuable once that rigour is in place.
Getting the sequencing right matters considerably here. Simplifying the operating model, clarifying ownership and reducing process variability creates the conditions in which AI can deliver real value, rather than amplifying the inconsistency that already exists. According to Deloitte's Financial AI Adoption Report, only 38% of AI projects in financial services meet or exceed ROI expectations, with over 60% of firms reporting significant implementation delays. In most cases that reflects the consequences of deploying AI into operations that were not yet ready to support it.
The overall direction of travel from the FCA is a constructive one, and the Edinburgh session reinforced that the regulator sees itself as a collaborative partner in navigating this carefully. We would welcome a conversation with any FS operational leader who is thinking through these questions.