Artificial intelligence is now embedded across financial services and increasingly across wider mid-tier businesses. It touches credit decisions, customer communications, fraud detection, pricing, marketing, operations and risk management. In many organisations, it is already business-critical.
What is striking is how often responsibility for AI is still treated as a technical issue. Something to be owned by IT, data, or a third-party vendor. Something abstract, experimental, or not quite “real” yet.
The UK Treasury Committee’s recent report on artificial intelligence in financial services makes one thing clear: that assumption is wrong. Very wrong.
Accountability has not moved. The risk has.
The Committee’s evidence sessions and conclusions underline a position regulators have been signalling for some time. There is no imminent plan to create a neat, AI-specific regulatory regime that shields senior leaders from accountability. Instead, existing frameworks apply. The Senior Managers and Certification Regime. The Consumer Duty. Operational resilience. Governance and conduct expectations.
In simple terms, AI has arrived, but the accountability model has not changed.
If an AI-driven system causes customer harm, financial exclusion, operational failure, regulatory breach or reputational damage, responsibility does not sit with “the algorithm”. It sits with the people who authorised its use, signed off on its deployment, and benefited from its outcomes.
Boards cannot delegate that away.
“I didn’t understand it” will not save you
One of the more uncomfortable themes in the report is the tension between AI opacity and senior management accountability. Many AI systems, particularly those using machine learning, are not easily explainable, even to specialists.
That creates a dangerous gap, something I covered in an earlier insight: AI’s Hidden Perils: Why Without Strong Governance, AI Becomes a Crisis in the Making.
If a senior leader is accountable for an outcome, but cannot explain how the system that produced it works, they are exposed. The report makes it clear that regulators are alive to this issue and are not sympathetic to the idea that technical complexity absolves responsibility.
Not understanding the technology does not remove accountability. It magnifies the risk.
From a board perspective, this matters because AI failures do not announce themselves politely. They surface as complaints, press interest, regulatory questions, system outages or sudden loss of trust. By the time leadership is engaged, the issue is rarely theoretical.
It is already live.
AI has expanded the crisis surface area
Traditionally, crises were easier to categorise. A financial issue. A legal issue. A communications issue. An operational failure.
AI does not respect those boundaries.
An automated decisioning model can create consumer harm. That triggers regulatory scrutiny. Which leads to media attention. Which exposes weaknesses in governance, data handling and leadership oversight. All while systems continue to operate at scale.
The Treasury Committee highlights growing concern around AI-driven fraud, financial exclusion, cloud dependency and third-party concentration risk. These are not future hypotheticals. They are present-day vulnerabilities.
The result is that AI incidents escalate faster, cut across more functions, and overwhelm leadership teams more quickly than traditional failures. This is exactly where decision paralysis sets in.
Why fragmented advice fails under AI pressure
When an AI-related issue breaks, most organisations default to what they know. Legal advice in one corner. Technical teams in another. PR or communications brought in once the story leaks. Each advisor operating correctly within their own lane.
The problem is that AI-driven crises do not have lanes.
Legal caution can conflict with communications urgency. Technical remediation can clash with regulatory disclosure obligations. Financial exposure and reputational risk move faster than internal alignment.
The Treasury Committee report indirectly exposes this weakness. AI risk is multi-disciplinary by nature, but most organisations are still structured to respond in silos. Under pressure, that fragmentation slows response, muddies accountability, and increases damage.
This is no longer about innovation. It is about control.
Much of the public conversation around AI still frames it as an innovation challenge. How fast to adopt. How far to push. How not to fall behind competitors.
The report reframes it more bluntly. This is a governance and control issue.
Boards are expected to understand where AI is used, what risks it introduces, how it can fail, and who is accountable when it does. Not in technical detail, but in operational reality.
That requires more than policy documents and ethics statements. It requires crisis-level thinking applied before and during incidents.
Where Arx Nova fits when AI risk turns real
At Arx Nova, we work with leadership teams when issues have already escalated beyond internal containment. When uncertainty, pressure and scrutiny collide.
AI-related crises are becoming a familiar pattern. Not because leaders are reckless, but because systems evolve faster than governance, and incidents move faster than decision-making structures.
Our role is not to explain algorithms. It is to stabilise organisations.
We deploy senior leadership across legal, financial, operational and communications fronts simultaneously. We remove fragmentation, impose structure, and take control of the moving parts that overwhelm boards in the first critical days.
When accountability is personal, time is compressed, and the narrative is already forming, leadership needs clarity, not complexity.
AI has not changed that truth. It has made it more urgent.
A final thought for boards
The most dangerous assumption in this space is not that AI might fail.
It is the belief that when it does, responsibility will sit somewhere else.
It will not.
If AI is already embedded in your business, this is no longer a future risk discussion. It is a present leadership obligation.
And “I didn’t understand it” is not a defence.
Who’s behind this post?
Simon Larkin
Director & Co-Founder
Simon Larkin is a Fellow of the Chartered Institute of Marketing and a Chartered Marketer. As Co-Founder of Arx Nova, he brings over 20 years of experience in crisis communications and marketing. Simon works with leadership teams to manage reputational risk, control the narrative, and restore stakeholder confidence during periods of uncertainty.