When leaders don’t understand AI, public trust is the real casualty

When leaders don’t understand AI, public trust is the real casualty. Picture: LBC/Alamy

By Benjamin Laker

This week, the chief constable of West Midlands Police apologised to MPs after admitting he had given incorrect evidence about a decision to ban Maccabi Tel Aviv fans from a football match.


The error, he explained, arose from the use of Microsoft Copilot, which produced false intelligence about a match that never took place.

At first glance, this looks like a familiar story about artificial intelligence going wrong. But focusing on the technology misses the deeper problem. This was not a failure of AI. It was a failure of leadership.

AI did not choose to present unverified information as intelligence. A senior leader did. AI did not deny its use when questioned by Parliament. A senior leader did. And AI did not design the systems that allowed a probabilistic tool to be treated as a source of factual certainty. Leaders did.

An independent report into the incident concluded there was a “failure of leadership”. That phrase matters. It signals that the issue was not a technical glitch, but a breakdown in judgment, oversight, and accountability at the top.

This pattern is becoming increasingly familiar in organisations adopting AI at speed. Tools are introduced to improve efficiency and decision-making, but far less attention is paid to the unglamorous work of governance: clear rules about where AI can be used, training on its limits, and processes to verify outputs before they shape decisions. When those foundations are missing, errors are not just possible; they are predictable.

A growing body of research shows that people tend to over-trust automated systems, especially when those systems speak with confidence. The danger is not that AI hallucinates, but that humans stop interrogating it. That risk increases when senior leaders are removed from the day-to-day use of these tools but remain responsible for their consequences.

The result can be a quiet accountability gap. Leaders authorise systems they do not fully understand, then find themselves surprised when those systems fail publicly. In policing, where legitimacy depends on trust, that surprise comes at a high cost.

This is why the debate should not fixate on whether individuals should resign. That framing offers the comfort of blame without the discipline of learning. The more important question is whether leaders have taken responsibility for the decision systems operating in their organisations.

Accountability in the age of AI is not just about intent. It is about competence. If leaders cannot clearly explain where AI is used, how its outputs are checked, and who is responsible when it fails, then their organisation is not ready to use it in high-stakes decisions.

AI does not erode public trust on its own. Trust erodes when leaders treat powerful tools as shortcuts rather than responsibilities.

___________________

Benjamin Laker is a Professor of Leadership at Henley Business School who researches identity, legitimacy and meaning-making in the context of work and leadership development.

LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.

The views expressed are those of the authors and do not necessarily reflect the official LBC position.

To contact us, email opinion@lbc.co.uk