Every discipline starts the minute somebody names the gap.
Institutions have been developing their governance capabilities for decades. Compliance systems, risk committees, AI ethics policies, model risk management — each emerged in reaction to a problem, and each addressed a symptom without ever touching the underlying structure.
The underlying structure is the decision chain — the sequence through which an important decision passes, from setting the institution's purpose to executing that decision in the real world. The chain has layers; it has failure points; and it has points where intent drifts from execution so gradually that nobody notices until the damage is done.
Governance gives you the accountability structure. Compliance gives you the rules to follow. AI Governance gives you the constraints the machine must operate within. None of these tells you whether the decision you are automating still aligns with your institutional purpose.
This was the gap that needed filling. Decision Engineering™ is the discipline that fills it — not by adding oversight, not by drafting more policy, but by recognising the decision chain for what it is: a system with inputs, layers, failure points, and feedback — and designing it as such.
This newsletter applies that approach to real decision-making failures. Each issue highlights a case from finance or healthcare, analyzes the exact point at which the decision chain failed, and lays out the structural solution. No theoretical concepts. No philosophy. Just the system of institutional decision making — and how to engineer it more effectively.
Why this exists. Institutions are not bad at making decisions because the people involved are incompetent or ill-intentioned. The problem lies in the architecture linking intent with execution, which was never designed. Governance frameworks tell you who is responsible. They do not tell you whether the decision being made still serves the purpose the institution was established to achieve. That gap existed before AI. AI only made it visible.
What Decision Engineering™ focuses on. Every significant decision passes through a sequence of layers — from defining institutional purpose, through strategy, intention, rules, judgement, and decision, to outcome and feedback. We call this the Decision Integrity Chain™. It consists of eight layers. Every institutional failure happens in one or more of these layers. This newsletter will identify the source of that failure — and help you fix it.
How each issue works. Every issue begins with a real failure in financial services or healthcare. No theoretical scenarios. No composite stories. Real institutions, real decisions, real consequences. The Diagnosis section isolates which layer of the DIC™ broke, and how. The Engineering Note provides one actionable remedy. The Question closes each issue with a single question worth raising at your next board meeting.
What you will gain. Not a framework for analysis. A perspective for action. Over the coming issues, you will see all eight layers of the Decision Integrity Chain™ applied to real-world failures — and learn how to make the same changes at your own institution.
Coutts terminated its services for Nigel Farage as a client after conducting an internal reputational risk assessment.
It later became evident that the decision was not made purely on financial and regulatory grounds. It also weighed whether the individual's interests aligned with the bank's values and public positioning.
All governance and risk management processes were followed. The documentation was in order. No policy was violated. Yet the decision led to the resignation of NatWest's CEO and a comprehensive FCA investigation into debanking practices.
This was not a procedural failure but an architectural one. The bank had two purposes: to provide services to eligible customers, and to protect its reputation. There was no formal mechanism to balance the two. The system defaulted to the measures it could track. Reputation prevailed. Purpose was never formally encoded.
Several NHS trusts introduced AI-driven triage and prioritisation tools to manage emergency department demand. Clinical feedback and independent reviews highlighted a consistent pattern: patients with complex, multi-condition profiles — often elderly — were being deprioritised.
The reason was not bias in the conventional sense. It was optimisation. The systems were trained to improve throughput, reduce waiting time metrics, and increase flow efficiency. Complex patients reduce throughput. The system learned exactly that.
No rule was violated. No guideline was broken. The model performed as designed. But emergency care is not designed to optimise flow. It is designed to prioritise clinical need. The optimisation function and the institutional mandate diverged — and no one forced reconciliation.
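The divergence is mechanical, not malicious, and a toy model makes it concrete. The sketch below is purely illustrative (the patients, acuity scores, and durations are invented, not taken from any trust's actual system), but it shows how an objective that rewards throughput puts exactly the patient clinical need would see first at the back of the queue.

```python
# Toy illustration: the same patients, ranked under two objectives.
# All names, acuity scores, and durations are invented for illustration.

patients = [
    # (description, acuity 1-5 [5 = most urgent], expected minutes in department)
    ("sprained ankle",            1,  30),
    ("chest pain, otherwise fit", 4,  90),
    ("elderly, multi-condition",  5, 240),
]

def throughput_score(p):
    # Higher score = seen sooner. Optimises flow: short cases jump the queue.
    _, _, minutes = p
    return 1.0 / minutes

def clinical_need_score(p):
    # Higher score = seen sooner. Prioritises acuity, the institutional mandate.
    _, acuity, _ = p
    return acuity

by_throughput = sorted(patients, key=throughput_score, reverse=True)
by_need = sorted(patients, key=clinical_need_score, reverse=True)

print([p[0] for p in by_throughput])  # complex elderly patient ranked last
print([p[0] for p in by_need])        # complex elderly patient ranked first
```

Neither ordering breaks a rule. The two objective functions simply answer different questions, and only one of them matches what the institution exists to do.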
Different sectors. Different systems. Identical structural failure.
In both cases an institution deployed a system optimised toward a measurable objective — reputational risk exposure, A&E throughput — without engineering a mechanism to continuously reconcile that objective with what the institution was actually built to protect.
In the Decision Integrity Chain™, that mechanism lives at Layer 1 — Purpose.
Purpose is not a mission statement. It is the non-negotiable institutional mandate that defines what the institution exists to protect — regardless of what its systems are optimising toward.
At Coutts, Purpose and optimisation diverged. The system followed optimisation. Purpose was not in the room. At the NHS trusts, the same story. The break was not at the model. Not at the data. Not at the human who approved deployment. It was at the point where institutional purpose should have been encoded into what the system was built to do — and wasn't.
That distance is the Fiduciary Gap. And it opens at Layer 1 — before a single decision is made.
Your institution has a Purpose. Your systems have parameters. Nobody checked whether they still match.
Here is what Decision Engineering™ would have changed — at each institution, before deployment, not after the damage.
Before the reputational risk system went live, one question would have been mandatory at board level: What is the Purpose of this deployment — stated in one sentence — and does every parameter it uses serve that Purpose?
The answer would have surfaced an immediate conflict. Client eligibility obligation on one side. Reputational exposure minimisation on the other. Two objectives. No reconciliation. No defined hierarchy of which takes precedence. That conflict would have required a board decision — documented, attributed, reviewable. The system would not have gone live until it was resolved.
At the NHS trusts, the same question before launch: What is the Purpose of this deployment — and does every parameter serve that Purpose? One parameter optimised throughput. The mandate prioritised clinical need. Contradiction. The Purpose-to-parameter analysis would have surfaced this before a single patient was triaged — and subordinated the throughput parameter to clinical need before deployment.
Just one sentence, approved by the board and visible to every auditor and operator. If the people running the system cannot produce it within thirty seconds, the purpose is implied, not defined. An implied purpose is not enough. It must be codified.
Every optimisation parameter must trace back to the Purpose statement. Any parameter that cannot must be removed, or the statement amended. The mapping must be revisited whenever a parameter changes or the model is retrained.
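The traceability rule can be made mechanical. The sketch below is illustrative only (the Purpose statement, parameter names, and rationales are invented): each optimisation parameter carries a recorded rationale linking it to the Purpose, and the audit flags any parameter without one for removal or for amendment of the statement.

```python
# Minimal sketch of a Purpose-to-parameter traceability audit.
# The Purpose statement, parameters, and rationales are invented examples.

PURPOSE = "Deliver emergency care in order of clinical need."

# Each optimisation parameter maps to the rationale linking it to the Purpose.
# An empty rationale means no traceability has been recorded.
parameters = {
    "acuity_weight": "Directly ranks patients by clinical need.",
    "deterioration_risk_weight": "Escalates patients likely to worsen while waiting.",
    "throughput_weight": "",  # no recorded link to the Purpose
}

def audit(purpose: str, params: dict) -> list:
    """Return the parameters with no recorded link to the Purpose statement."""
    return [name for name, rationale in params.items() if not rationale.strip()]

untraceable = audit(PURPOSE, parameters)
print(untraceable)  # candidates for removal, or grounds to amend the statement
```

The point is not the code but the discipline it encodes: run the audit at deployment, and rerun it whenever a parameter changes or the model is retrained.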
One question, posed by one named person with board visibility, every quarter: has anything changed that means this system's parameters no longer serve the Purpose they were designed to serve? This is not a model evaluation. It is a mandate evaluation. The two should never be run by the same function.
These are not technology interventions. They do not require a new team or a new budget. They require an institution that has decided to engineer the connection between its Purpose and the systems it builds to serve it. That is Decision Engineering™.
For every AI system your institution operates today — can you produce the Purpose it was designed to serve, and demonstrate that its current parameters still serve that Purpose?
If that question takes longer than thirty seconds to answer, the Fiduciary Gap is already open.