Last week I named the five layers between AI strategy and AI build — Knowledge Bank, Accelerate, Decide, Deliver, Digital Nervous System — and promised a walkthrough built around the most common pattern: the AI use case that isn't wrong, just early. This is that walkthrough. Each layer has a job, each closes a limitation of the existing options, and the easiest way to see why all five are needed at once is to follow a pattern that recurs in nearly every AI-readiness conversation.
Second of three posts introducing the five layers. The next post closes the introduction by showing what changes when an organisation runs them together.
Many AI use cases aren't wrong. They're early.
A function leader identifies a use case that genuinely fits the function: say, a customer-service triage assistant that pre-classifies incoming tickets. The business case holds up. The technology is mature. The vendor demos are convincing. And then the organisation discovers, halfway into scoping, that the customer ticket data is fragmented across three systems with three different category taxonomies, and that nobody owns reconciling them.
The capability isn't built yet, and won't be for two quarters, when a separate data-quality initiative is supposed to land.
What happens next, in most organisations, is one of three things. The use case gets force-fit onto the existing data and produces a low-quality assistant that fails its pilot. Or it gets quietly dropped, with no record of why and no condition under which it should be revisited. Or it gets re-evaluated from scratch nine months later, by a different team, who reach the same conclusion and don't know that the org has already had this conversation.
Each of the five layers closes one piece of that failure mode.
Knowledge Bank: the shared scaffolding
The Knowledge Bank is the running start. It's the off-the-shelf scaffolding every customer gets on day one. Context schema (industry, function, role, AI maturity). Objectives library, organised by OKR pattern. Capability framework that names what an organisation has to be good at to run a given use case. Use case library, continuously growing. And it's expanding into process patterns, orchestration patterns, and operating-model templates.
Same shape across every customer. The scaffolding doesn't get rebuilt per engagement. What the organisation builds on top is its own: the actual context, the actual objectives, the actual capability map.
In the customer-service triage example, the Knowledge Bank is what supplies the use case description, the capability prerequisites (data unification, taxonomy reconciliation, queue routing logic), and the standard objective patterns the use case typically maps to. The organisation does not have to invent any of that.
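To make the shape concrete, here is a minimal sketch of what one use case library entry might look like, assuming a simple record structure. Every field name here (`capability_prerequisites`, `objective_patterns`, and so on) is an illustration, not the platform's actual schema.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One entry in the shared use case library (illustrative schema)."""
    name: str
    function: str                        # e.g. "customer service"
    description: str
    capability_prerequisites: list[str]  # what has to exist before the use case works
    objective_patterns: list[str]        # the standard OKR patterns it maps to

# The triage example, expressed against that schema.
triage = UseCase(
    name="customer-service triage assistant",
    function="customer service",
    description="Pre-classifies incoming tickets before an agent picks them up.",
    capability_prerequisites=[
        "data unification",
        "taxonomy reconciliation",
        "queue routing logic",
    ],
    objective_patterns=["reduce first-response time", "raise routing accuracy"],
)
```

The org-specific layer then binds its actual context, objectives, and capability map against entries like this instead of authoring them from a blank page.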
What it closes: every engagement starting from a blank slate, and the rebuild-from-scratch tax that comes with it. By giving organisations a standardised reference for expected outcomes against their context and capability profile, the Knowledge Bank turns the "which 3–4 use cases beat 10 pilots" question into a structured selection rather than a guess. The shared structure also means downstream contributor input slots into a frame rather than piling up as unstructured insight nobody can act on.
Accelerate: standardised matching with two paths
Accelerate is the matching engine. It takes the organisation's context (from the Knowledge Bank plus the org-specific layer) and matches it to use cases. Two starting paths.
Selection-first is for organisations that already know what they want, common at large enterprises with a function-level outcome in mind. The org enters the conversation at "we want a customer-service triage assistant"; Accelerate runs problem discovery to validate that the use case lands where leadership thinks it will, and runs value-chain analysis to size the impact across upstream and downstream processes.
Problem-first is for organisations that need to find both the problem and the solution, common at SMEs. The org enters at "customer wait times are too long"; Accelerate finds the bottleneck (queue routing, ticket misclassification, agent skill mismatch), then matches to the family of use cases that addresses each.
In the example, a large enterprise running selection-first would arrive at the customer-service triage use case directly and use Accelerate to surface the data-unification dependency. An SME running problem-first might arrive at the same use case via the wait-time symptom, but with the dependency surfaced just as early.
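A rough sketch of how the two paths might converge, with hypothetical lookup tables standing in for Accelerate's discovery and matching steps; none of these names or mappings come from the platform itself.

```python
# Hypothetical index: which use cases address which bottleneck.
BOTTLENECK_INDEX = {
    "queue routing": {"customer-service triage assistant"},
    "ticket misclassification": {"customer-service triage assistant"},
    "agent skill mismatch": {"skill-based routing assistant"},
}

# Hypothetical prerequisite lookup, as the Knowledge Bank would supply it.
PREREQUISITES = {
    "customer-service triage assistant": [
        "data unification", "taxonomy reconciliation", "queue routing logic",
    ],
}

def selection_first(wanted: str) -> tuple[str, list[str]]:
    """Enterprise path: start from a named use case; surface its
    capability prerequisites before scoping begins."""
    return wanted, PREREQUISITES.get(wanted, [])

def problem_first(bottlenecks: list[str]) -> set[str]:
    """SME path: start from diagnosed bottlenecks; match each to the
    family of use cases that addresses it."""
    return {uc for b in bottlenecks for uc in BOTTLENECK_INDEX.get(b, set())}

# Both paths land on the same use case, with the same dependency
# list surfaced equally early.
print(selection_first("customer-service triage assistant"))
print(problem_first(["queue routing", "ticket misclassification"]))
```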
The limitation it removes: the top-down trap on the selection-first path (executives picking use cases that don't fit operational reality) and the bottom-up discovery gap on the problem-first path (small orgs not knowing what AI use cases exist for their problem). The contributor-evaluation step inside Accelerate is also what aggregates the distributed knowledge that external interview rounds can't reach. One mechanism, multiple gaps closed at once.
Decide: impact first, value second
Decide is two steps in a deliberate order. First, the unfolding: it traces where the AI deployment's effects propagate, multiple hops upstream and downstream. Not two boxes and an arrow. Then the impact map is converted into a value calculation: for each affected node, what value is created or lost, at what magnitude, with what confidence.
The output of Decide is a decision: pursue, defer, reject, or revisit if context shifts. Defer and revisit are first-class options here, not afterthoughts.
In the example, Decide unfolds the customer-service triage use case and finds three downstream dependencies (data unification, taxonomy reconciliation, and a queue-routing change) that have to be in place for the value to land. Today, that finding is where the use case dies quietly. With Decide, the use case is deferred, with the conditions named.
Without this layer, the ROI exercise starts from the use case and guesses at impact, which gets the order wrong. Decide reverses it. The decision rests on where the change actually lands, not on the sponsor's optimism.
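A minimal sketch of that ordering, assuming a toy impact graph and invented per-node estimates. The point is the sequence: unfold first, value second, decision last, with defer as a first-class outcome.

```python
from collections import deque

# Hypothetical impact graph: node -> the nodes its change propagates to.
IMPACT_GRAPH = {
    "triage assistant": ["ticket classification", "queue routing"],
    "ticket classification": ["agent throughput", "escalation rate"],
    "queue routing": ["first-response time"],
}

# Hypothetical per-node estimates: (annual value, confidence 0..1).
ESTIMATES = {
    "ticket classification": (120_000, 0.8),
    "queue routing": (60_000, 0.5),
    "agent throughput": (200_000, 0.4),
    "escalation rate": (40_000, 0.6),
    "first-response time": (90_000, 0.7),
}

def unfold(start: str) -> list[str]:
    """Step 1: trace effects multiple hops out, not two boxes and an arrow."""
    seen, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        for nxt in IMPACT_GRAPH.get(node, []):
            if nxt not in seen:
                seen.append(nxt)
                queue.append(nxt)
    return seen

def decide(start: str, unmet_dependencies: list[str]) -> str:
    """Step 2: convert the impact map to value, then decide.
    Defer is a first-class outcome, not a failure state."""
    nodes = unfold(start)
    expected = sum(v * c for v, c in (ESTIMATES[n] for n in nodes))
    if unmet_dependencies:
        return f"defer (expected value {expected:,.0f}; blocked on {unmet_dependencies})"
    return f"pursue (expected value {expected:,.0f})"

print(decide("triage assistant", ["data unification", "taxonomy reconciliation"]))
```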
Deliver: the workstream nobody names
Deliver is the work strategy decks describe and integrators wait for. Three pieces.
The first is workflow redesign: what changes about how the function operates once the use case is in place. In the example, the triage assistant changes how an agent picks up a ticket; that change propagates to how managers measure agent throughput, how complex cases get escalated, how training is sequenced.
The second is accountability remapping: named owners at every handover, scope, trigger conditions, escalation paths. Without this, the workflow change is documented but not operationalised. Accountability is a redesign output, not a governance principle to declare.
The third is sequencing, captured as a dependency graph rather than a Gantt chart. Deliver computes which capabilities have to land first because three downstream initiatives depend on them. In the example, data unification and taxonomy reconciliation are sequenced ahead of the triage assistant, and the dependency is recorded so that when those land, the triage decision automatically resurfaces.
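A sketch of how the second and third pieces might be captured, with illustrative field names for the handover record and Python's standard `graphlib` doing the dependency ordering; the initiatives and owners are invented for the example.

```python
from dataclasses import dataclass
from graphlib import TopologicalSorter

@dataclass
class Handover:
    """One accountability record: a redesign output, not a principle."""
    step: str
    owner: str
    scope: str
    trigger: str
    escalation_path: str

handover = Handover(
    step="ticket pre-classification",
    owner="CS ops lead",
    scope="all inbound tickets",
    trigger="ticket created",
    escalation_path="CS ops lead -> head of support",
)

# Sequencing as a dependency graph, not a Gantt chart: each initiative
# maps to the capabilities it depends on.
DEPENDENCIES = {
    "data unification": set(),
    "taxonomy reconciliation": {"data unification"},
    "queue routing change": {"data unification"},
    "triage assistant": {"data unification", "taxonomy reconciliation"},
}

# Any valid topological order puts data unification first and the
# triage assistant last; recording the graph, rather than dates, is
# what lets the decision resurface when the upstream capabilities land.
print(list(TopologicalSorter(DEPENDENCIES).static_order()))
```

The graph, not the timeline, is the durable artefact: dates slip, but the dependency structure stays true.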
This ends the unowned operating-model decision layer. Deliver names it, scopes it, and assigns it.
Digital Nervous System: the compounding layer
The Digital Nervous System (DNS) is what keeps the deferred decision from evaporating, and what keeps the rest of an initiative from evaporating with it. Every initiative and campaign leaves a residue here: the strategic objectives behind it, the processes contributors mapped, the capabilities that surfaced as gaps, the use cases considered, the ones chosen, and the ones rejected. Each carries its rationale, the context that drove the call, and the trigger conditions for revisiting. The drift in that context over time stays attached to the decision.
In the example, the customer-service triage decision is captured as deferred-pending-data-unification, with the rationale (current taxonomy fragmentation), the trigger conditions (unification project completion, taxonomy reconciliation completion), and the projected revisit date. Two quarters later, when the data work lands, the system surfaces the deferred decision automatically. The org doesn't re-evaluate from cold.
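A minimal sketch of such a decision record and the trigger check that resurfaces it. Every field name, date, and value here is invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """One entry in the decision memory (illustrative fields)."""
    use_case: str
    status: str                    # pursue | defer | reject | revisit
    rationale: str
    trigger_conditions: list[str]  # what has to be true to revisit
    revisit_by: date

record = DecisionRecord(
    use_case="customer-service triage assistant",
    status="defer",
    rationale="ticket data fragmented across three taxonomies; nobody owns reconciliation",
    trigger_conditions=["data unification complete", "taxonomy reconciliation complete"],
    revisit_by=date(2026, 6, 30),
)

def due_for_revisit(records, completed_capabilities, today):
    """Surface deferred decisions whose triggers have fired or whose
    revisit date has passed, so nothing is re-evaluated from cold."""
    return [
        r for r in records
        if r.status == "defer"
        and (set(r.trigger_conditions) <= completed_capabilities
             or today >= r.revisit_by)
    ]

# Two quarters later, the data work lands and the decision resurfaces.
done = {"data unification complete", "taxonomy reconciliation complete"}
print(due_for_revisit([record], done, date(2026, 1, 15)))
```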
This is the layer that makes "revisit when X is built" operational. Without it, that phrase is a meeting-minute decoration that nobody reads.
Without the DNS, institutional memory evaporates and pilot purgatory reproduces itself every twelve months.
Why all five matter at once
Drop any layer and the customer-service triage example breaks at a different point.
- Without the Knowledge Bank, the use case has to be defined from scratch: six weeks of consultancy time before matching can even start.
- Without Accelerate, matching is ad hoc and biased toward whatever the loudest stakeholder wants.
- Without Decide, the data-unification dependency stays invisible until scoping. The use case launches and fails.
- Without Deliver, the dependency is identified but the sequencing and accountability stay unowned. Two quarters pass, the data work doesn't land, and nobody flags it.
- Without the Digital Nervous System, the deferred decision is captured in a deck somewhere, and the next time someone asks about customer-service AI, the question gets re-evaluated from scratch.
Each layer closes a specific limitation. The count isn't the point; the compounding across layers is.
Where this leaves us
For organisations running AI transformation with strategy decks plus integrator engagements, the gap is not abstract. It's the use case that didn't ship, the capability that wasn't built first, and the deferred decision that nobody can find. The five layers carry that work explicitly.
For the SME without an analyst, all five layers compress into a single guided flow. The discipline that historically required a consultant becomes self-serve.
Before your next AI transformation review
Pick one AI use case currently scoped in your organisation. Walk it through each layer.
Is it grounded in the Knowledge Bank, or did the team rebuild the definition from a vendor deck? Was matching standardised, or was it advocacy? Has the impact unfolding been done (multiple hops, not two)? Have workflow, accountability, and sequencing been worked through? And if the use case is going to be deferred, are the rationale and trigger conditions captured anywhere queryable?
Most organisations will find the answer is "yes" for one or two layers and "no" for the rest. That's the diagnostic.
AI Readi runs all five layers as a single platform: Knowledge Bank as shared scaffolding, Accelerate as standardised matching, Decide as impact-then-value, Deliver as workflow plus accountability plus sequencing, and the Digital Nervous System as compounding decision memory.
Next: strategy outputs depreciate, discipline appreciates. What changes when an organisation runs the five layers together — the discipline shift, what it looks like in practice, and what it costs to keep running without it.
Sources
- Dependency mapping as a core step in scoping and prioritisation: OpenAI, "From Experiments to Deployments" (2025)
- 70% of obstacles to AI adoption are people-and-process, not technology: Boston Consulting Group, "AI at Work" (2024)
- The "revisit when X is built" logic was argued in "The order you do things matters more than what you do"; F2 makes it operational across the five layers.