Six months after deployment, an AI initiative stalls. The technology still works. But upstream, a data source it relied on was reformatted when another team upgraded their system. Downstream, the team consuming the AI's output had built new workflows around it — workflows that broke when the output varied outside expected bounds. A finance team adjusted their reconciliation process around the output format; when the model updated, the reconciliation stopped matching. An adjacent process that used to rely on human judgment at a handoff point now depends on an automated decision that behaves differently under edge conditions. None of these teams were in the room when the initiative was scoped.
In Week 5, I wrote about sensitivity to initial conditions: small differences in starting conditions producing wildly different outcomes over time. The scenario above is what that sensitivity looks like in practice when AI enters your value chain. I'm calling it the cascade effect. Introduce a change at one point in your operating model and the consequences propagate, sometimes in expected directions, often not. Last week I wrote about why value chain understanding and organisational context come before technology selection. Without seeing the chain, you can't anticipate how far the cascade reaches.
This is the second and final post in the value chain engineering track. Last week covered why process understanding precedes technology selection. The decision making track (Weeks 5 and 6) established where AI decisions actually get made; this week looks at what happens when those decisions cascade through connected processes, and why the people doing the work along the way determine whether the initiative succeeds.
What isolated analysis misses
Most AI initiatives get evaluated in isolation. Can the technology handle the task? Is the data good enough? Will the team adopt it? All necessary questions, but they stop at the initiative boundary. Automating invoice processing doesn't just affect accounts payable; it changes what suppliers provide, how exceptions get escalated, what financial reporting receives, and how audit trails are maintained. Each shift has its own downstream dependencies, owned by teams who weren't consulted when the initiative was scoped.
I wrote in Week 3 about why pilots don't surface this kind of complexity; they're isolated by design. The cascade only becomes visible when the initiative meets the full operating model. By then, the budget is committed and the timelines are set.
Why cascades can't be blueprinted
The traditional response is to analyse harder upfront. Map every dependency, model every scenario, then execute. But value chain dependencies are computationally irreducible, a concept Stephen Wolfram demonstrated with cellular automata such as Rule 110: even absurdly simple rules, given iteration, produce complexity with no analytical shortcut to the result. You can't predict step 1,000 without running through steps 1 to 999.
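To make that concrete, here's a minimal sketch of Rule 110, the one-dimensional cellular automaton Wolfram uses as his canonical example. The code is illustrative only: each cell's next state depends on just three cells, yet the only way to know what the pattern looks like at step 1,000 is to compute every step before it.

```python
# Minimal sketch of Wolfram's Rule 110, a one-dimensional cellular automaton.
# Each cell's next state depends only on itself and its two neighbours, yet the
# long-run pattern has no known shortcut: you have to run every step.

RULE = 110  # the update rule, read as an 8-bit lookup table

def step(cells: list[int]) -> list[int]:
    """Apply Rule 110 once to a row of 0/1 cells (fixed zero boundaries)."""
    padded = [0] + cells + [0]
    nxt = []
    for i in range(1, len(padded) - 1):
        # Encode the three-cell neighbourhood as a number from 0 to 7,
        # then look up the corresponding bit of the rule number.
        neighbourhood = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        nxt.append((RULE >> neighbourhood) & 1)
    return nxt

def run(width: int = 64, steps: int = 24) -> None:
    row = [0] * width
    row[-1] = 1  # a single live cell as the starting condition
    for _ in range(steps):
        print("".join("█" if c else "·" for c in row))
        row = step(row)

if __name__ == "__main__":
    run()
```

Run it and structure appears that you couldn't have read off the eight-entry rule table; the point isn't the pattern itself, it's that iteration is the only route to it.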
Peter Robin Hiesinger found the same principle in neural development. The genome doesn't contain a wiring diagram for the brain; it contains rules that generate one through iterative unfolding. Each step's output becomes the next step's input. The final architecture can't be predicted from the starting conditions alone. You have to run the process.
AI adoption through value chains follows the same pattern. Changing one process affects downstream processes in ways that genuinely can't be predicted from static analysis, because each change reshapes the context for the next decision. The cascade only becomes visible as you work through it, which is why last week's argument about continuous discovery matters here. If the knowledge needed for good decisions is both distributed across people and computationally irreducible, the answer can only emerge through a process that builds understanding iteratively.
AI Readi's value chain impact analysis was built around this principle. Phase 5 (value chain impact) feeds Phase 7 (value chain engineering) because each phase's output reshapes what you look at next. The system doesn't try to predict the full cascade from a static snapshot; it surfaces the cascade iteratively through the people closest to each connection, updating as their reality changes.
The cascade reaches people
When AI changes how a workflow operates, it changes what the people conducting that work need to know, how they make decisions, and when they need to intervene. Two in three companies struggle to reimagine workflows and upskill their workforce for AI. That's the gap between treating AI adoption as a technology deployment and treating it as a change to how people work.
The cascade reaches the person doing the work, not just the process boundary. If that person can't recognise when the AI is wrong, or lacks the context to understand what changed upstream, the initiative fails regardless of how well the technology performs.
There's a difference between training someone to use an AI tool and preparing someone for how their work changes because an AI tool was deployed three teams away. The first is a skills gap. The second runs deeper: a change management problem most organisations don't recognise until it surfaces as resistance, workarounds, or quiet non-adoption. The person in finance whose reconciliation inputs suddenly look different needs to understand what changed, why, and what to do when the new format doesn't match what they expect. That's not an AI training problem.
Conventional training programmes don't cover this. When five connected processes are affected by an AI initiative, five sets of people need to understand what shifted and what their role looks like now. Most organisations budget for training the team directly touched by the deployment. They rarely budget for the teams downstream. And the people furthest from the initiative but still in the cascade, the ones who experience the change as an unexplained shift in their inputs, are usually the point where adoption breaks down.
Readiness belongs in process design, not at the end of an implementation plan as a training line item. The questions to ask: who in the cascade needs to be ready, for what specific change, and with what level of autonomy to intervene when things go sideways?
What separates the organisations that succeed
65% of high-performing organisations (those generating measurable returns from AI) have defined explicit processes for when AI outputs need human review, validation, or intervention. Among everyone else, it's 23%.
I referenced this figure in the governance track a few weeks ago, in the context of algorithmic transparency. Here it points to something different. High performers govern the handover points where AI outputs flow into human processes — identifying where in the cascade a human needs to step in and building that into the process design before deployment.
That gap tells you what "AI readiness" actually means in practice: whether the people and processes have been designed for how the technology will interact with them.
It also tells you which initiatives are worth pursuing. The cascade works as a selection tool: when you can see how far a proposed change propagates (through how many teams, systems, and handoff points) you can make a more honest assessment of whether the value justifies the disruption. An initiative that looks compelling in isolation looks different when you trace its cascade through four downstream teams, two system integrations, and a regulatory handoff that nobody flagged during evaluation. Some of those cascades are manageable. Others suggest you're buying a $200,000 AI deployment and a $2 million change management programme alongside it.
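If it helps to see the mechanics, that selection question can be sketched as graph reachability over a dependency map. Everything in the example below (the process names, the edges, the map itself) is hypothetical; in practice the map comes from the people who own each handoff, and it changes as their reality changes.

```python
# Hypothetical sketch: tracing how far a change propagates through downstream
# dependencies. The graph below is invented for illustration; a real map comes
# from the teams who own each handoff, not from a static diagram.
from collections import deque

# Downstream dependencies: "if this process changes, who consumes its output?"
DEPENDS_ON = {
    "invoice_processing_ai": ["accounts_payable", "supplier_portal"],
    "accounts_payable": ["financial_reporting", "audit_trail"],
    "supplier_portal": ["procurement"],
    "financial_reporting": ["regulatory_filing"],
}

def trace_cascade(start: str) -> dict[str, int]:
    """Breadth-first walk of the dependency map, returning every affected
    process and how many handoffs it sits from the initiative."""
    distance = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for downstream in DEPENDS_ON.get(node, []):
            if downstream not in distance:
                distance[downstream] = distance[node] + 1
                queue.append(downstream)
    return distance

if __name__ == "__main__":
    for process, hops in sorted(trace_cascade("invoice_processing_ai").items(), key=lambda kv: kv[1]):
        print(f"{hops} handoff(s) out: {process}")
```

The hop count is the crude version of the assessment; the honest version still needs the owners of each node in the room, because they hold the context a static map can't.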
The organisations that scale AI aren't running the most experiments. They're choosing initiatives whose cascades they can actually manage.
Next week starts a new track: the AI strategy paradox, and why three well-chosen use cases consistently outperform ten scattered pilots.
Sources
- "2 in 3 companies struggle to reimagine workflows and upskill AI talent" — Boston Consulting Group, AI Radar 2025: Value-Strategy Gap (2025), survey of 1,803 C-suite executives
- "65% of high-performing organisations define human-in-the-loop validation vs 23% of others" — McKinsey Global Institute, "The State of AI" (2025), 1,993 participants across 105 nations
- Computational irreducibility and Rule 110 — Stephen Wolfram, A New Kind of Science, Wolfram Media (2002)
- Iterative unfolding in neural development — Peter Robin Hiesinger, The Self-Assembling Brain, Princeton University Press (2021)
- Sensitivity to initial conditions — Edward Lorenz, MIT (1961); referenced in detail in the decision making track (Week 5)
- Network effects and cascade dependencies in complex systems — Albert-László Barabási, Linked: How Everything Is Connected to Everything Else, Basic Books (2002)