Last week I wrote about the cascade effect: how one AI initiative ripples through connected processes, teams, and systems. Each initiative needs deep process redesign, dependency analysis, and change management to survive that cascade. Run too many at once, and none of them gets it.
This is the first of two posts on the AI strategy paradox. The previous track covered value chains: why process understanding precedes technology selection, and how changes cascade through connected processes. This track addresses what comes next: choosing what to pursue, and in what order.
Why scattered fails
Organisations focused on a few AI initiatives see 2.1× better ROI than those running six or more scattered experiments. The statistic is widely cited by now. The mechanism behind it is more interesting than the number itself.
Every AI initiative requires deep familiarity with the process it touches: the upstream dependencies, the downstream consumers, the workarounds people have built, the handoff points where things go sideways. Last week's post on the cascade effect covered why. That understanding takes sustained attention from people who know the domain. It doesn't parallelise well.
When a central team runs six pilots across six functions, they're context-switching constantly. Each function has its own processes, dependencies, stakeholders, and data quirks. Every switch means rebuilding context: who owns what, where the data lives, which stakeholders matter and which ones are just copied on emails. That overhead is invisible. Nobody budgets for it. But it's the main reason scattered portfolios produce scattered results.
Enterprises currently deploy AI across an average of three business functions. The pattern is usually the same: a central team or innovation lab picks a handful of pilots based on executive interest, then runs them across departments. Each pilot gets surface-level process understanding. Enough to build a demo, not enough to redesign the workflow.
Why 3-5 per function
Each function runs different processes, faces different friction, and sits at a different stage of AI maturity. The knowledge of what to fix (which handoffs break, where workarounds exist, what data is unreliable) lives with the people doing the work. I wrote in week two about the information gap: the knowledge that matters most for AI adoption is distributed across the people closest to the process. They're the ones who should be choosing what to pursue.
Three to five initiatives per function is the range that works. More than five and you're back to dilution, spreading the function's attention across too many simultaneous changes. Each initiative needs process redesign, change management, and dependency analysis. That kind of work doesn't survive being split across a dozen priorities. The sweet spot gives each initiative the bandwidth for what comes next.
What comes next is workflow redesign. Dropping an AI tool into an existing process doesn't change how people work; it just adds a step to a workflow built around the old constraints. The value comes from redesigning the workflow around what the technology makes possible. The cascade effect is why this matters: each initiative touches upstream inputs and downstream consumers, and the redesign has to account for how those connections shift. Once you've selected the technology, value chain analysis maps what will actually change: which processes need redesign, which teams are affected, where the cascade reaches.
The team that selected the initiative because they live inside the process is the team best equipped to do that redesign. They already know which upstream inputs are unreliable, which downstream teams depend on their output format, and where a human needs to step in. That understanding can't be imported from a consulting engagement or a central transformation office. It's already there. It just needs the space to be applied.
For organisations where a central technology team manages AI across functions, the same logic applies. That team serves internal clients; each function is effectively a separate customer. Give each client three to five deep engagements rather than shallow coverage across everything, and the total output goes up.
What changes when you get this right
84% of companies haven't redesigned jobs around AI capabilities. The gap makes more sense when you look at how most AI portfolios are structured: initiatives chosen centrally, landing in functions where the team had no say in the selection and no ownership of how work actually changes. Job redesign becomes a line item on somebody else's plan, disconnected from the people who know what the job actually involves.
When the domain owns the initiative, workflow redesign is part of the process from the start. The team that identified the friction is the team redesigning how work gets done. They don't need a handoff document explaining the process they already live inside.
75% of organisations in the World Economic Forum's 2026 MINDS programme, the cohort that has systematically implemented AI, reinvest their returns to expand into new functions. The pattern is consistent: go deep in a few domains, prove results, then extend once the playbook exists. The focused start creates the evidence base for expansion. Organisations that end up with AI working across many functions are usually the ones that started with a few, deliberately.
But even well-chosen initiatives can fail in the wrong sequence, when one depends on foundations that another should have laid first.
Next week: why the order you pursue initiatives matters more than which ones you choose, and how dependency-aware sequencing turns a scattered portfolio into a building sequence.
Sources
- 2.1× more ROI from strategic focus on fewer initiatives vs 6+ scattered experiments: Boston Consulting Group, From Potential to Profit (2025)
- Enterprises deploy AI across an average of 3 business functions: McKinsey Global Survey on AI (2025)
- 84% of companies have not redesigned jobs around AI capabilities: Deloitte AI Institute, State of AI (2026)
- 75% of MINDS organisations reinvest AI returns to expand into new functions: World Economic Forum / Accenture, Proof over Promise (2026)