
The order you do things matters more than what you do

Even the right initiatives fail when pursued in the wrong order.

Angel Horvat · April 21, 2026 · 5 min read

Last week I made the case for domain-level focus: each function choosing a few well-aligned initiatives rather than the organisation spreading thin across scattered experiments. But there's a second dimension most organisations skip: the order you pursue them in.

Second and final post in a track on the AI strategy paradox. Last week covered why focus applies at the domain level, not the company level. This week: why even the right initiatives fail when pursued in the wrong order. This also closes the ten-week core series.

Four foundations enable five intermediate initiatives, which in turn enable five advanced ones. Data standardisation feeds the most downstream value, making it the natural starting point. Each completed tier creates the conditions for the next.

The invisible dimension

Most organisations pick their AI initiatives, then run them. The sequencing, when it exists, usually follows executive priority or budget cycles. Whatever has the strongest sponsorship starts first. The next one goes when it clears procurement.

The result is that initiatives run into dependencies nobody planned for. A customer service AI needs clean, consistent data across three systems, but the data quality initiative that would have fixed those inconsistencies is scheduled for next quarter. An automated approval workflow needs governance protocols that the compliance team hasn't built yet because their own AI initiative was deprioritised. Each failure looks like a technology or resourcing problem, but usually it's a sequencing one.

The information gap I wrote about in week two shows up here in a specific way: the knowledge about which initiatives depend on which foundations is distributed across teams who rarely coordinate at the planning stage. The team building the customer service AI knows they need better data, and the data team knows they need to standardise across systems. Nobody connected those two timelines.

Foundational versus dependent

AI initiatives fall into two broad categories that most portfolio planning ignores.

Foundational initiatives create capabilities that other initiatives need: data standardisation, governance frameworks, team skills, integration layers. They're rarely the most exciting projects, don't demo well, and almost never get executive sponsorship ahead of revenue-facing use cases.

Dependent initiatives require those foundations to work. An AI-powered pricing engine needs reliable data pipelines. Automated compliance needs a governance framework that defines what "compliant" means for AI decisions. Predictive maintenance won't go anywhere without sensor data integration that somebody has to build first.

When dependent initiatives launch before their foundations exist, they either stall waiting for prerequisites or build workarounds that become technical debt. Both outcomes waste resources. And both are preventable with sequencing that accounts for dependencies.
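Sequencing by dependency is, mechanically, a topological sort. As a minimal sketch, assuming a hypothetical portfolio where each initiative is mapped to the foundations it needs (the initiative names here are illustrative, not from any real plan), Python's standard-library `graphlib` produces an order in which no initiative starts before its prerequisites:

```python
from graphlib import TopologicalSorter

# Hypothetical portfolio: each initiative -> the foundations it depends on.
dependencies = {
    "pricing_engine": {"data_pipelines"},
    "automated_compliance": {"governance_framework"},
    "predictive_maintenance": {"sensor_integration"},
    "data_pipelines": {"data_standardisation"},
    "data_standardisation": set(),
    "governance_framework": set(),
    "sensor_integration": set(),
}

# static_order() yields initiatives so that every foundation
# appears before anything that depends on it.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

The useful part isn't the sort itself but the forcing function: you can't build the map without asking every team what their initiative assumes is already in place.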

Dependency mapping shows up as a core step in OpenAI's own deployment framework, right alongside scoping and prioritisation. They treat it as infrastructure for knowing what to build first.

The cascade in reverse

In week eight I described how AI changes cascade through connected processes, one initiative's effects rippling upstream and downstream through teams and systems. Dependency-aware sequencing uses the same logic, but in planning mode rather than damage assessment.

Instead of tracing how a deployed change propagates, you trace how a planned change depends on conditions that another initiative creates. What produces the data quality that three other initiatives need? Which governance framework has to be in place before autonomous decision-making can operate? How do team capabilities need to develop before a process redesign makes sense?

Value chain understanding makes this concrete. When you can see how processes connect across functions, you can see which initiatives create conditions for others. The dependency structure follows the operating model.

PE-backed companies that systematically build AI capabilities across functions see nearly 2x returns compared to those without systematic approaches. "Systematic" in that finding means initiatives that build on each other in a coherent sequence, where each success creates the conditions for the next.

What sequencing looks like in practice

Organisations focusing narrowly on technology or short-term ROI consistently struggle to scale. The ones that succeed treat AI adoption as five interconnected dimensions: organisational capability, human-AI collaboration, data quality, unified platforms, and responsible practices. That word "interconnected" does a lot of work. You can't build any one dimension independently; each depends on progress in the others.

Smart sequencing turns this into a practical question: given where you are across those dimensions, what do you build next?

Start with what enables the most. Data standardisation that feeds three downstream initiatives does more work than a standalone pilot, even if the pilot has a faster ROI projection on paper. A governance framework that clears the path for five use cases across three functions is more valuable than any single initiative it enables.
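"Enables the most" can be made concrete by inverting the dependency map and counting transitive downstream initiatives. A minimal sketch, again with hypothetical initiative names: the foundation that unblocks the largest set of downstream work ranks first, regardless of its standalone ROI.

```python
from collections import defaultdict

# Hypothetical dependency map: initiative -> foundations it needs.
dependencies = {
    "pricing_engine": {"data_pipelines"},
    "customer_service_ai": {"data_standardisation"},
    "automated_compliance": {"governance_framework"},
    "data_pipelines": {"data_standardisation"},
}

# Invert the map: foundation -> initiatives it directly enables.
enables = defaultdict(set)
for initiative, needs in dependencies.items():
    for foundation in needs:
        enables[foundation].add(initiative)

def downstream(node, seen=None):
    """Collect everything a node transitively unblocks."""
    seen = set() if seen is None else seen
    for nxt in enables.get(node, ()):
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, seen)
    return seen

# Rank candidate starting points by how much they unblock.
ranking = sorted(enables, key=lambda n: len(downstream(n)), reverse=True)
print(ranking)
```

In this toy map, data standardisation unblocks three initiatives (the customer service AI directly, plus the pricing engine via data pipelines), so it outranks the governance framework even though neither delivers revenue on its own.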

Then let each completed initiative create the foundation for the next. This is how a scattered portfolio becomes a building sequence; momentum compounds rather than each initiative starting from zero.

AI Readi's value chain impact analysis surfaces these dependencies as part of the process. When contributors across the organisation describe how their processes connect, what's upstream and downstream, and where handoffs create friction, the dependency structure becomes visible. Phase 5 feeds Phase 7 because understanding the cascade reveals which initiatives should come first.

Where this leaves us

This closes the arc the series has traced over ten weeks. The adoption crisis is an information problem before it's a technology problem. The knowledge needed for good AI decisions lives in the people doing the work, distributed across functions and processes. Understanding your value chain reveals how those processes connect, how changes cascade through them. Focus at the domain level, sequence by dependencies, and the same organisation that was running scattered pilots starts building compounding capability.

That's the paradox from the track title. It resolves when you look at the domain level rather than the org chart. Doing less per function, with more deliberation behind each initiative, compounds into more total impact than spreading everything thin.


