42% of companies abandoned most of their AI initiatives in 2025, up from 17% the year before. What will 2026 bring when the agentic AI craze hits business reality? Last week I wrote about pilot purgatory and how organisations get stuck cycling through proofs of concept that never reach production. This week I want to look at the usual helpers, the consultants, vendors, and system integrators that organisations bring in to fix this, and why they all hit the same wall.
The answer has nothing to do with AI. It comes from economics and behavioural science, two fields that independently arrived at the same conclusion about how knowledge works in complex systems. Understanding that conclusion explains why the 80% project failure rate persists, and why spending more on the same approaches won't change it.
Fourth in the core series. Week 1 covered the 80% failure rate. Week 2 identified the information gap as the root cause. Week 3 traced pilot purgatory back to it. This week: why the traditional helpers can't close that gap. This is the last article on the broader theory; from next week, we shift to practical applications and impact.
The knowledge that can't be centralised
In 1945, economist Friedrich Hayek identified a problem that's aged remarkably well. The knowledge needed for good decisions, he argued, never exists in concentrated form. It's scattered across many people as incomplete, frequently contradictory fragments. No central authority can possess it, no matter how smart or well-resourced.
"The knowledge of the circumstances of which we must make use never exists in concentrated or integrated form, but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess." (Hayek, 1945)
Apply this to any organisation trying to figure out AI. The front-line analyst knows which datasets are reliable. The operations manager knows which processes break under pressure. The IT architect knows the integration constraints no vendor demo accounts for. This knowledge is distributed across dozens, sometimes thousands, of people, and most of it is tacit. Scale makes it worse: as organisations grow, management layers filter information upward, usually toward optimism, and the picture that reaches leadership is incomplete and disconnected from ground truth. I covered this information gap in detail in week two.
Three approaches, one structural flaw
Consultants can deliver real depth: knowledge, planning frameworks, execution experience. But their method has a structural problem with how it gathers information. They interview 20 to 50 executives, the people farthest from operational reality. That is low diversity: a narrow stakeholder set drawn from similar seniority levels. Early interviews anchor later ones, removing independence. The consultant synthesises subjectively rather than aggregating statistically. And the entire process is centralised by design. The knowledge they capture is a fraction of what the organisation actually knows. The output is a presentation that costs anywhere from $200k to several million and is obsolete within months, and the expertise walks out the door when the consultants do.
System integrators offer something different: they build. But they build what you can articulate, and organisations can't articulate what they don't know. They answer "can we build it?" but not "should we build it?" before building starts. The requirements funnel through a centralised project scoping process, drawn from the same narrow set of stakeholders who can't access the distributed knowledge. The people closest to operational reality, the ones who know where processes actually break, rarely define what gets built. Execution without strategic alignment creates dependency and rework.
Technology vendors sell what their platform does well, regardless of organisational fit. Their incentive is vendor revenue, not your success. And there's no mechanism to combine your organisation's distributed knowledge into a decision about whether their solution fits; the vendor demo replaces that aggregation with a single showcase. Your organisation's operational reality never enters the equation. That's how you get the pilot purgatory trap from last week: technology that works in a demo meets an organisation that was never actually assessed.
The shared flaw across all three is structural, not a matter of competence or intent. None of them can access the distributed institutional knowledge that Hayek described. And they all assume you can blueprint the answer upfront, when the system actually requires something different.
The 80% project failure rate persists because every major approach to AI advisory is structurally unsuited to making good decisions.
When distributed knowledge works
So what would it take to actually use distributed knowledge well? James Surowiecki's research identified four conditions under which groups consistently outperform individual experts: diversity (different people hold different information), independence (opinions aren't influenced by what others think), decentralisation (people draw on local, domain-specific knowledge), and aggregation (a mechanism exists to combine individual judgments into a collective answer).
When all four conditions are met, the results are striking. The Iowa Electronic Markets outperformed professional polling organisations 74% of the time across five presidential elections. Ford Motor Company cut forecast error by 25% compared to expert predictions using internal prediction markets. These systems work because they aggregate distributed information that no single person possesses. Each participant brings a fragment, and the mechanism combines them into something more accurate than any individual expert could produce.
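The arithmetic behind error cancellation is easy to demonstrate. The toy simulation below is illustrative only (the true value, crowd size, and noise level are invented, and it models neither market above): many independent, unbiased estimates, averaged, land far closer to the truth than a typical individual does, because independent errors partially cancel.

```python
import random

random.seed(7)  # reproducible run

TRUE_VALUE = 1000     # the quantity everyone is estimating (assumed for illustration)
N_PARTICIPANTS = 200  # independent contributors
NOISE_SD = 150        # each estimate is unbiased but individually noisy

# Each participant holds a fragment: the true value plus an independent error.
estimates = [TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(N_PARTICIPANTS)]

# The aggregation mechanism here is the simplest possible one: a mean.
crowd_estimate = sum(estimates) / N_PARTICIPANTS

avg_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / N_PARTICIPANTS
crowd_error = abs(crowd_estimate - TRUE_VALUE)

print(f"average individual error: {avg_individual_error:.1f}")
print(f"crowd error:              {crowd_error:.1f}")
```

With independent errors, the crowd's error shrinks roughly with the square root of the number of participants. That is also why diversity and independence are load-bearing conditions: correlated errors, such as interviews anchored on earlier interviews or opinions shaped by a single vendor demo, do not cancel.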
How AI Readi solves what others can't
When all four conditions are satisfied, distributed knowledge becomes a decision-making advantage. That's the design principle behind AI Readi. Instead of centralising knowledge through interviews or demos, the platform goes where the knowledge actually lives.
Decentralised insight from every perspective
Instead of a narrow executive sample, AI Readi aggregates knowledge from the people who actually do the work. Each contributor evaluates independently, and each insight goes deep because it is decentralised: focused on extracting knowledge from that individual's own domain of expertise.
- Technical perspective — Engineers and architects evaluate integration constraints, data quality, and infrastructure readiness from direct experience.
- Business perspective — Function leaders and strategists assess alignment with objectives, resource trade-offs, and expected outcomes based on their domain context.
- Operational perspective — Process owners and front-line managers evaluate feasibility against how work actually gets done, including the workarounds and dependencies that formal documentation misses.
Confidence-weighted statistical aggregation turns these individual judgments into collective intelligence. Each phase's output feeds the next, so understanding builds through structured discovery rather than a predetermined blueprint.
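As a minimal sketch of what confidence weighting means (an illustrative mechanism, not AI Readi's actual algorithm; the function name, the 0-to-1 scales, and the example scores are all assumptions), each judgment carries a score and a self-reported confidence, and the aggregate weights scores by confidence:

```python
def aggregate(judgments):
    """Confidence-weighted mean of (score, confidence) pairs.

    Illustrative only: both values are assumed to sit on a 0-to-1 scale.
    Higher-confidence judgments pull the result harder than tentative ones.
    """
    total_weight = sum(conf for _, conf in judgments)
    return sum(score * conf for score, conf in judgments) / total_weight

# Hypothetical readiness scores for one AI use case, one per perspective.
judgments = [
    (0.8, 0.9),  # technical: integration looks feasible, high confidence
    (0.4, 0.3),  # business: value unclear, low confidence
    (0.6, 0.6),  # operational: workable, given known workarounds
]

readiness = aggregate(judgments)
print(f"aggregate readiness: {readiness:.2f}")
```

The design point is that a tentative judgment still counts, but it counts less: here the unweighted mean would be 0.60, while the confidence-weighted result is pulled toward the high-confidence technical view.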
Value chain visibility
AI Readi also surfaces how processes connect and where dependencies sit across the value chain. Most organisations can't see this because the knowledge is scattered across departments that rarely talk to each other. Aggregating across roles, functions, and levels produces a picture of the system, not a collection of isolated opinions.
Decisions built on operational reality
Insights become clearer because they're grounded in operational reality rather than filtered through management layers. And there's more trust in the outcome; it comes from broad participation across the organisation, not a narrow set of executives or an outside consultant's interpretation.
Knowledge that stays
The knowledge compounds internally. It doesn't walk out the door when an engagement ends.
That closes out the theory. The frameworks from Hayek and Surowiecki explain why the current approaches fail and what the alternative has to look like. Next week we start a new series, Where AI Decisions Get Made: why top-down AI strategies systematically miss organisational reality, and what happens when you flip the direction.
Sources
- 42% of companies abandoned most AI initiatives in 2025, up from 17% in 2024: S&P Global Market Intelligence, Enterprise AI Survey (2025)
- 80% AI project failure rate: Harvard Business Review, "Keep Your AI Projects on Track" (Bojinov, 2023)
- Prediction markets outperformed professional polling 74% of the time: Iowa Electronic Markets; Berg et al., International Journal of Forecasting (2008)
- Ford Motor Company's 25% reduction in forecast error: Cowgill & Zitzewitz, "Corporate Prediction Markets: Evidence from Google, Ford, and Firm X" (2015)
- Distributed knowledge: Friedrich Hayek, "The Use of Knowledge in Society," American Economic Review (1945)
- Four conditions for crowd intelligence: James Surowiecki, The Wisdom of Crowds, Doubleday (2004)
- Communication scaling thresholds (150, 500, 1,500, 5,000): Robin Dunbar, Oxford evolutionary psychology research