AI Adoption

The top-down trap: why executive AI strategies miss organisational reality

Executive sponsorship is essential. Imposing direction without discovery is the problem.

Angel Horvat · March 18, 2026 · 6 min read

Executive sponsorship has been the number one contributor to change success in every Prosci benchmarking study since 1998. Projects with effective sponsors are six times more likely to meet objectives. Overall project success rates shift from around 25% to 85% with active sponsorship. You already know this; it's one of the least controversial findings in change management.

But somewhere along the way, "secure executive sponsorship" turned into "let leadership set the direction." A leader decides to adopt an AI use case, change the marketing platform, or go with a particular vendor, and pushes it down. Or a consultant recommends a direction and leadership runs with it. Either way, the people who know the operational reality never inform the decision. That's a very different activity from sponsorship, and confusing the two explains a pattern most people in large organisations have lived through: initiatives with full leadership backing that still fail to deliver.

This post opens a new track in the core series. The first four weeks covered the AI adoption crisis: the 80% failure rate, the information gap, pilot purgatory, and why consultants, vendors, and integrators can't close the gap. This track shifts to where AI decisions actually get made, starting with why the knowledge to set good direction rarely sits where the authority to sponsor does.

Sponsorship and direction need different knowledge

Every major transformation needs executive sponsorship. Budget allocation, political cover, cross-functional authority, accountability for outcomes. This part is settled. Without it, initiatives die from resource starvation or organisational antibodies.

But sponsorship means championing, removing obstacles, and holding people accountable for results. Setting direction means deciding what to adopt, where to deploy it, and which processes to change, and that requires understanding the downstream impacts, which processes need to change, and what it actually takes for the direction to hold course once it hits operations. These draw on very different knowledge. The executive who should champion a transformation isn't the person who knows that the claims processing team handles exceptions through a workaround nobody documented, that operations stopped trusting two data fields two years ago, or that the process in Building B exists because a previous system migration never fully landed. That's where direction either holds or falls apart, and the knowledge to get it right lives with the people doing the work.

Each management layer filters information upward. Usually toward optimism. The front-line team knows the integration will break three downstream processes; by the time that detail reaches the steering committee, it's become "some implementation risks to manage." Hayek described this in 1945 and I wrote about it in weeks two and four: the knowledge needed for good decisions is distributed across many people as incomplete, frequently contradictory fragments.
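To make the filtering concrete, here's a toy sketch (my own illustration, not a model from this post): each layer summarises the risk before passing it up, and each summary dampens the reported severity by an assumed factor.

```python
# Toy model of upward filtering: each management layer summarises the risk
# and, in doing so, dampens its reported severity. The 0.6 factor per layer
# is an illustrative assumption, not an empirical number.
def report_upward(severity: float, layers: int, dampening: float = 0.6) -> float:
    """Severity on a 0-10 scale after passing through `layers` summaries."""
    for _ in range(layers):
        severity *= dampening
    return severity

frontline_view = 9.0  # "the integration will break three downstream processes"
steering_view = report_upward(frontline_view, layers=4)
print(f"{steering_view:.2f}")  # 9.0 * 0.6**4 ≈ 1.17: "some risks to manage"
```

No single layer lies; each just rounds off a little alarm, and the rounding compounds.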

[Figure: how information degrades in both directions through the management chain.]

And this is where the confusion does damage. In practice, leadership rarely sets direction based on operational insight. They contract consulting firms or delegate to middle-to-upper management, which is still too far from the people doing the work. The plan gets built on consultant financial models, analyst reports, vendor demos, and peer benchmarks. Employees sometimes call it "strat(egy) maths": it looks rigorous on a slide but has little in common with operational reality. The plan solves a generalised version of the organisation's problem. The implementation team then discovers the actual problem is different: processes work differently than documented, data quality varies by region, informal workarounds carry institutional knowledge that nobody wrote down. So they retrofit. The retrofit creates new workarounds. The workarounds create technical debt. The project succeeds on paper but delivers a fraction of the expected value at a multiple of the planned cost.

This pattern isn't unique to AI. ERP implementations, CRM rollouts, process reengineering, digital transformation programmes: same dynamic. A 2024 study in Public Money & Management compared top-down and bottom-up digitisation programmes and found that top-down approaches contribute to standardisation but create problems with local adaptation, acceptance, and resistance to change. The technology works; the mismatch between mandate and operational reality, however, compounds and erodes the potential benefits.

Why similar organisations get different results

There's a related misconception that makes this worse. If Organisation A succeeds with approach X, Organisation B (same industry, similar size, similar budget) should too. This is the premise behind benchmarking, best-practice frameworks, and vendor case studies. And it's wrong more often than people expect.

Organisations that look similar on paper have very different internal realities. Different informal processes born from different historical constraints. Different workarounds that encode different institutional knowledge. Different tacit expertise distributions. Different political dynamics around who owns what.

Complexity science has a name for this: sensitivity to initial conditions. Lorenz showed in 1961 that in complex systems, arbitrarily small differences in starting conditions produce wildly different trajectories over time. Organisations are complex adaptive systems. The Santa Fe Institute's foundational work on complexity demonstrates that value in these systems emerges from capability interactions, not from individual components. The same technology, same methodology, same budget produces different outputs because the system conditions aren't the same.
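A minimal way to see sensitivity to initial conditions is the logistic map in its chaotic regime. This is a standard textbook illustration, not anything specific to the organisations discussed here:

```python
# Logistic map at r = 4 (chaotic regime): two trajectories that start a
# millionth apart stay close at first, then diverge completely.
def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000, 40)
b = logistic_trajectory(0.300001, 40)  # differs only in the sixth decimal place

early_gap = abs(a[1] - b[1])  # still tiny after one step
late_gap = max(abs(x - y) for x, y in zip(a[20:], b[20:]))  # decorrelated later
print(early_gap, late_gap)
```

Same rule, same parameter, near-identical starting points, wildly different paths: that's the structural analogy for two similar-looking organisations running the same playbook.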

Recent research in the Strategic Management Journal confirms this at the firm level: when companies imitate a competitor's strategy, they replicate different components because their internal interdependencies vary. When one component interacts differently with other parts of the strategy, the consequences of copying it become harder to predict. What works in one context fails in another, even when both organisations look structurally identical from the outside.

And these aren't static differences. As organisations grow past Dunbar's thresholds (150, 500, 1,500 people), small initial variations in how teams work, how information flows, and how decisions get made compound through each layer. What started as a minor process variation at 50 people becomes a fundamentally different operating model at 1,500. These gaps come from context, not technology. And organisational structures themselves appear and disappear all the time with technology waves, market shifts, and legislative and leadership changes: the rise of data and privacy teams, the fall of digital transformation offices, the emergence of AI centres of excellence.
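A back-of-the-envelope sketch of that compounding, with assumed numbers (the 5% starting divergence and the 1.5x amplification per growth stage are mine, purely illustrative):

```python
# Hypothetical: two teams start with a 5% divergence in how they work.
# Each growth stage adds layers and handoffs that amplify it by an
# assumed 1.5x, so the gap roughly quintuples by 1,500 people.
divergence = 0.05
for headcount in [50, 150, 500, 1500]:
    print(f"{headcount:>5} people: {divergence:.1%} divergence")
    divergence *= 1.5
```

The exact multiplier doesn't matter; the point is that any per-stage amplification above 1.0 turns a rounding-error difference into a different operating model.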

Benchmarks are useful for general, high-level views. They tell you what's happening in the market, where investment is flowing, what broad patterns look like. But they fall apart the deeper and more specific you get. Benchmarking against peers assumes the starting conditions are comparable when they're often not. Two hospitals, two banks, two manufacturers that look similar on every measurable dimension can have radically different outcomes from the same initiative because the unmeasured dimensions (the tacit knowledge, the informal processes, the political dynamics) diverge.

What leadership should actually do

This points to a clear division of labour. Leadership should sponsor: champion the initiative, secure resources, hold accountability for outcomes, remove organisational barriers. But the knowledge to set good direction, the understanding of what will actually work in this specific organisation with its specific processes and its specific people, comes from the people who do the work.

Acknowledging that your organisation's context is unique matters. Benchmarks and case studies from similar organisations tell you what's possible. They don't tell you what's right for your specific operational reality. The only way to know that is to go where the knowledge lives: the teams closest to processes, data, and customers.

Structured aggregation of that distributed knowledge produces better decisions than filtered executive summaries. That's why we built the contributor model: it's an alignment mechanism where business and strategy, technology, and operations each contribute from their own perspective. Each brings knowledge the others don't have, and the structure surfaces both consensus and disagreement. Leadership sponsors the process, but the knowledge comes from the people who actually do the work.

This week's argument is that the knowledge to set good direction rarely sits where the authority to sponsor does. Next week I'll look at the other side: if top-down direction misses reality, what does bottom-up discovery actually look like in practice?

Next week: bottom-up discovery — where organisational knowledge actually lives, and how structured aggregation reveals what top-down direction misses.


Sources
  • Executive sponsorship #1 contributor to change success since 1998, projects with effective sponsors 6x more likely to meet objectives: Prosci, Best Practices in Change Management, benchmarking studies 1998-2025
  • Overall project success rates shift from ~25% to ~85% with active executive sponsorship: Prosci, Best Practices in Change Management (2018)
  • Top-down vs bottom-up digitisation comparison: Public Money & Management, Taylor & Francis (2024), "Top-down or bottom-up digital transformation? A comparison of institutional changes and outcomes"
  • Sensitivity to initial conditions: Edward Lorenz, MIT (1961); foundational concept in chaos and complexity theory
  • Organisations as complex adaptive systems: Santa Fe Institute, "Foundational Papers in Complexity Science: Updated Edition" (2024)
  • Firms imitating same strategy replicate different components: Balachandran, Strategic Management Journal, Wiley (2025), "When mimicry leads to divergence"
  • Communication scaling thresholds (150, 500, 1,500): Robin Dunbar, Oxford evolutionary psychology research
  • Distributed knowledge: Friedrich Hayek, "The Use of Knowledge in Society," American Economic Review (1945)

AI Readi's contributor model aggregates what your organisation actually knows, from the people who do the work, not filtered through management layers or consultant financial models.
